
Manual Database Upgrade from 9.2.0 to 10.1.0

Filed under: Upgrade from 9.2.0 to 10.1.0

Manual database upgrade from 9.2.0 to 10.1.0 on the same server.

Step 1

Prerequisite checks in the 9i database.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select count(*) from dba_objects;

  COUNT(*)
----------
     29511

SQL> @C:\oracle\ora92\rdbms\admin\utlrp.sql

PL/SQL procedure successfully completed.

Table created

Table created

Table created

Index created

Table created

Table created

View created

View created

Package created

No errors

Package body created

No errors

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

SQL> select count(*) from dba_objects;

  COUNT(*)
----------
     29511

SQL> select count(*), object_name from dba_objects where status='INVALID' group by object_name;

no rows selected

Spool the output of the query below, back up the database, and then make the modifications the tool recommends.
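For example, the tool output can be captured to a file before any changes are made (the spool file name here is only an illustration):

SQL> spool C:\upgrade\utlu101i_output.log
SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101i.sql
SQL> spool off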

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101i.sql

Oracle Database 10.1 Upgrade Information Tool    08-22-2009 21:29:58

Database:
---------
--> name:          TEST
--> version:       9.2.0.1.0
--> compatibility: 9.2.0.0.0

Logfiles: [make adjustments in the current environment]
---------------------------------------------------------
-- The existing log files are adequate. No changes are required.

Tablespaces: [make adjustments in the current environment]
------------------------------------------------------------
--> SYSTEM tablespace is adequate for the upgrade.
.... owner: SYS
.... minimum required size: 577 MB
--> CWMLITE tablespace is adequate for the upgrade.
.... owner: OLAPSYS
.... minimum required size: 9 MB
--> DRSYS tablespace is adequate for the upgrade.
.... owner: CTXSYS
.... minimum required size: 10 MB
--> ODM tablespace is adequate for the upgrade.
.... owner: ODM
.... minimum required size: 9 MB
--> XDB tablespace is adequate for the upgrade.
.... owner: XDB
.... minimum required size: 48 MB

Options: [present in existing database]
-----------------------------------------
--> Partitioning
--> Spatial
--> OLAP
--> Oracle Data Mining
WARNING: Listed option(s) must be installed with Oracle Database 10.1

Update Parameters: [Update Oracle Database 10.1 init.ora or spfile]
---------------------------------------------------------------------
WARNING: --> "shared_pool_size" needs to be increased to at least "150944944"
--> "pga_aggregate_target" is already at "25165824"; calculated new value is "25165824"
--> "large_pool_size" is already at "8388608"; calculated new value is "8388608"
WARNING: --> "java_pool_size" needs to be increased to at least "50331648"

Deprecated Parameters: [Update Oracle Database 10.1 init.ora or spfile]
-------------------------------------------------------------------------
-- No deprecated parameters found. No changes are required.

Obsolete Parameters: [Update Oracle Database 10.1 init.ora or spfile]
-----------------------------------------------------------------------
--> "hash_join_enabled"
--> "log_archive_start"

Components: [The following database components will be upgraded or installed]
--------------------------------------------------------------------------------
--> Oracle Catalog Views          [upgrade]  VALID
--> Oracle Packages and Types     [upgrade]  VALID
--> JServer JAVA Virtual Machine  [upgrade]  VALID
...The 'JServer JAVA Virtual Machine' JAccelerator (NCOMP)
...is required to be installed from the 10g Companion CD.
--> Oracle XDK for Java           [upgrade]  VALID
--> Oracle Java Packages          [upgrade]  VALID
--> Oracle XML Database           [upgrade]  VALID
--> Oracle Workspace Manager      [upgrade]  VALID
--> Oracle Data Mining            [upgrade]
--> OLAP Analytic Workspace       [upgrade]
--> OLAP Catalog                  [upgrade]
--> Oracle OLAP API               [upgrade]
--> Oracle interMedia             [upgrade]
...The 'Oracle interMedia Image Accelerator' is
...required to be installed from the 10g Companion CD.
--> Spatial                       [upgrade]
--> Oracle Text                   [upgrade]  VALID
--> Oracle Ultra Search           [upgrade]  VALID

SYSAUX Tablespace: [Create tablespace in Oracle Database 10.1 environment]
-----------------------------------------------------------------------------
--> New "SYSAUX" tablespace
.... minimum required size for database upgrade: 500 MB
Please create the new SYSAUX Tablespace AFTER the Oracle Database
10.1 server is started and BEFORE you invoke the upgrade script.

Oracle Database 10g: Changes in Default Behavior
-------------------------------------------------
This page describes some of the changes in the behavior of Oracle Database 10g from that of previous releases. In some cases the default values of some parameters have changed. In other cases new behaviors/requirements have been introduced that may affect current scripts or applications. More detailed information is in the documentation.

SQL OPTIMIZER

The Cost Based Optimizer (CBO) is now enabled by default. Rule-based optimization is not supported in 10g (setting OPTIMIZER_MODE to RULE or CHOOSE is not supported). See Chapter 12, "Introduction to the Optimizer", in the Oracle Database Performance Tuning Guide.

Collection of optimizer statistics is now performed by default, automatically for all schemas (including SYS), for pre-existing databases upgraded to 10g, and for newly created 10g databases. Gathering optimizer statistics on stale objects is scheduled by default to occur daily during the maintenance window. See Chapter 15, "Managing Optimizer Statistics", in the Oracle Performance Tuning Guide.

See the Oracle Database Upgrade Guide for changes in behavior for the COMPUTE STATISTICS clause of CREATE INDEX and for behavior changes in SKIP_UNUSABLE_INDEXES.

UPGRADE/DOWNGRADE

After upgrading to 10g, the minimum supported release to downgrade to is Oracle 9i R2 release 9.2.0.3 (or later), and the minimum value for COMPATIBLE is 9.2.0. The only supported downgrade path is for those users who have kept COMPATIBLE=9.2.0 and have an installed 9i R2 (release 9.2.0.3 or later) executable. Users upgrading to 10g from prior releases (such as Oracle 8, Oracle 8i or 9i R1) cannot downgrade to 9i R2 unless they first install 9i R2. When upgrading to 10g, by default the database will remain at 9i R2 file format compatibility, so the on-disk structures that 10g writes are compatible with 9i R2 structures; this makes it possible to downgrade to 9i R2. Once file format compatibility has been explicitly advanced to 10g (using COMPATIBLE=10.x.x), it is no longer possible to downgrade. See the Oracle Database Upgrade Guide.

A SYSAUX tablespace is created upon upgrade to 10g. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle features and products that previously required their own tablespaces, it reduces the number of tablespaces required by Oracle that you, as a DBA, must maintain.

MANAGEABILITY

Database performance statistics are now collected by the Automatic Workload Repository (AWR) database component, automatically upon upgrade to 10g and also for newly created 10g databases. This data is stored in the SYSAUX tablespace and is used by the database for automatic generation of performance recommendations. See Chapter 5, "Automatic Performance Statistics", in the Oracle Database Performance Tuning Guide.

If you currently use Statspack for performance data gathering, see section 1 of the Statspack readme (spdoc.txt in the RDBMS ADMIN directory) for directions on using Statspack in 10g to avoid conflict with the AWR.

MEMORY

Automatic PGA Memory Management is now enabled by default (unless PGA_AGGREGATE_TARGET is explicitly set to 0 or WORKAREA_SIZE_POLICY is explicitly set to MANUAL). PGA_AGGREGATE_TARGET defaults to 20% of the SGA size unless explicitly set. Oracle recommends tuning the value of PGA_AGGREGATE_TARGET after upgrading. See Chapter 14 of the Oracle Database Performance Tuning Guide.

Previously, the number of SQL cursors cached by PL/SQL was determined by OPEN_CURSORS. In 10g, the number of cursors cached is determined by SESSION_CACHED_CURSORS. See the Oracle Database Reference manual.

SHARED_POOL_SIZE must increase to include the space needed for shared pool overhead.

The default value of DB_BLOCK_SIZE is operating system specific, but is typically 8 KB (it was typically 2 KB in previous releases).

TRANSACTION/SPACE

Dropped objects are now moved to the recycle bin, where the space is only reused when it is needed. This allows 'undropping' a table using the FLASHBACK DROP feature. See Chapter 14 of the Oracle Database Administrator's Guide.
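A minimal sketch of FLASHBACK DROP (the table name is hypothetical):

SQL> drop table emp_test;
SQL> select object_name, original_name from recyclebin;
SQL> flashback table emp_test to before drop;

(PURGE TABLE emp_test; would instead release the space permanently.)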

Auto-tuning undo retention is on by default. For more information see Chapter 10, "Managing the Undo Tablespace", in the Oracle Database Administrator's Guide.

CREATE DATABASE

In addition to the SYSTEM tablespace, a SYSAUX tablespace is always created at database creation, and upon upgrade to 10g. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle features and products that previously required their own tablespaces, it reduces the number of tablespaces required by Oracle that you, as a DBA, must maintain. See Chapter 2, "Creating a Database", in the Oracle Database Administrator's Guide.

In 10g, by default all new databases are created with 10g file format compatibility. This means you can immediately use all the 10g features. Once a database uses 10g compatible file formats, it is not possible to downgrade this database to prior releases.

Minimum and default logfile sizes are larger. The minimum is now 4 MB; the default is 50 MB, unless you are using Oracle Managed Files (OMF), when it is 100 MB.

PL/SQL procedure successfully completed.
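Based on the Update Parameters and Obsolete Parameters warnings in the tool output above, the 10g init.ora/spfile that will be used in Step 3 can be adjusted before the upgrade. A sketch (values taken from the tool output; apply them to your own pfile):

shared_pool_size=150944944
java_pool_size=50331648
# remove the obsolete parameters hash_join_enabled and log_archive_start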

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\oradata\test\archive
Oldest online log sequence     91
Next log sequence to archive   93
Current log sequence           93

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> exit

Back up the complete database (cold backup).
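A sketch of what the cold backup itself might look like on this system (the destination folder is an assumption):

C:\> md D:\backup\test
C:\> xcopy /E /Y C:\oracle\oradata\test D:\backup\test

(Also copy the init.ora/spfile and the password file from the 9i home.)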

Step 2

Check the space needed, stop the listener, and delete the SID.

C:\Documents and Settings\Administrator>set oracle_sid=test

C:\Documents and Settings\Administrator>sqlplus /nolog

SQL*Plus: Release 9.2.0.1.0 - Production on Sat Aug 22 21:36:52 2009

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

SQL> conn / as sysdba
Connected to an idle instance.

SQL> startup

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

Database mounted

Database opened

SQL> desc sm$ts_avail
 Name                                      Null?    Type
 ----------------------------------------- -------- --------------
 TABLESPACE_NAME                                    VARCHAR2(30)
 BYTES                                              NUMBER

SQL> select * from sm$ts_avail;

TABLESPACE_NAME                     BYTES
------------------------------ ----------

CWMLITE 20971520

DRSYS 20971520

EXAMPLE 155975680

INDX 26214400

ODM 20971520

SYSTEM 419430400

TOOLS 10485760

UNDOTBS1 209715200

USERS 26214400

XDB 39976960

10 rows selected

SQL> select * from sm$ts_used;

TABLESPACE_NAME                     BYTES
------------------------------ ----------

CWMLITE 9764864

DRSYS 10092544

EXAMPLE 155779072

ODM 9699328

SYSTEM 414908416

TOOLS 6291456

UNDOTBS1 9814016

XDB 39714816

8 rows selected

SQL> select * from sm$ts_free;

TABLESPACE_NAME                     BYTES
------------------------------ ----------

CWMLITE 11141120

DRSYS 10813440

EXAMPLE 131072

INDX 26148864

ODM 11206656

SYSTEM 4456448

TOOLS 4128768

UNDOTBS1 199753728

USERS 26148864

XDB 196608

10 rows selected

SQL> ho LSNRCTL

LSNRCTL> start
Starting tnslsnr: please wait...
Failed to open service <OracleoracleTNSListener>, error 1060.
TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
System parameter file is C:\oracle\ora92\network\admin\listener.ora
Log messages written to C:\oracle\ora92\network\log\listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
Start Date                22-AUG-2009 22:00:00
Uptime                    0 days 0 hr. 0 min. 16 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora
Listener Log File         C:\oracle\ora92\network\log\listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))
Services Summary...
Service "TEST" has 1 instance(s).
  Instance "TEST", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

LSNRCTL> stop
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))
The command completed successfully

LSNRCTL> start
Starting tnslsnr: please wait...
TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
System parameter file is C:\oracle\ora92\network\admin\listener.ora
Log messages written to C:\oracle\ora92\network\log\listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
Start Date                22-AUG-2009 22:00:48
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora
Listener Log File         C:\oracle\ora92\network\log\listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))
Services Summary...
Service "TEST" has 1 instance(s).
  Instance "TEST", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

LSNRCTL> exit

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

C:\Documents and Settings\Administrator>lsnrctl stop

LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 22-AUG-2009 22:03:14

Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))
The command completed successfully

C:\Documents and Settings\Administrator>oradim -delete -sid test

Step 3

Install the Oracle 10g software in a different home.

Start the database with the 10g instance and carry out the upgrade process.
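On Windows, before the instance can be started from the new home, a service for the 10g home is normally created and the environment pointed at it. A sketch (the password is a placeholder; adjust paths to your installation):

C:\> set ORACLE_SID=test
C:\> set ORACLE_HOME=E:\oracle\product\10.1.0\db_1
C:\> E:\oracle\product\10.1.0\db_1\bin\oradim -new -sid test -intpwd oracle -startmode manual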

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount
ORACLE instance started.

Total System Global Area  239075328 bytes
Fixed Size                    788308 bytes
Variable Size              212859052 bytes
Database Buffers            25165824 bytes
Redo Buffers                  262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649';

File created.

SQL> shut immediate
ORA-01507: database not mounted
ORACLE instance shut down.

SQL> startup upgrade
ORACLE instance started.

Total System Global Area  239075328 bytes
Fixed Size                    788308 bytes
Variable Size              212859052 bytes
Database Buffers            25165824 bytes
Redo Buffers                  262144 bytes
ORA-01990: error opening password file (create password file)

SQL> conn / as sysdba
Connected.
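The ORA-01990 message above simply means no password file exists yet under the 10g home. One can be created with orapwd, for example (the password is a placeholder; the file name follows the Windows PWD<sid>.ora convention):

C:\> orapwd file=E:\oracle\product\10.1.0\db_1\database\PWDtest.ora password=oracle entries=5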

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace creation script shown below.)

create tablespace SYSAUX datafile 'sysaux01.dbf'
size 70M reuse
extent management local
segment space management auto
online;

Tablespace created.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOC>
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database server version is not correct for this script.
DOC>   Shutdown ABORT and use a different script or a different server.
DOC>
DOC>

no rows selected

DOC>
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database has not been opened for UPGRADE.
DOC>
DOC>   Perform a "SHUTDOWN ABORT" and
DOC>   restart using UPGRADE.
DOC>
DOC>

no rows selected

DOC>
DOC>   The following statements will cause an "ORA-01722: invalid number"
DOC>   error if the SYSAUX tablespace does not exist or is not
DOC>   ONLINE for READ WRITE, PERMANENT, EXTENT MANAGEMENT LOCAL, and
DOC>   SEGMENT SPACE MANAGEMENT AUTO.
DOC>
DOC>   The SYSAUX tablespace is used in 10.1 to consolidate data from
DOC>   a number of tablespaces that were separate in prior releases.
DOC>   Consult the Oracle Database Upgrade Guide for sizing estimates.
DOC>
DOC>   Create the SYSAUX tablespace, for example:
DOC>
DOC>   create tablespace SYSAUX datafile 'sysaux01.dbf'
DOC>       size 70M reuse
DOC>       extent management local
DOC>       segment space management auto
DOC>       online;
DOC>
DOC>   Then rerun the u0902000.sql script.
DOC>
DOC>

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered.

Session altered.

The script will run for a time that depends on the size of the database...

All packages/scripts/synonyms will be upgraded.

At the end it will show output like the following:

TIMESTAMP
--------------------------------------------------------------------

1 row selected.

PL/SQL procedure successfully completed.

COMP_ID    COMP_NAME                                STATUS    VERSION
---------- ---------------------------------------- --------- ----------
CATALOG    Oracle Database Catalog Views            VALID     10.1.0.2.0
CATPROC    Oracle Database Packages and Types       VALID     10.1.0.2.0
JAVAVM     JServer JAVA Virtual Machine             VALID     10.1.0.2.0
XML        Oracle XDK                               VALID     10.1.0.2.0
CATJAVA    Oracle Database Java Packages            VALID     10.1.0.2.0
XDB        Oracle XML Database                      VALID     10.1.0.2.0
OWM        Oracle Workspace Manager                 VALID     10.1.0.2.0
ODM        Oracle Data Mining                       VALID     10.1.0.2.0
APS        OLAP Analytic Workspace                  VALID     10.1.0.2.0
AMD        OLAP Catalog                             VALID     10.1.0.2.0
XOQ        Oracle OLAP API                          VALID     10.1.0.2.0
ORDIM      Oracle interMedia                        VALID     10.1.0.2.0
SDO        Spatial                                  VALID     10.1.0.2.0
CONTEXT    Oracle Text                              VALID     10.1.0.2.0
WK         Oracle Ultra Search                      VALID     10.1.0.2.0

15 rows selected.

DOC>
DOC>   The above query lists the SERVER components in the upgraded
DOC>   database, along with their current version and status.
DOC>
DOC>   Please review the status and version columns and look for
DOC>   any errors in the spool log file. If there are errors in the spool
DOC>   file, or any components are not VALID or not the current version,
DOC>   consult the Oracle Database Upgrade Guide for troubleshooting
DOC>   recommendations.
DOC>
DOC>   Next shutdown immediate, restart for normal operation, and then
DOC>   run utlrp.sql to recompile any invalid application objects.
DOC>
DOC>

PL/SQL procedure successfully completed.


TIMESTAMP
--------------------------------------------------------------------
COMP_TIMESTAMP DBUPG_END  2009-08-22 22:59:09

1 row selected.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
       776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PL/SQL procedure successfully completed.

Oracle Database 10.1 Upgrade Status Tool    22-AUG-2009 11:18:36

--> Oracle Database Catalog Views        Normal successful completion
--> Oracle Database Packages and Types   Normal successful completion
--> JServer JAVA Virtual Machine         Normal successful completion
--> Oracle XDK                           Normal successful completion
--> Oracle Database Java Packages        Normal successful completion
--> Oracle XML Database                  Normal successful completion
--> Oracle Workspace Manager             Normal successful completion
--> Oracle Data Mining                   Normal successful completion
--> OLAP Analytic Workspace              Normal successful completion
--> OLAP Catalog                         Normal successful completion
--> Oracle OLAP API                      Normal successful completion
--> Oracle interMedia                    Normal successful completion
--> Spatial                              Normal successful completion
--> Oracle Text                          Normal successful completion
--> Oracle Ultra Search                  Normal successful completion

No problems detected during upgrade.

PL/SQL procedure successfully completed.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP
--------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2009-08-22 23:19:07

1 row selected.

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2009-08-22 23:20:13

1 row selected.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
         0

1 row selected.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod
PL/SQL Release 10.1.0.2.0 - Production
CORE    10.1.0.2.0      Production
TNS for 32-bit Windows: Version 10.1.0.2.0 - Production
NLSRTL Version 10.1.0.2.0 - Production

5 rows selected.

Check the database and confirm that everything is working fine.
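A few final checks that can be run at this point (a sketch; raising COMPATIBLE is optional and irreversible, so do it only when a downgrade to 9.2 will never be needed):

SQL> select comp_id, comp_name, status, version from dba_registry;
SQL> select count(*) from dba_objects where status='INVALID';

And, only if you are sure you will not downgrade:

SQL> alter system set compatible='10.1.0' scope=spfile;
SQL> shutdown immediate
SQL> startup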


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host, by Deepak - 3 Comments, February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink ID 732624.1

Hi,

Just wanted to share this topic.

How to duplicate a database without connecting to the target database, using backups taken with RMAN, on an alternate host.

Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as of production>
Create the init.ora file and give db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) Startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backuppiece of controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure that the backuppieces are in the same location where they were on the production database. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backuppieces, they can be cataloged using the command:

RMAN> catalog start with '<path where backuppieces are stored>';

5) After cataloging the backuppieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:

run {
set newname for datafile 1 to '<newLocation>/system.dbf';
set newname for datafile 2 to '<newLocation>/undotbs.dbf';
...
restore database;
switch datafile all;
}
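Putting the steps together, a minimal end-to-end sketch (the SID, paths and the final recover/open resetlogs steps are assumptions, not part of the original note; adjust to your environment):

export ORACLE_SID=PROD
rman target /
RMAN> startup nomount pfile='/u01/app/oracle/initPROD.ora';
RMAN> restore controlfile from '/backups/ctrl_bkpiece';
RMAN> alter database mount;
RMAN> catalog start with '/backups/';
RMAN> run {
  set newname for datafile 1 to '/u02/oradata/PROD/system.dbf';
  set newname for datafile 2 to '/u02/oradata/PROD/undotbs.dbf';
  restore database;
  switch datafile all;
  recover database;
}
RMAN> alter database open resetlogs;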


Features introduced in the various Oracle server releases

Filed under: Features of various releases of Oracle Database, by Deepak - Leave a comment, February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) - September 2005

- Transparent Data Encryption
- Async commits
- The CONNECT role can now only connect
- Passwords for DB links are encrypted
- New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (10.1.0)

- Grid computing - an extension of the clustering feature (Real Application Clusters)
- Manageability improvements (self-tuning features)
- Performance and scalability improvements
- Automated Storage Management (ASM)
- Automatic Workload Repository (AWR)
- Automatic Database Diagnostic Monitor (ADDM)
- Flashback operations available at row, transaction, table or database level
- Ability to UNDROP a table from a recycle bin
- Ability to rename tablespaces
- Ability to transport tablespaces across machine types (e.g. Windows to Unix)
- New 'drop database' statement
- New database scheduler - DBMS_SCHEDULER
- DBMS_FILE_TRANSFER package
- Support for bigfile tablespaces of up to 8 exabytes in size
- Data Pump - faster data movement with expdp and impdp

Oracle 9i Release 2 (9.2.0)

- Locally managed SYSTEM tablespaces
- Oracle Streams - new data sharing/replication feature (can potentially replace Oracle Advanced Replication and standby databases)
- XML DB (Oracle is now a standards compliant XML database)
- Data segment compression (compress keys in tables - only when loading data)
- Cluster file system for Windows and Linux (raw devices are no longer required)
- Create logical standby databases with Data Guard
- Java JDK 1.3 used inside the database (JVM)
- Oracle Data Guard enhancements (SQL Apply mode - logical copy of primary database, automatic failover)
- Security improvements - default install accounts locked, VPD on synonyms, AES, migrate users to directory

Oracle 9i Release 1 (9.0.1) - June 2001

- Traditional rollback segments (RBS) are still available, but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "rollback segments" and size them automatically without any DBA involvement.
- Flashback query (dbms_flashback.enable) - one can query data as it looked at some point in the past. This feature allows users to correct wrongly committed transactions without contacting the DBA to do a database restore.
- Use Oracle Ultra Search for searching databases, file systems, etc. The UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.
- Oracle Nameserver is still available but deprecated in favour of LDAP Naming (using the Oracle Internet Directory Server). A nameserver proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.
- Oracle Parallel Server's (OPS) scalability was improved - now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster. Applications don't need to be cluster aware anymore.
- The Oracle Standby DB feature renamed to Oracle Data Guard. New Logical Standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.
- Scrolling cursor support: Oracle9i allows fetching backwards in a result set.
- Dynamic Memory Management - buffer pools and the shared pool can be resized on-the-fly. This eliminates the need to restart the database each time parameter changes are made.
- On-line table and index reorganization.
- VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net). VI provides fast communications between components in a cluster.
- Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.
- The Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.
- PL/SQL programs can be natively compiled to binaries.
- Deep data protection - fine grained security and auditing. Security is put at the DB level; SQL access does not mean unrestricted access.
- Resumable backups and statements - suspend the statement instead of rolling back immediately.
- List partitioning - partitioning on a list of values.
- ETL (eXtract, transformation, load) operations - with external tables and pipelining.
- OLAP - Express functionality included in the DB.
- Data Mining - Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)

- Static HTTP server included (Apache)
- JVM Accelerator to improve performance of Java code
- Java Server Pages (JSP) engine
- MemStat - a new utility for analyzing Java memory footprints
- OIS - Oracle Integration Server introduced
- PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web
- Enterprise Manager enhancements - including new HTML based reporting and Advanced Replication functionality
- New Database Character Set Migration utility included

Oracle 8i (8.1.6)

- PL/SQL Server Pages (PSPs)
- DBA Studio introduced
- Statspack
- New SQL functions (rank, moving average)
- ALTER FREELISTS command (previously done by DROP/CREATE TABLE)
- Checksums always on for the SYSTEM tablespace, allowing many possible corruptions to be fixed before writing to disk
- XML Parser for Java
- New PL/SQL encrypt/decrypt package introduced
- Users and Schemas separated
- Numerous performance enhancements

Oracle 8i (8.1.5)

- Fast Start recovery - checkpoint rate auto-adjusted to meet roll forward criteria
- Reorganize indexes/index only tables while users are accessing data - online index rebuilds
- Log Miner introduced - allows on-line or archived redo logs to be viewed via SQL
- OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication
- Advanced Queueing improvements (security, performance, OO4O support)
- User security improvements - more centralisation, single enterprise user, users/roles across multiple databases
- Virtual private database
- Java stored procedures (Oracle Java VM)
- Oracle iFS
- Resource Management using priorities - resource classes
- Hash and Composite partitioned table types
- SQL*Loader direct load API
- Copy optimizer statistics across databases to ensure the same access paths across different environments
- Standby Database - auto shipping and application of redo logs; read-only queries on the standby database allowed
- Enterprise Manager v2 delivered
- NLS - Euro symbol supported
- Analyze tables in parallel
- Temporary tables supported
- Net8 support for SSL, HTTP, HOP protocols
- Transportable tablespaces between databases
- Locally managed tablespaces - automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability
- Drop Column on table (finally!)
- DBMS_DEBUG PL/SQL package
- DBMS_SQL replaced by the new EXECUTE IMMEDIATE statement
- Progress Monitor to track long running DML, DDL
- Functional indexes - NLS, case insensitive, descending

Oracle 8.0 - June 1997

- Object Relational database: Object Types (not just date, character, number as in v7), SQL3 standard
- Call external procedures
- LOBs: more than one per table
- Partitioned Tables and Indexes: export/import individual partitions, partitions in multiple tablespaces, online/offline backup/recover individual partitions, merge/balance partitions
- Advanced Queuing for message handling
- Many performance improvements to SQL/PLSQL/OCI making more efficient use of CPU/memory; v7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2)
- Parallel DML statements
- Connection Pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users
- Improved "STAR" query optimizer
- Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM in v7)
- Performance improvements in OPS - global V$ views introduced across all instances, transparent failover to a new node
- Data Cartridges introduced on the database (e.g. image, video, context, time, spatial)
- Backup/Recovery improvements - tablespace point in time recovery, incremental backups, parallel backup/recovery; Recovery Manager introduced
- Security Server introduced for central user administration; user password expiry, password profiles allow custom password schemes; privileged database links (no need for the password to be stored)
- Fast Refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel, Replication Manager introduced
- Index Organized tables
- Deferred integrity constraint checking (deferred until end of transaction instead of end of statement)
- SQL*Net replaced by Net8
- Reverse Key indexes
- Any VIEW updateable
- New ROWID format

Oracle 7.3

- Partitioned Views
- Bitmapped Indexes
- Asynchronous read ahead for table scans
- Standby Database
- Deferred transaction recovery on instance startup
- Updatable Join Views (with restrictions)
- SQLDBA no longer shipped
- Index rebuilds
- db_verify introduced
- Context Option
- Spatial Data Option
- Tablespace changes - coalesce, temporary/permanent
- Trigger compilation, debug
- Unlimited extents on the STORAGE clause
- Some init.ora parameters modifiable - TIMED_STATISTICS
- HASH Joins, Antijoins
- Histograms
- Dependencies
- Oracle Trace
- Advanced Replication Object Groups
- PL/SQL - UTL_FILE

Oracle 7.2

- Resizable, autoextend data files
- Shrink Rollback Segments manually
- Create table, index UNRECOVERABLE
- Subquery in FROM clause
- PL/SQL wrapper
- PL/SQL Cursor variables
- Checksums - DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM
- Parallel create table
- Job Queues - DBMS_JOB
- DBMS_SPACE
- DBMS Application Info
- Sorting improvements - SORT_DIRECT_WRITES

Oracle 7.1

- ANSI/ISO SQL92 Entry Level
- Advanced Replication - symmetric data replication
- Snapshot Refresh Groups
- Parallel Recovery
- Dynamic SQL - DBMS_SQL
- Parallel Query Options - query, index creation, data loading
- Server Manager introduced
- Read Only tablespaces

Oracle 7.0 - June 1992

- Database integrity constraints (primary/foreign keys, check constraints, default values)
- Stored procedures and functions, procedure packages
- Database triggers
- View compilation
- User defined SQL functions
- Role based security
- Multiple redo members - mirrored online redo log files
- Resource Limits - Profiles
- Much enhanced auditing
- Enhanced distributed database functionality - INSERTS, UPDATES, DELETES, 2PC
- Incomplete database recovery (e.g. to an SCN)
- Cost based optimiser
- TRUNCATE tables
- Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR)
- SQL*Net v2, MTS
- Checkpoint process
- Data replication - Snapshots

Oracle 6.2

- Oracle Parallel Server

Oracle 6 - July 1988

- Row-level locking
- On-line database backups
- PL/SQL in the database

Oracle 5.1

- Distributed queries

Oracle 5.0 - 1986

- Support for the Client-Server model - PCs can access the DB on a remote host

Oracle 4 - 1984

- Read consistency

Oracle 3 - 1981

- Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)
- Nonblocking queries (no more read locks)
- Re-written in the C programming language

Oracle 2 - 1979

- First public release
- Basic SQL functionality, queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh, by Deepak - 1 Comment, December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file (a sketch of this spooling is shown after the list below).

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;
2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege || ' to sh' from session_privs;
5. select 'grant ' || role || ' to sh' from session_roles;
6. Query the default tablespace and its size:
7. select tablespace_name, sum(bytes/1024/1024) from dba_segments where owner='SH' group by tablespace_name;
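A small sketch of how the spooling in steps 3-5 might be done (run it while connected as SH; the file name is arbitrary):

SQL> set head off feedback off
SQL> spool sh_grants.sql
SQL> select 'grant ' || privilege || ' to sh;' from session_privs;
SQL> select 'grant ' || role || ' to sh;' from session_roles;
SQL> spool off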

Export the 'SH' schema:

exp username/password file=<location>\sh_bkp.dmp log=<location>\sh_exp.log owner=SH direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file=<location>\sh_bkp.dmp log=<location>\sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects or truncating objects

Export the 'SH' schema:

Take the full schema export as shown above.

Drop all the objects in the 'SH' schema.

To drop all the objects in the schema:

Connect as the schema owner.

Spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file=<location>\sh_bkp.dmp log=<location>\sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema.

To truncate all the objects in the schema:

Connect as the schema owner.

Spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the reference (foreign key) constraints and disable them. Spool the output of the query and run the resulting script.

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');

Importing the 'SH' schema:

imp username/password file=<location>\sh_bkp.dmp log=<location>\sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user:

Query the default tablespace, verify the space in the tablespace, and drop the user.

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option.

Check the imported objects and compile any invalid objects.


JOB SCHEDULING

Filed under: JOB SCHEDULING, by Deepak - Leave a comment, December 15, 2009

CRON JOB SCHEDULING - IN UNIX

To run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d /etc/cron.daily /etc/cron.hourly /etc/cron.monthly /etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day.

The time at which the scripts in those system-wide directories run is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file, you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use the name)
6. The command to run

More graphically, they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0 - 7)
|     |     |     +----------- Month (1 - 12)
|     |     +----------------- Day of month (1 - 31)
|     +----------------------- Hour (0 - 23)
+----------------------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor on your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email. If you wish to stop this, you should cause it to be redirected, as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run. So edit the scheduled task and enter the new password.


Steps to switch over a standby to primary

Filed under: Switchover primary to standby in 10g, by Deepak - 1 Comment, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM
Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your test systems before working on production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary database and the standby database to find out:
SQL> select sequence#, applied from v$archived_log;
Perform SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.
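For example, real-time apply can be started on the standby with the standard 10g command below (shown as a sketch; it requires standby redo logs to be configured):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;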

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:

- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN has now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Data Pump, by Deepak - Leave a comment, December 14, 2009

Encryption with Oracle Data Pump

- from an Oracle white paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and which provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns, for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format. Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature therefore remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows:

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set.

To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows (a sketch of an import command is shown after this list):

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.
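For illustration, an import of such a dump might look like the following (the user, directory, file and password values simply reuse the example names from this paper and are not prescriptive):

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd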

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU intensive operations. Furthermore, additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2, the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table In the following example the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump files encryption key Oracle Data Pump re-encrypts the column data in the dump file using this dump file key When re-encrypting encrypted column data Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128)Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTERSYSTEM and CREATE TABLE statements

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table-mode exports, as used in the previous examples. If a schema, tablespace, or full-mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields an ORA-29341 error, as shown in the second of the two sessions below. (The first session shows the ORA-39173 warning raised when encrypted columns are exported without an ENCRYPTION_PASSWORD, as discussed earlier.)

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"  6.25 KB  3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job of pinpointing the problem via the information in the ORA-39929 error:

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
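
For example, on the target database the wallet can be opened with a statement like the following (a sketch only; it assumes the same wallet password, "wallet_pwd", used in the earlier example):

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";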

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the case of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


Restriction Using Import Network Mode

A network-mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network-mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network-mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link


By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption
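
If you do go that route, network encryption is negotiated through sqlnet.ora settings on the client and server. The following lines are an illustration only (the algorithm choice is an assumption, not something specified here):

SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.ENCRYPTION_CLIENT = required
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)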

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed (a short sketch of the first case follows this list):

• A full metadata-only import

• A schema-mode import in which the referenced schemas do not include tables with encrypted columns

• A table-mode import in which the referenced tables do not include encrypted columns
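
As a sketch of the first case, a full metadata-only import of the dump file from the earlier examples needs neither the encryption password nor an open wallet:

$ impdp system/password DIRECTORY=dpump_dir DUMPFILE=emp.dmp FULL=y CONTENT=METADATA_ONLY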

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the SALARY column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD parameter is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
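
Conversely, to encrypt the entire dump file set in 11g rather than just the TDE columns, the ENCRYPTION parameter can be given a different keyword. The following is a sketch only, reusing the names from the example above:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ALL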


DATAPUMP

Filed under DATAPUMP Oracle 10g by Deepak mdash Leave a comment December 14 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

httpwwworaclecomtechnologyobeobe10gdbstoragedatapumpdatapumphtm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called DATA_PUMP_DIR is provided. The default DATA_PUMP_DIR is available only to privileged users unless access is granted by the DBA.
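
To see which directory objects are already defined, a privileged user can query the data dictionary, for example:

SQL> SELECT directory_name, directory_path FROM dba_directories;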

The following SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created you need to grant READ and WRITE permission on the directory to other users For example to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1 you must execute the following command

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=()

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES, and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file; a short sketch follows.
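
For instance, a schema-mode export that leaves out grants and indexes might look like the following (a sketch only; the schema and file names are placeholders, not taken from the original post):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott_noidx.dmp SCHEMAS=scott EXCLUDE=GRANT EXCLUDE=INDEX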

3) Tuning Parameters

Unlike original Export and Import which used the BUFFER COMMIT COMPRESS

CONSISTENT DIRECT and RECORDLENGTH parameters Data Pump needs no tuning to achieve maximum performance Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner Initialization parameters should be sufficient upon installation

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pumprsquos interactive command-line mode You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa)

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from the one in which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode Because Data Pump jobs run entirely on the server you can start an export or import job detach from it and later reconnect to the job to monitor its progress Here are some of the things you can do while in this mode

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
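
As a sketch (assuming a job started with JOB_NAME=hr, as in the parallelism example earlier), you could detach from the client with Ctrl+C and later reattach and drive the job interactively:

$ expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE

$ expdp username/password ATTACH=hr
Export> START_JOB
Export> CONTINUE_CLIENT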

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, such as from data marts to a data warehouse, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
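
A minimal sketch of a network-mode export (the database link name remote_db and the schema are assumptions for illustration):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=remote_hr.dmp NETWORK_LINK=remote_db SCHEMAS=hr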

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under Clone database using RMAN by Deepak mdash Leave a comment

December 10 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1Take full backup using Rman

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQLgt ho rman

Recovery Manager Release 92010 ndash Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
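
For example, with the target datafiles under C:\oracle\oradata\test and the clone files under C:\oracle\oradata\clone (both paths are assumptions for illustration), the entries might look like this sketch:

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'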

3. Start the listener using the lsnrctl command, and then start the clone DB in NOMOUNT using the pfile:

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (the target DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (the clone DB name)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running...

SQLgt select name from v$database

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQLgt ho rman

SQLgt alter database mount

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8. It will run for a while. When it completes, exit from RMAN and open the database using RESETLOGS:

SQL> alter database open resetlogs;

Database altered.

9. Check the DB name and DBID:

10. Create a temporary tablespace for the clone (a sketch follows the verification queries below).

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550
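
A sketch for step 10: the duplicated database typically comes across without a tempfile, so add one to the existing TEMP tablespace (the path and size here are assumptions):

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE 'C:\oracle\oradata\clone\temp01.dbf' SIZE 100M REUSE;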


step by step standby database configuration in 10g

Filed under Dataguard - creation of standby database in 10g by Deepak mdash Leave a comment December 9 2009

Oracle 10g ndash Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX serversand maintenance tips on the databases in a Data Guard Environment

Oracle 10g Data Guard is a great tool to ensure high availability data protection and disaster recovery for enterprise data I have been working on Data GuardSTANDBY databases using both Grid control and SQL command line for a couple of years and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago I maintain it daily and it works well I would like to share my experience with the other DBAs

In this example the database version is 10203 The PRIMARY database and STANDBY database are located on different machines at different sites The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY I use Flash Recovery Area and OMF

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

     BYTES
----------
  52428800
  52428800
  52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='...\database\pfilePRIMARY.ora' from spfile;
- On UNIX:
SQL> create pfile='.../dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '...')

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile, and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE:
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='...\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='...\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='.../dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='.../dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace '...')

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> alter database open;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, to the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4. On the STANDBY server, create all required directories for dump and archived log destinations: create the adump, bdump, cdump, and udump directories and the archived log destination for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID

11. Start up (nomount) the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='...\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='...\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='.../dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='.../dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: specify your Oracle home path to replace '...')

12. Start Redo Apply.
1) On the STANDBY database, start redo apply:

SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:

SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:

SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:

SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:

SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:

SQL> alter database recover managed standby database using current logfile disconnect;

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:

$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:

RMAN> delete backupset;

3. Password management: The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.
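
For example, the STANDBY password file can be recreated with orapwd in the same way as in section II.2 (the password is a placeholder):

$ orapwd file=pwdSTANDBY.ora password=xxxxxxxx force=y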

  • Manual Database up gradation from 9.2.0 to 10.1.0
  • Duplicate Database With RMAN Without Connecting To Target Database
  • Features introduced in the various Oracle server releases
  • Features introduced in the various server releases
    • Schema Refresh
    • JOB SCHEDULING
    • Steps to switchover standby to primary
    • Encryption with Oracle Data Pump
    • DATAPUMP
    • Clone Database using RMAN
    • step by step standby database configuration in 10g
Page 2: Manual Database Up Gradation From 9

Package body created

No errors

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects;

  COUNT(*)
----------
     29511

SQL> select count(*), object_name from dba_objects where status='INVALID' group by object_name;

no rows selected

Spool the output of the below query and do the modification as mentioned after backing up the DB

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101i.sql

Oracle Database 101 Upgrade Information Tool 08-22-2009 212958

Database

mdashmdashmdash

ndashgt name TEST

ndashgt version 92010

ndashgt compatibility 92000

Logfiles [make adjustments in the current environment]

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdash-

ndash The existing log files are adequate No changes are required

Tablespaces [make adjustments in the current environment]

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdash-

ndashgt SYSTEM tablespace is adequate for the upgrade

hellip owner SYS

hellip minimum required size 577 MB

ndashgt CWMLITE tablespace is adequate for the upgrade

hellip owner OLAPSYS

hellip minimum required size 9 MB

ndashgt DRSYS tablespace is adequate for the upgrade

hellip owner CTXSYS

hellip minimum required size 10 MB

ndashgt ODM tablespace is adequate for the upgrade

hellip owner ODM

hellip minimum required size 9 MB

ndashgt XDB tablespace is adequate for the upgrade

hellip owner XDB

hellip minimum required size 48 MB

Options [present in existing database]

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdash

ndashgt Partitioning

ndashgt Spatial

ndashgt OLAP

ndashgt Oracle Data Mining

WARNING Listed option(s) must be installed with Oracle Database 101

Update Parameters [Update Oracle Database 101 initora or spfile]

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdash-

WARNING: --> "shared_pool_size" needs to be increased to at least "150944944"

--> "pga_aggregate_target" is already at "25165824"; calculated new value is "25165824"

--> "large_pool_size" is already at "8388608"; calculated new value is "8388608"

WARNING: --> "java_pool_size" needs to be increased to at least "50331648"

Deprecated Parameters [Update Oracle Database 101 initora or spfile]

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

ndash No deprecated parameters found No changes are required

Obsolete Parameters [Update Oracle Database 101 initora or spfile]

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdash

--> "hash_join_enabled"

--> "log_archive_start"

Components [The following database components will be upgraded or installed]

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

ndashgt Oracle Catalog Views [upgrade] VALID

ndashgt Oracle Packages and Types [upgrade] VALID

ndashgt JServer JAVA Virtual Machine [upgrade] VALID

hellipThe lsquoJServer JAVA Virtual Machinersquo JAccelerator (NCOMP)

hellipis required to be installed from the 10g Companion CD

hellip

ndashgt Oracle XDK for Java [upgrade] VALID

ndashgt Oracle Java Packages [upgrade] VALID

ndashgt Oracle XML Database [upgrade] VALID

ndashgt Oracle Workspace Manager [upgrade] VALID

ndashgt Oracle Data Mining [upgrade]

ndashgt OLAP Analytic Workspace [upgrade]

ndashgt OLAP Catalog [upgrade]

ndashgt Oracle OLAP API [upgrade]

ndashgt Oracle interMedia [upgrade]

hellipThe lsquoOracle interMedia Image Acceleratorrsquo is

helliprequired to be installed from the 10g Companion CD

hellip

ndashgt Spatial [upgrade]

ndashgt Oracle Text [upgrade] VALID

ndashgt Oracle Ultra Search [upgrade] VALID

SYSAUX Tablespace [Create tablespace in Oracle Database 101 environment]

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

--> New "SYSAUX" tablespace

... minimum required size for database upgrade: 500 MB

Please create the new SYSAUX tablespace AFTER the Oracle Database 10.1 server is started and BEFORE you invoke the upgrade script.

Oracle Database 10g Changes in Default Behavior

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdash

This page describes some of the changes in the behavior of Oracle

Database 10g from that of previous releases In some cases the

default values of some parameters have changed In other cases

new behaviorsrequirements have been introduced that may affect

current scripts or applications More detailed information is in

the documentation

SQL OPTIMIZER

The Cost Based Optimizer (CBO) is now enabled by default

Rule-based optimization is not supported in 10g (setting

OPTIMIZER_MODE to RULE or CHOOSE is not supported) See Chapter

12 ldquoIntroduction to the Optimizerrdquo in Oracle Database

Performance Tuning Guide

Collection of optimizer statistics is now performed by default

automatically for all schemas (including SYS) for pre-existing

databases upgraded to 10g and for newly created 10g databases

Gathering optimizer statistics on stale objects is scheduled by

default to occur daily during the maintenance window See

Chapter 15 ldquoManaging Optimizer Statisticsrdquo in Oracle Performance

Tuning Guide

See the Oracle Database Upgrade Guide for changes in behavior

for the COMPUTE STATISTICS clause of CREATE INDEX and for

behavior changes in SKIP_UNUSABLE_INDEXES

UPGRADEDOWNGRADE

After upgrading to 10g, the minimum supported release to downgrade to is Oracle 9i R2 release 9.2.0.3 (or later), and the minimum value for COMPATIBLE is 9.2.0. The only supported downgrade path is for those users who have kept COMPATIBLE=9.2.0 and have an installed 9i R2 (release 9.2.0.3 or later) executable. Users upgrading to 10g from prior releases (such as Oracle 8, Oracle 8i, or 9i R1) cannot downgrade to 9i R2 unless they first install 9i R2. When upgrading to 10g, by default the database will remain at 9i R2 file format compatibility, so the on-disk structures that 10g writes are compatible with 9i R2 structures; this makes it possible to downgrade to 9i R2. Once file format compatibility has been explicitly advanced to 10g (using COMPATIBLE=10.x.x), it is no longer possible to downgrade. See the Oracle Database Upgrade Guide.

A SYSAUX tablespace is created upon upgrade to 10g The SYSAUX

tablespace serves as an auxiliary tablespace to the SYSTEM

tablespace Because it is the default tablespace for many Oracle

features and products that previously required their own

tablespaces it reduces the number of tablespaces required by

Oracle that you as a DBA must maintain

MANAGEABILITY

Database performance statistics are now collected by the

Automatic Workload Repository (AWR) database component

automatically upon upgrade to 10g and also for newly created 10g

databases This data is stored in the SYSAUX tablespace and is

used by the database for automatic generation of performance

recommendations See Chapter 5 ldquoAutomatic Performance

Statisticsrdquo in the Oracle Database Performance Tuning Guide

If you currently use Statspack for performance data gathering

see section 1 of the Statspack readme (spdoctxt in the RDBMS

ADMIN directory) for directions on using Statspack in 10g to

avoid conflict with the AWR

MEMORY

Automatic PGA Memory Management is now enabled by default

(unless PGA_AGGREGATE_TARGET is explicitly set to 0 or

WORKAREA_SIZE_POLICY is explicitly set to MANUAL)

PGA_AGGREGATE_TARGET is defaulted to 20 of the SGA size unless

explicitly set Oracle recommends tuning the value of

PGA_AGGREGATE_TARGET after upgrading See Chapter 14 of the

Oracle Database Performance Tuning Guide

Previously the number of SQL cursors cached by PLSQL was

determined by OPEN_CURSORS In 10g the number of cursors cached

is determined by SESSION_CACHED_CURSORS See the Oracle Database

Reference manual

SHARED_POOL_SIZE must increase to include the space needed for

shared pool overhead

The default value of DB_BLOCK_SIZE is operating system

specific but is typically 8KB (was typically 2KB in previous

releases)

TRANSACTIONSPACE

Dropped objects are now moved to the recycle bin where the

space is only reused when it is needed This allows lsquoundroppingrsquo

a table using the FLASHBACK DROP feature See Chapter 14 of the

Oracle Database Administratorrsquos Guide

Auto tuning undo retention is on by default For more

information see Chapter 10 ldquoManaging the Undo Tablespacerdquo in

the Oracle Database Administratorrsquos Guide

CREATE DATABASE

In addition to the SYSTEM tablespace a SYSAUX tablespace is

always created at database creation and upon upgrade to 10g The

SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM

tablespace Because it is the default tablespace for many Oracle

features and products that previously required their own

tablespaces it reduces the number of tablespaces required by

Oracle that you as a DBA must maintain See Chapter 2

ldquoCreating a Databaserdquo in the Oracle Database Administratorrsquos

Guide

In 10g by default all new databases are created with 10g file

format compatibility This means you can immediately use all the

10g features Once a database uses 10g compatible file formats

it is not possible to downgrade this database to prior releases

Minimum and default logfile sizes are larger Minimum is now 4

MB default is 50MB unless you are using Oracle Managed Files

(OMF) when it is 100 MB

PLSQL procedure successfully completed

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\oradata\test\archive

Oldest online log sequence 91

Next log sequence to archive 93

Current log sequence 93

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt exit

Back up the complete database (cold backup).

Step 2:

Check the space needed, stop the listener, and delete the SID.

C:\Documents and Settings\Administrator> set oracle_sid=test

C:\Documents and Settings\Administrator> sqlplus /nolog

SQLPlus Release 92010 ndash Production on Sat Aug 22 213652 2009

Copyright (c) 1982 2002 Oracle Corporation All rights reserved

SQLgt conn as sysdba

Connected to an idle instance

SQLgt startup

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

Database mounted

Database opened

SQLgt desc sm$ts_avail

Name Null Type

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashndash mdashmdashmdashmdashmdashmdashmdashmdashmdash-

TABLESPACE_NAME VARCHAR2(30)

BYTES NUMBER

SQL> select * from sm$ts_avail;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 20971520

DRSYS 20971520

EXAMPLE 155975680

INDX 26214400

ODM 20971520

SYSTEM 419430400

TOOLS 10485760

UNDOTBS1 209715200

USERS 26214400

XDB 39976960

10 rows selected

SQL> select * from sm$ts_used;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 9764864

DRSYS 10092544

EXAMPLE 155779072

ODM 9699328

SYSTEM 414908416

TOOLS 6291456

UNDOTBS1 9814016

XDB 39714816

8 rows selected

SQL> select * from sm$ts_free;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 11141120

DRSYS 10813440

EXAMPLE 131072

INDX 26148864

ODM 11206656

SYSTEM 4456448

TOOLS 4128768

UNDOTBS1 199753728

USERS 26148864

XDB 196608

10 rows selected

SQLgt ho LSNRCTL

LSNRCTLgt start

Starting tnslsnr please waithellip

Failed to open service ltOracleoracleTNSListenergt error 1060

TNSLSNR for 32-bit Windows Version 92010 ndash Production

System parameter file is Coracleora92networkadminlistenerora

Log messages written to Coracleora92networkloglistenerlog

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220000

Uptime 0 days 0 hr 0 min 16 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File Coracleora92networkadminlistenerora

Listener Log File Coracleora92networkloglistenerlog

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summaryhellip

Service ldquoTESTrdquo has 1 instance(s)

Instance ldquoTESTrdquo status UNKNOWN has 1 handler(s) for this servicehellip

The command completed successfully

LSNRCTLgt stop

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

LSNRCTL> start
Starting tnslsnr: please wait…

TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
System parameter file is C:\oracle\ora92\network\admin\listener.ora
Log messages written to C:\oracle\ora92\network\log\listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
Start Date                22-AUG-2009 22:00:48
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora
Listener Log File         C:\oracle\ora92\network\log\listener.log
Listening Endpoints Summary…
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))
Services Summary…
Service "TEST" has 1 instance(s).
  Instance "TEST", status UNKNOWN, has 1 handler(s) for this service…
The command completed successfully

LSNRCTL> exit

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

C:\Documents and Settings\Administrator>lsnrctl stop

LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 22-AUG-2009 22:03:14

Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))
The command completed successfully

C:\Documents and Settings\Administrator>oradim -delete -sid test

Step : 3

Install the Oracle 10g software in a different Oracle Home.

Start the database with the 10g instance and begin the upgrade process.

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount

ORACLE instance started.

Total System Global Area  239075328 bytes
Fixed Size                   788308 bytes
Variable Size             212859052 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649';

File created.

SQL> shut immediate
ORA-01507: database not mounted
ORACLE instance shut down.

SQL> startup upgrade
ORACLE instance started.

Total System Global Area  239075328 bytes
Fixed Size                   788308 bytes
Variable Size             212859052 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes
ORA-01990: error opening password file (create password file)

SQL> conn / as sysdba
Connected.

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace creation script shown below.)

create tablespace SYSAUX datafile 'sysaux01.dbf'
size 70M reuse
extent management local
segment space management auto
online;

Tablespace created.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOC>
DOC>
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database server version is not correct for this script.
DOC>   Shutdown ABORT and use a different script or a different server.
DOC>
DOC>
DOC>

no rows selected

DOC>
DOC>
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database has not been opened for UPGRADE.
DOC>
DOC>   Perform a "SHUTDOWN ABORT" and
DOC>   restart using UPGRADE.
DOC>
DOC>
DOC>

no rows selected

DOC>
DOC>
DOC>   The following statements will cause an "ORA-01722: invalid number"
DOC>   error if the SYSAUX tablespace does not exist or is not
DOC>   ONLINE for READ WRITE, PERMANENT, EXTENT MANAGEMENT LOCAL, and
DOC>   SEGMENT SPACE MANAGEMENT AUTO.
DOC>
DOC>   The SYSAUX tablespace is used in 10.1 to consolidate data from
DOC>   a number of tablespaces that were separate in prior releases.
DOC>   Consult the Oracle Database Upgrade Guide for sizing estimates.
DOC>
DOC>   Create the SYSAUX tablespace, for example:
DOC>
DOC>   create tablespace SYSAUX datafile 'sysaux01.dbf'
DOC>        size 70M reuse
DOC>        extent management local
DOC>        segment space management auto
DOC>        online;
DOC>
DOC>   Then rerun the u0902000.sql script.
DOC>
DOC>
DOC>

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered.

Session altered.

The script run time depends on the size of the database…

All packages, scripts and synonyms will be upgraded.

At the end it shows a message as follows:

TIMESTAMP
--------------------------------------------------------------------------------

1 row selected.

PL/SQL procedure successfully completed.

COMP_ID    COMP_NAME                             STATUS   VERSION
---------- ------------------------------------- -------- ----------
CATALOG    Oracle Database Catalog Views         VALID    10.1.0.2.0
CATPROC    Oracle Database Packages and Types    VALID    10.1.0.2.0
JAVAVM     JServer JAVA Virtual Machine          VALID    10.1.0.2.0
XML        Oracle XDK                            VALID    10.1.0.2.0
CATJAVA    Oracle Database Java Packages         VALID    10.1.0.2.0
XDB        Oracle XML Database                   VALID    10.1.0.2.0
OWM        Oracle Workspace Manager              VALID    10.1.0.2.0
ODM        Oracle Data Mining                    VALID    10.1.0.2.0
APS        OLAP Analytic Workspace               VALID    10.1.0.2.0
AMD        OLAP Catalog                          VALID    10.1.0.2.0
XOQ        Oracle OLAP API                       VALID    10.1.0.2.0
ORDIM      Oracle interMedia                     VALID    10.1.0.2.0
SDO        Spatial                               VALID    10.1.0.2.0
CONTEXT    Oracle Text                           VALID    10.1.0.2.0
WK         Oracle Ultra Search                   VALID    10.1.0.2.0

15 rows selected.

DOC>
DOC>
DOC>   The above query lists the SERVER components in the upgraded
DOC>   database, along with their current version and status.
DOC>
DOC>   Please review the status and version columns and look for
DOC>   any errors in the spool log file. If there are errors in the spool
DOC>   file, or any components are not VALID or not the current version,
DOC>   consult the Oracle Database Upgrade Guide for troubleshooting
DOC>   recommendations.
DOC>
DOC>   Next shutdown immediate, restart for normal operation, and then
DOC>   run utlrp.sql to recompile any invalid application objects.
DOC>
DOC>
DOC>

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP DBUPG_END  2009-08-22 22:59:09

1 row selected.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  239075328 bytes
Fixed Size                   788308 bytes
Variable Size             212859052 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes
Database mounted.
Database opened.

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------
776

1 row selected.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PL/SQL procedure successfully completed.

Oracle Database 10.1 Upgrade Status Tool    22-AUG-2009 11:18:36

--> Oracle Database Catalog Views         Normal successful completion
--> Oracle Database Packages and Types    Normal successful completion
--> JServer JAVA Virtual Machine          Normal successful completion
--> Oracle XDK                            Normal successful completion
--> Oracle Database Java Packages         Normal successful completion
--> Oracle XML Database                   Normal successful completion
--> Oracle Workspace Manager              Normal successful completion
--> Oracle Data Mining                    Normal successful completion
--> OLAP Analytic Workspace               Normal successful completion
--> OLAP Catalog                          Normal successful completion
--> Oracle OLAP API                       Normal successful completion
--> Oracle interMedia                     Normal successful completion
--> Spatial                               Normal successful completion
--> Oracle Text                           Normal successful completion
--> Oracle Ultra Search                   Normal successful completion

No problems detected during upgrade.

PL/SQL procedure successfully completed.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2009-08-22 23:19:07

1 row selected.

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2009-08-22 23:20:13

1 row selected.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------
0

1 row selected.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod
PL/SQL Release 10.1.0.2.0 - Production
CORE    10.1.0.2.0      Production
TNS for 32-bit Windows: Version 10.1.0.2.0 - Production
NLSRTL Version 10.1.0.2.0 - Production

5 rows selected.

Check the database to confirm that everything is working fine.


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host - by Deepak, February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink note ID 732624.1

Hi,

Just wanted to share this topic.

How do you duplicate a database without connecting to the target database, using backups taken with RMAN, on an alternate host?

Solution - follow the steps below:

1) Export ORACLE_SID=<SID name as of production>.

Create an init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backuppiece of controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure the backup pieces are in the same location they were in on the production db. If you don't have the same location, make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged using the command:

RMAN> catalog start with '<path where backuppieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:

run {
set newname for datafile 1 to '<newLocation>\system.dbf';
set newname for datafile 2 to '<newLocation>\undotbs.dbf';
…
restore database;
switch datafile all;
}
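To actually open the duplicate after the restore, the database still has to be recovered; a minimal sketch of the closing steps, assuming the needed archived-log backup pieces were also cataloged. The SET UNTIL sequence value and the OPEN RESETLOGS step are assumptions, since the note above stops at the restore:

RMAN> run {
  set until sequence <last_seq + 1> thread 1;   # assumption: recover up to a known archived log
  recover database;
}
RMAN> alter database open resetlogs;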


Features introduced in the various Oracle server releases

Filed under: Features Of Various release of Oracle Database - by Deepak, February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) - September 2005

Transparent Data Encryption. Async commits. The CONNECT role can now only connect. Passwords for DB links are encrypted. New asmcmd utility for managing ASM storage.

Oracle 10g Release 1 (10.1.0)

Grid computing - an extension of the clustering feature (Real Application Clusters). Manageability improvements (self-tuning features).

Performance and scalability improvements. Automated Storage Management (ASM). Automatic Workload Repository (AWR). Automatic Database Diagnostic Monitor (ADDM). Flashback operations available at the row, transaction, table or database level. Ability to UNDROP a table from a recycle bin. Ability to rename tablespaces. Ability to transport tablespaces across machine types (e.g. Windows to Unix). New 'drop database' statement. New database scheduler - DBMS_SCHEDULER. DBMS_FILE_TRANSFER package. Support for bigfile tablespaces up to 8 exabytes in size. Data Pump - faster data movement with expdp and impdp.

Oracle 9i Release 2 (9.2.0)

Locally managed SYSTEM tablespaces. Oracle Streams - new data sharing/replication feature (can potentially replace Oracle Advanced Replication and Standby Databases). XML DB (Oracle is now a standards-compliant XML database). Data segment compression (compress keys in tables - only when loading data). Cluster file system for Windows and Linux (raw devices are no longer required). Create logical standby databases with Data Guard. Java JDK 1.3 used inside the database (JVM). Oracle Data Guard enhancements (SQL Apply mode - logical copy of the primary database, automatic failover). Security improvements - default install accounts locked, VPD on synonyms, AES, migrate users to directory.

Oracle 9i Release 1 (9.0.1) - June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "rollback segments" and size them automatically without any DBA involvement.

Flashback query (dbms_flashback.enable) - one can query data as it looked at some point in the past. This feature allows users to correct wrongly committed transactions without contacting the DBA to do a database restore.

Use Oracle Ultra Search for searching databases, file systems, etc. The UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.

Oracle Nameserver is still available but deprecated in favour of LDAP Naming (using the Oracle Internet Directory Server). A nameserver proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.

Oracle Parallel Server's (OPS) scalability was improved - it is now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster. Applications don't need to be cluster-aware anymore.

The Oracle Standby DB feature was renamed to Oracle Data Guard. New Logical Standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.

Scrollable cursor support: Oracle9i allows fetching backwards in a result set. Dynamic memory management - buffer pools and the shared pool can be resized on-the-fly. This eliminates the need to restart the database each time parameter changes are made. On-line table and index reorganization. VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net); VI provides fast communications between components in a cluster.

Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.

The Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.

PL/SQL programs can be natively compiled to binaries. Deep data protection - fine-grained security and auditing; putting security at the DB level means SQL access does not mean unrestricted access. Resumable backups and statements - suspend a statement instead of rolling back immediately. List partitioning - partitioning on a list of values. ETL (eXtract, transform, load) operations - with external tables and pipelining. OLAP - Express functionality included in the DB. Data Mining - Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)

Static HTTP server included (Apache). JVM Accelerator to improve performance of Java code. Java Server Pages (JSP) engine. MemStat - a new utility for analyzing Java memory footprints. OIS - Oracle Integration Server introduced. PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web. Enterprise Manager enhancements - including new HTML-based reporting and Advanced Replication functionality included. New Database Character Set Migration utility included.

Oracle 8i (8.1.6)

PL/SQL Server Pages (PSPs). DBA Studio introduced. Statspack. New SQL functions (rank, moving average). ALTER FREELISTS command (previously done by DROP/CREATE TABLE). Checksums always on for the SYSTEM tablespace, allowing many possible corruptions to be fixed before writing to disk. XML Parser for Java. New PL/SQL encrypt/decrypt package introduced. Users and schemas separated. Numerous performance enhancements.

Oracle 8i (8.1.5)

Fast Start recovery - checkpoint rate auto-adjusted to meet roll-forward criteria. Reorganize indexes/index-only tables while users access data - online index rebuilds. Log Miner introduced - allows on-line or archived redo logs to be viewed via SQL. OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication. Advanced Queueing improvements (security, performance, OO4O support). User security improvements - more centralisation, single enterprise user, users/roles across multiple databases. Virtual private database. Java stored procedures (Oracle Java VM). Oracle iFS. Resource management using priorities - resource classes. Hash and composite partitioned table types. SQL*Loader direct load API. Copy optimizer statistics across databases to ensure the same access paths across different environments. Standby database - auto shipping and application of redo logs; read-only queries on the standby database allowed. Enterprise Manager v2 delivered. NLS - Euro symbol supported. Analyze tables in parallel. Temporary tables supported. Net8 support for SSL, HTTP, HOP protocols. Transportable tablespaces between databases. Locally managed tablespaces - automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability. Drop column on table (finally!). DBMS_DEBUG PL/SQL package. DBMS_SQL replaced by the new EXECUTE IMMEDIATE statement. Progress monitor to track long-running DML and DDL. Functional indexes - NLS, case-insensitive, descending.

Oracle 8.0 - June 1997

Object-relational database. Object types (not just date, character, number as in v7; SQL3 standard). Call external procedures. More than one LOB per table.

Partitioned tables and indexes; export/import of individual partitions; partitions in multiple tablespaces; online/offline backup/recovery of individual partitions; merge/balance partitions. Advanced Queuing for message handling. Many performance improvements to SQL/PL/SQL/OCI making more efficient use of CPU and memory. V7 limits extended (e.g. 1000 columns/table, 4000-byte VARCHAR2). Parallel DML statements. Connection pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users. Improved "STAR" query optimizer. Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM in v7). Performance improvements in OPS - global V$ views introduced across all instances, transparent failover to a new node. Data cartridges introduced on the database (e.g. image, video, context, time, spatial). Backup/recovery improvements - tablespace point-in-time recovery, incremental backups, parallel backup/recovery. Recovery Manager introduced. Security Server introduced for central user administration. User password expiry; password profiles allow custom password schemes. Privileged database links (no need for a password to be stored).

Fast refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel. Replication Manager introduced.

Index-organized tables. Deferred integrity constraint checking (deferred until end of transaction instead of end of statement). SQL*Net replaced by Net8. Reverse key indexes. Any VIEW updateable. New ROWID format.

Oracle 7.3

Partitioned views. Bitmapped indexes. Asynchronous read-ahead for table scans. Standby database. Deferred transaction recovery on instance startup. Updatable join views (with restrictions). SQL*DBA no longer shipped. Index rebuilds. db_verify introduced. Context option. Spatial data option. Tablespace changes - coalesce, temporary, permanent. Trigger compilation and debug. Unlimited extents on the STORAGE clause. Some init.ora parameters modifiable, e.g. TIMED_STATISTICS. Hash joins, antijoins. Histograms. Dependencies. Oracle Trace. Advanced Replication object groups. PL/SQL - UTL_FILE.

Oracle 7.2

Resizable, autoextend data files. Shrink rollback segments manually. CREATE TABLE/INDEX UNRECOVERABLE. Subquery in FROM clause. PL/SQL wrapper. PL/SQL cursor variables. Checksums - DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM. Parallel create table. Job queues - DBMS_JOB. DBMS_SPACE. DBMS Application Info. Sorting improvements - SORT_DIRECT_WRITES.

Oracle 7.1

ANSI/ISO SQL92 Entry Level. Advanced Replication - symmetric data replication. Snapshot refresh groups. Parallel recovery. Dynamic SQL - DBMS_SQL. Parallel query options - query, index creation, data loading. Server Manager introduced. Read-only tablespaces.

Oracle 7.0 - June 1992

Database integrity constraints (primary/foreign keys, check constraints, default values). Stored procedures and functions, procedure packages. Database triggers. View compilation. User-defined SQL functions. Role-based security. Multiple redo members - mirrored online redo log files. Resource limits - profiles.

Much-enhanced auditing. Enhanced distributed database functionality - INSERTs, UPDATEs, DELETEs, 2PC. Incomplete database recovery (e.g. to an SCN). Cost based optimiser. TRUNCATE of tables. Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR). SQL*Net v2, MTS. Checkpoint process. Data replication - snapshots.

Oracle 6.2

Oracle Parallel Server.

Oracle 6 - July 1988

Row-level locking. On-line database backups. PL/SQL in the database.

Oracle 5.1

Distributed queries.

Oracle 5.0 - 1986

Support for the client-server model - PCs can access the DB on a remote host.

Oracle 4 - 1984

Read consistency.

Oracle 3 - 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions). Non-blocking queries (no more read locks). Re-written in the C programming language.

Oracle 2 - 1979

First public release. Basic SQL functionality: queries and joins.

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh - by Deepak, December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file (a minimal spooling sketch follows the numbered list).

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total number of objects from the above query.

3. Write dynamic queries as below:

4. select 'grant ' || privilege || ' to sh;' from session_privs;

5. select 'grant ' || role || ' to sh;' from session_roles;

6. Query the default tablespace and its size:

7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH' group by tablespace_name;
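A minimal sketch of spooling the grant statements from steps 4 and 5 into a script. This assumes the spool is run as a DBA user, so dba_sys_privs and dba_role_privs are used instead of the session_* views (which would require connecting as SH):

SQL> set head off feedback off
SQL> spool sh_grants.sql
SQL> select 'grant '||privilege||' to SH;'    from dba_sys_privs  where grantee='SH';
SQL> select 'grant '||granted_role||' to SH;' from dba_role_privs where grantee='SH';
SQL> spool off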

Export the 'SH' schema:

exp username/password file=location\sh_bkp.dmp log=location\sh_exp.log owner=SH direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema

imp username/password file=location\sh_bkp.dmp log=location\sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema

Take a full export of the schema as shown above.

Drop all the objects in the 'SH' schema.

To drop all the objects in the schema, connect as the schema owner and spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be dropped.

Importing the 'SH' schema

imp username/password file=location\sh_bkp.dmp log=location\sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable the constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema.

To truncate all the objects in the schema, connect as the schema owner and spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the referential (foreign key) constraints and disable them. Spool the output of the query and run the resulting script (a minimal disable sketch follows the query).

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');
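A minimal sketch of generating the DISABLE statements for those foreign keys, assuming it is run as the schema owner; after the truncate and import, the same statements can be regenerated with ENABLE in place of DISABLE:

SQL> set head off feedback off
SQL> spool disable_fks.sql
SQL> select 'alter table '||table_name||' disable constraint '||constraint_name||';'
     from user_constraints where constraint_type='R';
SQL> spool off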

Importing the 'SH' schema

imp username/password file=location\sh_bkp.dmp log=location\sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user:

Query the default tablespace, verify the space in the tablespace, and drop the user.

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the REMAP_SCHEMA option (see the sketch below).
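For example, a minimal sketch of restoring the dump into a different schema; SH_TEST is only an illustrative target schema name:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=SH:SH_TEST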

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING - by Deepak, December 15, 2009

CRON JOB SCHEDULING - IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d  /etc/cron.daily  /etc/cron.hourly  /etc/cron.monthly  /etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time at which the scripts in those system-wide directories run is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) at which to execute them.

To display your file you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l       (list skx's crontab file)

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sunday, or use the name)
6. The command to run

More graphically they would look like this:

*     *     *     *     *     Command to be executed
|     |     |     |     |
|     |     |     |     +---- Day of week (0-7)
|     |     |     +---------- Month (1-12)
|     |     +---------------- Day of month (1-31)
|     +---------------------- Hour (0-23)
+---------------------------- Minute (0-59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers, you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly
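As a database-related illustration, a nightly export could be scheduled the same way from the oracle OS account; the script path, log file and time below are only placeholders:

# Run a nightly schema export at 01:30 every day (paths are hypothetical)
30 1 * * * /home/oracle/scripts/nightly_exp.sh >/tmp/nightly_exp.log 2>&1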

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp\coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on "Add a scheduled task".
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule, the job won't run, so edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g - by Deepak, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your test systems before working on production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary database and the standby database to find out:

SQL> select sequence#, applied from v$archived_log;

Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:

SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:

SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:

- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN directly:

SQL> ALTER DATABASE OPEN;

STAN has now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:

SQL> ALTER SYSTEM SWITCH LOGFILE;
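Before and after each step, it is worth checking which role each database currently believes it has; a minimal verification sketch, run on both PRIM and STAN:

SQL> select database_role, switchover_status from v$database;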


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump - by Deepak, December 14, 2009

Encryption with Oracle Data Pump

- from an Oracle white paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns, for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear-text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear-text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear-text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows:

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear-text format using the column encryption key.

3. Data Pump re-encrypts the clear-text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows:

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear-text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear-text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU-intensive operations. Furthermore, additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute-force attacks.

Keep in mind that in Oracle Data Pump 10g release 2, the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table-mode exports, as used in the previous examples. If a schema-, tablespace- or full-mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
/ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job of pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data; otherwise, an "ORA-28365: wallet is not open" error is returned (a minimal wallet-open sketch follows). Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
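On the target, the wallet can be opened before running the import; a minimal sketch, assuming the wallet password is whatever was chosen when that wallet was created:

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";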

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows.

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network-mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network-mode import, and therefore there is no re-encrypting of TDE column data. Thus the use of the ENCRYPTION_PASSWORD parameter is prohibited in network-mode imports, as shown in the following example.

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter you can perform the network-mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that, when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

A full metadata-only import.

A schema-mode import in which the referenced schemas do not include tables with encrypted columns.

A table-mode import in which the referenced tables do not include encrypted columns.

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear-text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g - by Deepak, December 14, 2009

DATAPUMP IN ORACLE

For using Data Pump through DB Console, see: http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode, which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called DATA_PUMP_DIR is provided. The default DATA_PUMP_DIR is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission on a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
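
To confirm which directory objects exist and where they point, the DBA_DIRECTORIES data dictionary view can be queried; this quick sanity check is added here for illustration and is not part of the original example:

SQL> SELECT directory_name, directory_path FROM dba_directories;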

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but they are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES, and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX

DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file, as sketched below.
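
As a hypothetical illustration of that flexibility (the schema, dump file, and directory names are made up for this sketch), one export and one import can filter the very same dump file differently:

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr EXCLUDE=STATISTICS

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp INCLUDE=TABLE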

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. The initialization parameters set at installation are usually sufficient.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1

DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr

DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6

DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
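
A rough sketch of such a session, assuming the job was started with JOB_NAME=hr as in the PARALLEL example above. The interactive commands STATUS, PARALLEL, STOP_JOB, START_JOB, and CONTINUE_CLIENT are standard Data Pump commands, but the exact sequence here is only an illustration:

> expdp username/password ATTACH=hr

Export> STATUS

Export> PARALLEL=8

Export> STOP_JOB=IMMEDIATE

Later, re-attach and resume the stopped job:

> expdp username/password ATTACH=hr

Export> START_JOB

Export> CONTINUE_CLIENT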

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
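
A minimal sketch of a network-mode export, assuming a database link named remote_db already exists and dpump_dir1 is a valid directory object on the instance running the job (names are illustrative):

> expdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=remote_db

TABLES=hr.employees DUMPFILE=net_exp.dmp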

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp

SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2, and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak. December 10, 2009

Clone database using RMAN

Target db: test

Clone db: clone

In the target database:

1. Take a full backup using RMAN.

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMAN> connect target

connected to target database TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F' default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA' default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete

SQL> select name from v$database;

NAME

mdashmdashmdash

TEST

SQL> select dbid from v$database;

DBID

mdashmdashmdash-

1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for a database.

2. Edit the pfile and add the following parameters (see the sketch below):

db_file_name_convert=('<target db oradata path>','<clone db oradata path>')

log_file_name_convert=('<target db oradata path>','<clone db oradata path>')
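
For example, with the target files under C:\oracle\oradata\test and the clone files under C:\oracle\oradata\clone, the relevant initclone.ora entries might look like the following (paths and file names are illustrative, not taken from the original post):

db_name=clone

control_files='C:\oracle\oradata\clone\control01.ctl'

db_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')

log_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')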

3. Start the listener using the lsnrctl command, and then start the clone DB in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running…

SQL> select name from v$database;

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQL> ho rman

SQL> alter database mount;

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8. It will run for a while; then exit from RMAN and open the database using RESETLOGS.

SQL> alter database open resetlogs;

Database altered

9. Check the DBID.

10. Create a temporary tablespace (see the sketch after the verification queries below).

SQL> select name from v$database;

NAME

mdashmdashmdash

CLONE

SQL> select dbid from v$database;

DBID

mdashmdashmdash-

1972233550
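
For step 10, a minimal sketch of creating and setting a default temporary tablespace (the tablespace name, file path, and size are illustrative):

SQL> create temporary tablespace temp1 tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100M autoextend on;

SQL> alter database default temporary tablespace temp1;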


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak. December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use a Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database.
- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '<ORACLE_HOME>'.)

- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '<ORACLE_HOME>'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE; create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace '<ORACLE_HOME>'.)

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace '<ORACLE_HOME>'.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for dump and archived log destinations: create the adump, bdump, cdump, udump, and archived log destination directories for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY control file destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory; then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database: set ORACLE_HOME and ORACLE_SID.

11. Start up (nomount) the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: specify your Oracle home path to replace '<ORACLE_HOME>'.)

12. Start redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database, as sketched below.
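
A minimal sketch of that recreation, following the same orapwd convention as in section II.2 (replace xxxxxxxx with the new SYS password; on UNIX the file lives in $ORACLE_HOME/dbs):

$ cd %ORACLE_HOME%\database

$ orapwd file=pwdSTANDBY.ora password=xxxxxxxx force=y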

Manual Database up gradation from 9.2.0 to 10.1.0 (continued)


Deprecated Parameters: [Update Oracle Database 10.1 init.ora or spfile]
-----------------------------------------------------
-- No deprecated parameters found. No changes are required.

Obsolete Parameters: [Update Oracle Database 10.1 init.ora or spfile]
-----------------------------------------------------
--> "hash_join_enabled"
--> "log_archive_start"

Components: [The following database components will be upgraded or installed]
-----------------------------------------------------
--> Oracle Catalog Views [upgrade] VALID
--> Oracle Packages and Types [upgrade] VALID
--> JServer JAVA Virtual Machine [upgrade] VALID
... The 'JServer JAVA Virtual Machine' JAccelerator (NCOMP)
... is required to be installed from the 10g Companion CD.
--> Oracle XDK for Java [upgrade] VALID
--> Oracle Java Packages [upgrade] VALID
--> Oracle XML Database [upgrade] VALID
--> Oracle Workspace Manager [upgrade] VALID
--> Oracle Data Mining [upgrade]
--> OLAP Analytic Workspace [upgrade]
--> OLAP Catalog [upgrade]
--> Oracle OLAP API [upgrade]
--> Oracle interMedia [upgrade]
... The 'Oracle interMedia Image Accelerator' is
... required to be installed from the 10g Companion CD.
--> Spatial [upgrade]
--> Oracle Text [upgrade] VALID
--> Oracle Ultra Search [upgrade] VALID

SYSAUX Tablespace: [Create tablespace in Oracle Database 10.1 environment]
-----------------------------------------------------
--> New "SYSAUX" tablespace
... minimum required size for database upgrade: 500 MB
Please create the new SYSAUX tablespace AFTER the Oracle Database 10.1 server is started and BEFORE you invoke the upgrade script.

Oracle Database 10g: Changes in Default Behavior
------------------------------------------------

This page describes some of the changes in the behavior of Oracle Database 10g from that of previous releases. In some cases the default values of some parameters have changed. In other cases new behaviors/requirements have been introduced that may affect current scripts or applications. More detailed information is in the documentation.

SQL OPTIMIZER

The Cost Based Optimizer (CBO) is now enabled by default. Rule-based optimization is not supported in 10g (setting OPTIMIZER_MODE to RULE or CHOOSE is not supported). See Chapter 12, "Introduction to the Optimizer", in the Oracle Database Performance Tuning Guide.

Collection of optimizer statistics is now performed by default, automatically for all schemas (including SYS), for pre-existing databases upgraded to 10g and for newly created 10g databases. Gathering optimizer statistics on stale objects is scheduled by default to occur daily during the maintenance window. See Chapter 15, "Managing Optimizer Statistics", in the Oracle Performance Tuning Guide.

See the Oracle Database Upgrade Guide for changes in behavior for the COMPUTE STATISTICS clause of CREATE INDEX and for behavior changes in SKIP_UNUSABLE_INDEXES.

UPGRADE/DOWNGRADE

After upgrading to 10g, the minimum supported release to downgrade to is Oracle 9i R2 release 9.2.0.3 (or later), and the minimum value for COMPATIBLE is 9.2.0. The only supported downgrade path is for those users who have kept COMPATIBLE=9.2.0 and have an installed 9i R2 (release 9.2.0.3 or later) executable. Users upgrading to 10g from prior releases (such as Oracle 8, Oracle 8i or 9i R1) cannot downgrade to 9i R2 unless they first install 9i R2. When upgrading to 10g, by default the database will remain at 9i R2 file format compatibility, so the on-disk structures that 10g writes are compatible with 9i R2 structures; this makes it possible to downgrade to 9i R2. Once file format compatibility has been explicitly advanced to 10g (using COMPATIBLE=10.x.x), it is no longer possible to downgrade. See the Oracle Database Upgrade Guide.

A SYSAUX tablespace is created upon upgrade to 10g. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle features and products that previously required their own tablespaces, it reduces the number of tablespaces required by Oracle that you, as a DBA, must maintain.

MANAGEABILITY

Database performance statistics are now collected by the Automatic Workload Repository (AWR) database component, automatically upon upgrade to 10g and also for newly created 10g databases. This data is stored in the SYSAUX tablespace and is used by the database for automatic generation of performance recommendations. See Chapter 5, "Automatic Performance Statistics", in the Oracle Database Performance Tuning Guide.

If you currently use Statspack for performance data gathering, see section 1 of the Statspack readme (spdoc.txt in the RDBMS ADMIN directory) for directions on using Statspack in 10g to avoid conflict with the AWR.

MEMORY

Automatic PGA Memory Management is now enabled by default (unless PGA_AGGREGATE_TARGET is explicitly set to 0 or WORKAREA_SIZE_POLICY is explicitly set to MANUAL). PGA_AGGREGATE_TARGET is defaulted to 20% of the SGA size, unless explicitly set. Oracle recommends tuning the value of PGA_AGGREGATE_TARGET after upgrading. See Chapter 14 of the Oracle Database Performance Tuning Guide.

Previously, the number of SQL cursors cached by PL/SQL was determined by OPEN_CURSORS. In 10g, the number of cursors cached is determined by SESSION_CACHED_CURSORS. See the Oracle Database Reference manual.

SHARED_POOL_SIZE must increase to include the space needed for shared pool overhead.

The default value of DB_BLOCK_SIZE is operating system specific, but is typically 8KB (it was typically 2KB in previous releases).

TRANSACTION/SPACE

Dropped objects are now moved to the recycle bin, where the space is only reused when it is needed. This allows 'undropping' a table using the FLASHBACK DROP feature. See Chapter 14 of the Oracle Database Administrator's Guide.

Auto-tuning undo retention is on by default. For more information, see Chapter 10, "Managing the Undo Tablespace", in the Oracle Database Administrator's Guide.

CREATE DATABASE

In addition to the SYSTEM tablespace, a SYSAUX tablespace is always created at database creation and upon upgrade to 10g. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle features and products that previously required their own tablespaces, it reduces the number of tablespaces required by Oracle that you, as a DBA, must maintain. See Chapter 2, "Creating a Database", in the Oracle Database Administrator's Guide.

In 10g, by default all new databases are created with 10g file format compatibility. This means you can immediately use all the 10g features. Once a database uses 10g compatible file formats, it is not possible to downgrade this database to prior releases.

Minimum and default logfile sizes are larger. The minimum is now 4 MB; the default is 50 MB, unless you are using Oracle Managed Files (OMF), when it is 100 MB.

PL/SQL procedure successfully completed.

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\oradata\test\archive

Oldest online log sequence 91

Next log sequence to archive 93

Current log sequence 93

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt exit

Back up the complete database (cold backup).

Step 2

Check the space needed, stop the listener, and delete the SID.

C:\Documents and Settings\Administrator>set oracle_sid=test

C:\Documents and Settings\Administrator>sqlplus /nolog

SQL*Plus: Release 9.2.0.1.0 - Production on Sat Aug 22 21:36:52 2009

Copyright (c) 1982 2002 Oracle Corporation All rights reserved

SQL> conn / as sysdba

Connected to an idle instance

SQLgt startup

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

Database mounted

Database opened

SQL> desc sm$ts_avail

Name                            Null?    Type
------------------------------- -------- ------------
TABLESPACE_NAME                          VARCHAR2(30)
BYTES                                    NUMBER

SQL> select * from sm$ts_avail;

TABLESPACE_NAME                     BYTES
------------------------------ ----------

CWMLITE 20971520

DRSYS 20971520

EXAMPLE 155975680

INDX 26214400

ODM 20971520

SYSTEM 419430400

TOOLS 10485760

UNDOTBS1 209715200

USERS 26214400

XDB 39976960

10 rows selected

SQL> select * from sm$ts_used;

TABLESPACE_NAME                     BYTES
------------------------------ ----------

CWMLITE 9764864

DRSYS 10092544

EXAMPLE 155779072

ODM 9699328

SYSTEM 414908416

TOOLS 6291456

UNDOTBS1 9814016

XDB 39714816

8 rows selected

SQL> select * from sm$ts_free;

TABLESPACE_NAME                     BYTES
------------------------------ ----------

CWMLITE 11141120

DRSYS 10813440

EXAMPLE 131072

INDX 26148864

ODM 11206656

SYSTEM 4456448

TOOLS 4128768

UNDOTBS1 199753728

USERS 26148864

XDB 196608

10 rows selected

SQL> ho LSNRCTL

LSNRCTL> start

Starting tnslsnr please waithellip

Failed to open service <OracleoracleTNSListener>, error 1060.

TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production

System parameter file is C:\oracle\ora92\network\admin\listener.ora

Log messages written to C:\oracle\ora92\network\log\listener.log

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
Start Date 22-AUG-2009 22:00:00
Uptime 0 days 0 hr 0 min 16 sec
Trace Level off
Security OFF
SNMP OFF
Listener Parameter File C:\oracle\ora92\network\admin\listener.ora
Listener Log File C:\oracle\ora92\network\log\listener.log

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summaryhellip

Service "TEST" has 1 instance(s).

Instance "TEST", status UNKNOWN, has 1 handler(s) for this service…

The command completed successfully

LSNRCTL> stop

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

LSNRCTL> start

Starting tnslsnr please waithellip

TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production

System parameter file is C:\oracle\ora92\network\admin\listener.ora

Log messages written to C:\oracle\ora92\network\log\listener.log

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
Start Date 22-AUG-2009 22:00:48
Uptime 0 days 0 hr 0 min 0 sec
Trace Level off
Security OFF
SNMP OFF
Listener Parameter File C:\oracle\ora92\network\admin\listener.ora
Listener Log File C:\oracle\ora92\network\log\listener.log

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summaryhellip

Service "TEST" has 1 instance(s).

Instance "TEST", status UNKNOWN, has 1 handler(s) for this service…

The command completed successfully

LSNRCTL> exit

SQL> shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQL> exit

Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production

With the Partitioning OLAP and Oracle Data Mining options

JServer Release 9.2.0.1.0 - Production

C:\Documents and Settings\Administrator>lsnrctl stop

LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 22-AUG-2009 22:03:14

copyright (c) 1991 2002 Oracle Corporation All rights reserved

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

C:\Documents and Settings\Administrator>oradim -delete -sid test

Step 3

Install the ORACLE 10g software in a different home.

Start the DB with the 10g instance and begin the upgrade process (a note on the Windows service follows below).
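
On Windows, the 10g instance needs a service before the new binaries can start it. A minimal sketch, assuming the SID stays test and the service is created from the new 10g home; this step is implied rather than shown in the original log, and the start mode is illustrative:

C:\> oradim -new -sid test -startmode manual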

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649';

File created

SQL> shut immediate

ORA-01507 database not mounted

ORACLE instance shut down

SQL> startup upgrade

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

ORA-01990: error opening password file (create password file)
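
The error simply means the new 10g home has no password file yet. A minimal sketch of creating one with orapwd (the file name follows the usual Windows convention PWD<sid>.ora; the password shown is illustrative):

C:\> orapwd file=E:\oracle\product\10.1.0\db_1\database\PWDtest.ora password=oracle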

SQL> conn / as sysdba

Connected

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace creation script, as shown below.)

create tablespace SYSAUX datafile 'sysaux01.dbf'

size 70M reuse

extent management local

segment space management auto

online

Tablespace created

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database server version is not correct for this script

DOCgt Shutdown ABORT and use a different script or a different server

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database has not been opened for UPGRADE

DOCgt

DOCgt Perform a ldquoSHUTDOWN ABORTrdquo and

DOCgt restart using UPGRADE

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statements will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the SYSAUX tablespace does not exist or is not

DOCgt ONLINE for READ WRITE PERMANENT EXTENT MANAGEMENT LOCAL and

DOCgt SEGMENT SPACE MANAGEMENT AUTO

DOCgt

DOCgt The SYSAUX tablespace is used in 101 to consolidate data from

DOCgt a number of tablespaces that were separate in prior releases

DOCgt Consult the Oracle Database Upgrade Guide for sizing estimates

DOCgt

DOCgt Create the SYSAUX tablespace for example

DOCgt

DOCgt create tablespace SYSAUX datafile lsquosysaux01dbfrsquo

DOCgt size 70M reuse

DOCgt extent management local

DOCgt segment space management auto

DOCgt online

DOCgt

DOCgt Then rerun the u0902000sql script

DOCgt

DOCgt

DOCgt

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered

Session altered

The script will run for a time that depends on the size of the database…

All packages/scripts/synonyms will be upgraded.

At the end it will show messages as follows:

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

1 row selected

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP DBUPG_END 2009-08-22 22:59:09

1 row selected

SQL> shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQL> startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT()

mdashmdashmdash-

776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PLSQL procedure successfully completed

Oracle Database 10.1 Upgrade Status Tool 22-AUG-2009 11:18:36

--> Oracle Database Catalog Views Normal successful completion
--> Oracle Database Packages and Types Normal successful completion
--> JServer JAVA Virtual Machine Normal successful completion
--> Oracle XDK Normal successful completion
--> Oracle Database Java Packages Normal successful completion
--> Oracle XML Database Normal successful completion
--> Oracle Workspace Manager Normal successful completion
--> Oracle Data Mining Normal successful completion
--> OLAP Analytic Workspace Normal successful completion
--> OLAP Catalog Normal successful completion
--> Oracle OLAP API Normal successful completion
--> Oracle interMedia Normal successful completion
--> Spatial Normal successful completion
--> Oracle Text Normal successful completion
--> Oracle Ultra Search Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_BGN 2009-08-22 23:19:07

1 row selected

PLSQL procedure successfully completed

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_END 2009-08-22 23:20:13

1 row selected

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects where status='INVALID';

COUNT()

mdashmdashmdash-

0

1 row selected

SQL> select * from v$version;

BANNER

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdash-

Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod
PL/SQL Release 10.1.0.2.0 - Production
CORE 10.1.0.2.0 Production
TNS for 32-bit Windows: Version 10.1.0.2.0 - Production
NLSRTL Version 10.1.0.2.0 - Production

5 rows selected

Check the database to confirm that everything is working fine.


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host by Deepak. February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink ID 732624.1

Hi,

Just wanted to share this topic.

How to duplicate a database without connecting to the target database, using backups taken with RMAN, on an alternate host.

Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as on production>.

Create an init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) Startup nomount pfile=<path of init.ora>.

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backup piece of controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure that the backup pieces are in the same location where they were on the production DB. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command.

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged using the command:

RMAN> catalog start with '<path where backup pieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:

run {
set newname for datafile 1 to '<newLocation>/system.dbf';
set newname for datafile 2 to '<newLocation>/undotbs.dbf';
...
restore database;
switch datafile all;
}
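
The note stops at the restore. In practice the restore is usually followed by recovery and an open with RESETLOGS; a rough sketch only, not part of the original steps (a SET UNTIL clause may be needed to stop recovery at the last available archived log):

RMAN> recover database;

RMAN> alter database open resetlogs;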


Features introduced in the various Oracle server releases

Filed under: Features Of Various Releases of Oracle Database by Deepak. February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) - September 2005

• Transparent Data Encryption
• Async commits
• The CONNECT role can now only connect
• Passwords for DB links are encrypted
• New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (10.1.0)

• Grid computing - an extension of the clustering feature (Real Application Clusters)
• Manageability improvements (self-tuning features)
• Performance and scalability improvements
• Automated Storage Management (ASM)
• Automatic Workload Repository (AWR)
• Automatic Database Diagnostic Monitor (ADDM)
• Flashback operations available on row, transaction, table or database level
• Ability to UNDROP a table from a recycle bin
• Ability to rename tablespaces
• Ability to transport tablespaces across machine types (e.g. Windows to Unix)
• New 'drop database' statement
• New database scheduler - DBMS_SCHEDULER
• DBMS_FILE_TRANSFER package
• Support for bigfile tablespaces, that is, up to 8 Exabytes in size
• Data Pump - faster data movement with expdp and impdp

Oracle 9i Release 2 (9.2.0)

• Locally managed SYSTEM tablespaces
• Oracle Streams - new data sharing/replication feature (can potentially replace Oracle Advanced Replication and Standby Databases)
• XML DB (Oracle is now a standards-compliant XML database)
• Data segment compression (compress keys in tables - only when loading data)
• Cluster file system for Windows and Linux (raw devices are no longer required)
• Create logical standby databases with Data Guard
• Java JDK 1.3 used inside the database (JVM)
• Oracle Data Guard enhancements (SQL Apply mode - logical copy of primary database, automatic failover)
• Security improvements - default install accounts locked, VPD on synonyms, AES, migrate users to directory

Oracle 9i Release 1 (9.0.1) - June 2001

• Traditional rollback segments (RBS) are still available, but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "rollback segments" and size them automatically without any DBA involvement.
• Flashback query (dbms_flashback.enable) - one can query data as it looked at some point in the past. This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore.
• Use Oracle Ultra Search for searching databases, file systems, etc. The UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.
• Oracle Nameserver is still available, but deprecated in favour of LDAP Naming (using the Oracle Internet Directory Server). A nameserver proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.
• Oracle Parallel Server's (OPS) scalability was improved - now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster. Applications don't need to be cluster aware anymore.
• The Oracle Standby DB feature renamed to Oracle Data Guard. New Logical Standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.
• Scrolling cursor support. Oracle9i allows fetching backwards in a result set.
• Dynamic Memory Management - buffer pools and the shared pool can be resized on-the-fly. This eliminates the need to restart the database each time parameter changes were made.
• On-line table and index reorganization.
• VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net). VI provides fast communications between components in a cluster.
• Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.
• Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.
• PL/SQL programs can be natively compiled to binaries.
• Deep data protection - fine-grained security and auditing. Put security on the DB level; SQL access does not mean unrestricted access.
• Resumable backups and statements - suspend the statement instead of rolling back immediately.
• List partitioning - partitioning on a list of values.
• ETL (eXtract, transformation, load) operations - with external tables and pipelining.
• OLAP - Express functionality included in the DB.
• Data Mining - Oracle Darwin's features included in the DB.

Oracle 8i (817)

Static HTTP server included (Apache)
JVM Accelerator to improve performance of Java code
Java Server Pages (JSP) engine
MemStat - a new utility for analyzing Java memory footprints
OIS - Oracle Integration Server introduced
PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web
Enterprise Manager enhancements - including new HTML-based reporting and Advanced Replication functionality
New Database Character Set Migration utility included

Oracle 8i (816)

PL/SQL Server Pages (PSPs)
DBA Studio introduced
Statspack
New SQL functions (rank, moving average)
ALTER FREELISTS command (previously done by DROP/CREATE TABLE)
Checksums always on for the SYSTEM tablespace, allowing many possible corruptions to be fixed before writing to disk
XML Parser for Java
New PL/SQL encrypt/decrypt package introduced
Users and schemas separated
Numerous performance enhancements

Oracle 8i (815)

Fast Start recovery - checkpoint rate auto-adjusted to meet roll-forward criteria
Online index rebuilds - reorganize indexes/index-only tables while users are accessing the data
Log Miner introduced - allows online or archived redo logs to be viewed via SQL
OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication
Advanced Queueing improvements (security, performance, OO4O support)
User security improvements - more centralisation, single enterprise user, users/roles across multiple databases
Virtual private database
Java stored procedures (Oracle Java VM)
Oracle iFS
Resource management using priorities - resource classes
Hash and composite partitioned table types
SQL*Loader direct load API
Copy optimizer statistics across databases to ensure the same access paths across different environments
Standby database - auto shipping and application of redo logs; read-only queries on the standby database allowed
Enterprise Manager v2 delivered
NLS - Euro symbol supported
Analyze tables in parallel
Temporary tables supported
Net8 support for SSL, HTTP, HOP protocols
Transportable tablespaces between databases
Locally managed tablespaces - automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability
Drop column on table (finally!)
DBMS_DEBUG PL/SQL package
DBMS_SQL replaced by the new EXECUTE IMMEDIATE statement
Progress monitor to track long-running DML and DDL
Functional indexes - NLS, case-insensitive, descending

Oracle 80 ndash June 1997

Object relational database
Object types (not just date, character, number as in v7), SQL3 standard
Call external procedures
LOBs - more than one per table
Partitioned tables and indexes; export/import individual partitions; partitions in multiple tablespaces; online/offline backup/recover individual partitions; merge/balance partitions
Advanced Queuing for message handling
Many performance improvements to SQL/PLSQL/OCI, making more efficient use of CPU/memory
V7 limits extended (e.g. 1000 columns/table, 4000-byte VARCHAR2)
Parallel DML statements
Connection pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users
Improved "STAR" query optimizer
Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM in v7)
Performance improvements in OPS - global V$ views introduced across all instances, transparent failover to a new node
Data cartridges introduced on the database (e.g. image, video, context, time, spatial)
Backup/recovery improvements - tablespace point-in-time recovery, incremental backups, parallel backup/recovery
Recovery Manager introduced
Security Server introduced for central user administration
User password expiry, password profiles allow custom password schemes
Privileged database links (no need for the password to be stored)
Fast refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel, Replication Manager introduced
Index-organized tables
Deferred integrity constraint checking (deferred until end of transaction instead of end of statement)
SQL*Net replaced by Net8
Reverse key indexes
Any VIEW updateable
New ROWID format

Oracle 73

Partitioned views
Bitmapped indexes
Asynchronous read-ahead for table scans
Standby database
Deferred transaction recovery on instance startup
Updatable join views (with restrictions)
SQLDBA no longer shipped
Index rebuilds
db_verify introduced
Context Option
Spatial Data Option
Tablespace changes - coalesce, temporary, permanent
Trigger compilation and debug
Unlimited extents on the STORAGE clause
Some init.ora parameters modifiable online - e.g. TIMED_STATISTICS
Hash joins, antijoins
Histograms
Dependencies
Oracle Trace
Advanced Replication object groups
PL/SQL - UTL_FILE

Oracle 72

Resizable, autoextend data files
Shrink rollback segments manually
Create table/index UNRECOVERABLE
Subquery in FROM clause
PL/SQL wrapper
PL/SQL cursor variables
Checksums - DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM
Parallel create table
Job queues - DBMS_JOB
DBMS_SPACE
DBMS Application Info
Sorting improvements - SORT_DIRECT_WRITES

Oracle 71

ANSI/ISO SQL92 Entry Level
Advanced Replication - symmetric data replication
Snapshot refresh groups
Parallel recovery
Dynamic SQL - DBMS_SQL
Parallel query options - query, index creation, data loading
Server Manager introduced
Read-only tablespaces

Oracle 70 ndash June 1992

Database integrity constraints (primary and foreign keys, check constraints, default values)
Stored procedures and functions, procedure packages
Database triggers
View compilation
User-defined SQL functions
Role-based security
Multiple redo members - mirrored online redo log files
Resource limits - profiles
Much enhanced auditing
Enhanced distributed database functionality - INSERTs, UPDATEs, DELETEs, 2PC
Incomplete database recovery (e.g. to an SCN)
Cost based optimiser
TRUNCATE tables
Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR)
SQL*Net v2, MTS
Checkpoint process
Data replication - snapshots

Oracle 62

Oracle Parallel Server

Oracle 6 ndash July 1988

Row-level locking
Online database backups
PL/SQL in the database

Oracle 51

Distributed queries

Oracle 50 ndash 1986

Support for the client-server model - PCs can access the DB on a remote host

Oracle 4 ndash 1984

Read consistency

Oracle 3 ndash 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)

Non-blocking queries (no more read locks)
Re-written in the C programming language

Oracle 2 ndash 1979

First public release. Basic SQL functionality: queries and joins.

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh, by Deepak. December 15, 2009

Steps for schema refresh

Schema refresh in oracle 9i

Now we are going to refresh SH schema

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a SQL file.

1. SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;

2. Verify the total number of objects from the above query.
3. Write dynamic queries as below (a sample spool sketch follows this list):
4. select 'grant ' || privilege || ' to sh;' from session_privs;
5. select 'grant ' || role || ' to sh;' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH'
   group by tablespace_name;
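A minimal spool sketch along these lines can capture the grants before the schema is dropped (the file name is illustrative; session_privs and session_roles show the privileges of the current session, so run it while connected as SH):

SQL> set head off feedback off
SQL> spool sh_privs.sql
SQL> select 'grant ' || privilege || ' to sh;' from session_privs;
SQL> select 'grant ' || role || ' to sh;' from session_roles;
SQL> spool off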

Export the 'SH' schema:

exp username/password file=<location>/sh_bkp.dmp log=<location>/sh_exp.log owner=SH direct=y

Steps to drop and recreate the schema

Drop the SH schema

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file=<location>/sh_bkp.dmp log=<location>/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE', estimate_percent => 20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema:

Take the full schema export as shown above.

Drop all the objects in the 'SH' schema

To drop all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts; all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file=<location>/sh_bkp.dmp log=<location>/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE', estimate_percent => 20);

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema

To truncate all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts; all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the referential (foreign key) constraints and disable them. Spool the output of the query below and run the script (a sketch that generates the DISABLE statements directly follows).

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');
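As an illustration (not from the original post), the same dictionary query can be turned into a spooled script that generates the DISABLE statements directly; replace TABLE_NAME with the table being truncated:

SQL> set head off feedback off
SQL> spool disable_fk.sql
SQL> select 'alter table '||table_name||' disable constraint '||constraint_name||';'
     from all_constraints
     where constraint_type='R'
     and r_constraint_name in (select constraint_name from all_constraints
                               where table_name='TABLE_NAME');
SQL> spool off
SQL> @disable_fk.sql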

Importing the 'SH' schema:

imp username/password file=<location>/sh_bkp.dmp log=<location>/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE', estimate_percent => 20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the REMAP_SCHEMA option, for example as sketched below.
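A hedged example (the target schema SH_TEST is only illustrative):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=sh:sh_test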

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING, by Deepak. December 15, 2009

CRON JOB SCHEDULING - IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time that the scripts in those system-wide directories run is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file, you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l     # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sunday, or use the name)
6. The command to run

More graphically they would look like this

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0 - 7)
|     |     |     +----------- Month (1 - 12)
|     |     +----------------- Day of month (1 - 31)
|     +----------------------- Hour (0 - 23)
+----------------------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.) An example entry follows.
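For instance, a line like the following (the script path is only illustrative) runs a report at 2:30 AM every Monday:

# min hour day-of-month month day-of-week  command
30 2 * * 1 /usr/local/bin/weekly_report.sh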

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause the output to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null - meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour, on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched; if you use the following line you'll run a command at 1 AM, 2 AM, 3 AM and 4 AM:

# Use a range of hours, matching 1, 2, 3 and 4 AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4 AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp_registry\database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run, so edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g, by Deepak. December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your test systems before working on production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary database and the standby database to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply (a sketch is shown below).
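On the standby, real-time apply can be enabled roughly as follows; this is a generic sketch that assumes standby redo logs are already configured, and is not part of the original step list:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;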

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
- If you are using Oracle Database 10g release 2, you can simply open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN has now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Data Pump, by Deepak. December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature, which enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns, for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature thus remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set.

To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet.

Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01":  dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp

TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir dumpfile=dp.dmp

TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote

TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01":  dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will then be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that, when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

A full metadata-only import
A schema-mode import in which the referenced schemas do not include tables with encrypted columns
A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data.

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g, by Deepak. December 14, 2009

DATAPUMP IN ORACLE

For using Data Pump through DB Console, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to

Data Pump

Data Pump commands have a similar look and feel to the original Export and Import

commands but are different Below are a few examples that demonstrate some of these

differences

1) Example: import of tables from scott's account to jim's account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS

INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file, for instance:
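The following pair of commands is a sketch only; the schema and file names are assumptions chosen for illustration. The export leaves statistics out of the dump file, and the later import pulls in only the tables and indexes from that same file:

> expdp username/password SCHEMAS=scott DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp EXCLUDE=STATISTICS
> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp INCLUDE=TABLE INCLUDE=INDEX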

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can now take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode, for example as sketched below. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
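A sketch of attaching to a running job and changing its degree of parallelism; the job name hr matches the JOB_NAME used in the later example, the rest is illustrative:

> expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> CONTINUE_CLIENT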

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

See the status of the job. All of the information needed to monitor the job's execution is available.

Add more dump files if there is insufficient disk space for an export file.

Change the default size of the dump files.

Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

Increase or decrease the number of active worker processes for the job (Enterprise Edition only)

Attach to a job from a remote site (such as from home) to monitor status

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database. A sketch of a network mode import is shown below.
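A minimal sketch; the database link name source_db, the schema hr, and the TNS alias are assumptions, not taken from the text. No dump file is written; the directory is still needed for the log file:

SQL> CREATE DATABASE LINK source_db CONNECT TO hr IDENTIFIED BY hr_pwd USING 'SOURCE_TNS';
> impdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=source_db SCHEMAS=hr LOGFILE=net_imp.log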

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2, and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN, by Deepak. December 10, 2009

Clone database using RMAN

Target db: test
Clone db: clone

In the target database:

1. Take a full backup using RMAN.

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation.  All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

      DBID
----------
1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters (a concrete sketch follows):

db_file_name_convert='<target db oradata path>','<clone db oradata path>'
log_file_name_convert='<target db oradata path>','<clone db oradata path>'
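For illustration only - the drive and directory names below are assumptions, not taken from the post - the two convert parameters in the clone's pfile might look like:

db_file_name_convert=('E:\oracle\oradata\test','E:\oracle\oradata\clone')
log_file_name_convert=('E:\oracle\oradata\test','E:\oracle\oradata\clone')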

3. Start the listener using the lsnrctl command, and then start up the clone DB in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status
SQL> ho lsnrctl stop
SQL> ho lsnrctl start

4connect rman

5rmangtconnect target syssystest(TARGET DB)

6 rmangtconnect auxiliary syssys

7 rmangtduplicate target database to lsquoclonersquo(CLONE DBNAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running...

SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8. The duplicate will run for a while; exit from RMAN and open the database using RESETLOGS.

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace.

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

      DBID
----------
1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak - Leave a comment December 9, 2009

Oracle 10g - Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use a Flash Recovery Area and OMF.

I. Before you get started:

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3. Test the STANDBY database creation in a test environment first before working on the production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$cd %ORACLE_HOME%\database
$orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$cd $ORACLE_HOME/dbs
$orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

     BYTES
----------
  52428800
  52428800
  52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$STANDBY_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;
- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;
(Note: replace <ORACLE_HOME> with your Oracle home path.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup
(Note: replace <ORACLE_HOME> with your Oracle home path.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create STANDBY controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, to the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations.
Create the adump, bdump, cdump and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora.
On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system: use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener.
$lsnrctl stop
$lsnrctl start

2) On the STANDBY server: use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener.
$lsnrctl stop
$lsnrctl start

9. Create Oracle Net service names (a sample tnsnames.ora sketch follows below).
1) On the PRIMARY system: use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY

2) On the STANDBY system: use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY
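For reference, a minimal sketch of what the two service name entries might look like in tnsnames.ora on each server; the host names prim-host and stdby-host and the port 1521 are assumptions for illustration:

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prim-host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stdby-host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )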

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID (see the sketch below).
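A quick sketch, assuming the standby instance is named STANDBY; the Oracle home paths shown are assumptions and should be replaced with your own:

- On Windows:
C:\> set ORACLE_SID=STANDBY
C:\> set ORACLE_HOME=E:\oracle\product\10.2.0\db_1

- On UNIX:
$ export ORACLE_SID=STANDBY
$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1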

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: replace <ORACLE_HOME> with your Oracle home path.)

12. Start Redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed STANDBY database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed STANDBY database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed STANDBY database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month (see the sketch after this step):
RMAN> delete backupset;
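If you prefer to keep a fixed window of archivelog backups instead of removing them all, a small RMAN command file along these lines could be scheduled instead; the 30-day window and the file name are assumptions, not part of the original procedure:

-- archlog_housekeeping.rcv (hypothetical name), run weekly against the standby
-- back up all archived logs and remove the originals once they are in a backup set
BACKUP ARCHIVELOG ALL DELETE INPUT;
-- purge archivelog backups older than the assumed 30-day retention window
DELETE NOPROMPT BACKUP OF ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-30';

It would be invoked with something like: rman target / cmdfile=archlog_housekeeping.rcv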

3. Password management.
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database. An example orapwd command is sketched below.
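For example, after a SYS password change on PRIMARY, the standby password file could be rebuilt with orapwd; the file name follows the convention used above and the password shown is a placeholder:

-- On the STANDBY server (Windows: %ORACLE_HOME%\database, UNIX: $ORACLE_HOME/dbs)
$ orapwd file=pwdSTANDBY.ora password=new_sys_password force=y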


--> OLAP

--> Oracle Data Mining

WARNING: Listed option(s) must be installed with Oracle Database 10.1

Update Parameters: [Update Oracle Database 10.1 init.ora or spfile]
-------------------------------------------------------------------
WARNING: --> "shared_pool_size" needs to be increased to at least "150944944"
--> "pga_aggregate_target" is already at "25165824"; calculated new value is "25165824"
--> "large_pool_size" is already at "8388608"; calculated new value is "8388608"
WARNING: --> "java_pool_size" needs to be increased to at least "50331648"

Deprecated Parameters: [Update Oracle Database 10.1 init.ora or spfile]
-----------------------------------------------------------------------
-- No deprecated parameters found. No changes are required.

Obsolete Parameters: [Update Oracle Database 10.1 init.ora or spfile]
---------------------------------------------------------------------
--> "hash_join_enabled"
--> "log_archive_start"

Components: [The following database components will be upgraded or installed]
------------------------------------------------------------------------------
--> Oracle Catalog Views [upgrade] VALID
--> Oracle Packages and Types [upgrade] VALID
--> JServer JAVA Virtual Machine [upgrade] VALID
...The 'JServer JAVA Virtual Machine' JAccelerator (NCOMP)
...is required to be installed from the 10g Companion CD.
...
--> Oracle XDK for Java [upgrade] VALID
--> Oracle Java Packages [upgrade] VALID
--> Oracle XML Database [upgrade] VALID
--> Oracle Workspace Manager [upgrade] VALID
--> Oracle Data Mining [upgrade]
--> OLAP Analytic Workspace [upgrade]
--> OLAP Catalog [upgrade]
--> Oracle OLAP API [upgrade]
--> Oracle interMedia [upgrade]
...The 'Oracle interMedia Image Accelerator' is
...required to be installed from the 10g Companion CD.
...
--> Spatial [upgrade]
--> Oracle Text [upgrade] VALID
--> Oracle Ultra Search [upgrade] VALID

SYSAUX Tablespace: [Create tablespace in the Oracle Database 10.1 environment]
-------------------------------------------------------------------------------
--> New "SYSAUX" tablespace
... minimum required size for database upgrade: 500 MB
Please create the new SYSAUX Tablespace AFTER the Oracle Database
10.1 server is started and BEFORE you invoke the upgrade script.

Oracle Database 10g Changes in Default Behavior

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdash

This page describes some of the changes in the behavior of Oracle

Database 10g from that of previous releases In some cases the

default values of some parameters have changed In other cases

new behaviorsrequirements have been introduced that may affect

current scripts or applications More detailed information is in

the documentation

SQL OPTIMIZER

The Cost Based Optimizer (CBO) is now enabled by default

Rule-based optimization is not supported in 10g (setting

OPTIMIZER_MODE to RULE or CHOOSE is not supported) See Chapter

12, "Introduction to the Optimizer", in the Oracle Database

Performance Tuning Guide

Collection of optimizer statistics is now performed by default

automatically for all schemas (including SYS) for pre-existing

databases upgraded to 10g and for newly created 10g databases

Gathering optimizer statistics on stale objects is scheduled by

default to occur daily during the maintenance window See

Chapter 15, "Managing Optimizer Statistics", in the Oracle Performance

Tuning Guide

See the Oracle Database Upgrade Guide for changes in behavior

for the COMPUTE STATISTICS clause of CREATE INDEX and for

behavior changes in SKIP_UNUSABLE_INDEXES

UPGRADEDOWNGRADE

After upgrading to 10g the minimum supported release to

downgrade to is Oracle 9i R2 release 9203 (or later) and the

minimum value for COMPATIBLE is 920 The only supported

downgrade path is for those users who have kept COMPATIBLE=920

and have an installed 9i R2 (release 9203 or later)

executable Users upgrading to 10g from prior releases (such as

Oracle 8 Oracle 8i or 9iR1) cannot downgrade to 9i R2 unless

they first install 9i R2. When upgrading to 10g, by default the

database will remain at 9i R2 file format compatibility so the

on disk structures that 10g writes are compatible with 9i R2

structures this makes it possible to downgrade to 9i R2 Once

file format compatibility has been explicitly advanced to 10g

(using COMPATIBLE=10xx) it is no longer possible to downgrade

See the Oracle Database Upgrade Guide

A SYSAUX tablespace is created upon upgrade to 10g The SYSAUX

tablespace serves as an auxiliary tablespace to the SYSTEM

tablespace Because it is the default tablespace for many Oracle

features and products that previously required their own

tablespaces it reduces the number of tablespaces required by

Oracle that you as a DBA must maintain

MANAGEABILITY

Database performance statistics are now collected by the

Automatic Workload Repository (AWR) database component

automatically upon upgrade to 10g and also for newly created 10g

databases This data is stored in the SYSAUX tablespace and is

used by the database for automatic generation of performance

recommendations. See Chapter 5, "Automatic Performance

Statistics", in the Oracle Database Performance Tuning Guide.

If you currently use Statspack for performance data gathering

see section 1 of the Statspack readme (spdoc.txt in the RDBMS

ADMIN directory) for directions on using Statspack in 10g to

avoid conflict with the AWR

MEMORY

Automatic PGA Memory Management is now enabled by default

(unless PGA_AGGREGATE_TARGET is explicitly set to 0 or

WORKAREA_SIZE_POLICY is explicitly set to MANUAL)

PGA_AGGREGATE_TARGET is defaulted to 20 of the SGA size unless

explicitly set Oracle recommends tuning the value of

PGA_AGGREGATE_TARGET after upgrading See Chapter 14 of the

Oracle Database Performance Tuning Guide

Previously the number of SQL cursors cached by PLSQL was

determined by OPEN_CURSORS In 10g the number of cursors cached

is determined by SESSION_CACHED_CURSORS See the Oracle Database

Reference manual

SHARED_POOL_SIZE must increase to include the space needed for

shared pool overhead

The default value of DB_BLOCK_SIZE is operating system

specific but is typically 8KB (was typically 2KB in previous

releases)

TRANSACTIONSPACE

Dropped objects are now moved to the recycle bin where the

space is only reused when it is needed. This allows 'undropping'

a table using the FLASHBACK DROP feature. See Chapter 14 of the

Oracle Database Administrator's Guide.

Auto tuning undo retention is on by default For more

information, see Chapter 10, "Managing the Undo Tablespace", in

the Oracle Database Administrator's Guide.

CREATE DATABASE

In addition to the SYSTEM tablespace a SYSAUX tablespace is

always created at database creation and upon upgrade to 10g The

SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM

tablespace Because it is the default tablespace for many Oracle

features and products that previously required their own

tablespaces it reduces the number of tablespaces required by

Oracle that you, as a DBA, must maintain. See Chapter 2,

"Creating a Database", in the Oracle Database Administrator's

Guide

In 10g by default all new databases are created with 10g file

format compatibility This means you can immediately use all the

10g features Once a database uses 10g compatible file formats

it is not possible to downgrade this database to prior releases

Minimum and default logfile sizes are larger Minimum is now 4

MB default is 50MB unless you are using Oracle Managed Files

(OMF) when it is 100 MB

PLSQL procedure successfully completed

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\oradata\test\archive

Oldest online log sequence 91

Next log sequence to archive 93

Current log sequence 93

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt exit

Back up the complete database (cold backup).

Step 2:

Check the space needed, stop the listener, and delete the SID.

CDocuments and SettingsAdministratorgtset oracle_sid=test

CDocuments and SettingsAdministratorgtsqlplus nolog

SQLPlus Release 92010 ndash Production on Sat Aug 22 213652 2009

Copyright (c) 1982 2002 Oracle Corporation All rights reserved

SQL> conn / as sysdba

Connected to an idle instance

SQLgt startup

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

Database mounted

Database opened

SQLgt desc sm$ts_avail

Name Null Type

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashndash mdashmdashmdashmdashmdashmdashmdashmdashmdash-

TABLESPACE_NAME VARCHAR2(30)

BYTES NUMBER

SQL> select * from sm$ts_avail;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 20971520

DRSYS 20971520

EXAMPLE 155975680

INDX 26214400

ODM 20971520

SYSTEM 419430400

TOOLS 10485760

UNDOTBS1 209715200

USERS 26214400

XDB 39976960

10 rows selected

SQL> select * from sm$ts_used;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 9764864

DRSYS 10092544

EXAMPLE 155779072

ODM 9699328

SYSTEM 414908416

TOOLS 6291456

UNDOTBS1 9814016

XDB 39714816

8 rows selected

SQL> select * from sm$ts_free;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 11141120

DRSYS 10813440

EXAMPLE 131072

INDX 26148864

ODM 11206656

SYSTEM 4456448

TOOLS 4128768

UNDOTBS1 199753728

USERS 26148864

XDB 196608

10 rows selected

SQLgt ho LSNRCTL

LSNRCTLgt start

Starting tnslsnr please waithellip

Failed to open service ltOracleoracleTNSListenergt error 1060

TNSLSNR for 32-bit Windows Version 92010 ndash Production

System parameter file is Coracleora92networkadminlistenerora

Log messages written to Coracleora92networkloglistenerlog

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220000

Uptime 0 days 0 hr 0 min 16 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File Coracleora92networkadminlistenerora

Listener Log File Coracleora92networkloglistenerlog

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summary...

Service "TEST" has 1 instance(s).

  Instance "TEST", status UNKNOWN, has 1 handler(s) for this service...

The command completed successfully

LSNRCTLgt stop

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

LSNRCTLgt start

Starting tnslsnr please waithellip

TNSLSNR for 32-bit Windows Version 92010 ndash Production

System parameter file is Coracleora92networkadminlistenerora

Log messages written to Coracleora92networkloglistenerlog

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220048

Uptime 0 days 0 hr 0 min 0 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File Coracleora92networkadminlistenerora

Listener Log File Coracleora92networkloglistenerlog

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summary...

Service "TEST" has 1 instance(s).

  Instance "TEST", status UNKNOWN, has 1 handler(s) for this service...

The command completed successfully

LSNRCTLgt exit

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt exit

Disconnected from Oracle9i Enterprise Edition Release 92010 ndash Production

With the Partitioning OLAP and Oracle Data Mining options

JServer Release 92010 ndash Production

CDocuments and SettingsAdministratorgtlsnrctl stop

LSNRCTL for 32-bit Windows Version 92010 ndash Production on 22-AUG-2009 220314

copyright (c) 1991 2002 Oracle Corporation All rights reserved

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

CDocuments and SettingsAdministratorgtoradim -delete -sid test

Step 3:

Install the Oracle 10g software in a different Oracle home.

Start the DB with the 10g instance and begin the upgrade process.

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649';

File created

SQLgt shut immediate

ORA-01507 database not mounted

ORACLE instance shut down

SQLgt startup upgrade

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

ORA-01990: error opening password file (create password file)

SQL> conn / as sysdba

Connected.

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace creation script shown below.)

create tablespace SYSAUX datafile 'sysaux01.dbf'
size 70M reuse
extent management local
segment space management auto
online;

Tablespace created.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database server version is not correct for this script

DOCgt Shutdown ABORT and use a different script or a different server

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database has not been opened for UPGRADE

DOCgt

DOCgt Perform a ldquoSHUTDOWN ABORTrdquo and

DOCgt restart using UPGRADE

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statements will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the SYSAUX tablespace does not exist or is not

DOCgt ONLINE for READ WRITE PERMANENT EXTENT MANAGEMENT LOCAL and

DOCgt SEGMENT SPACE MANAGEMENT AUTO

DOCgt

DOCgt The SYSAUX tablespace is used in 101 to consolidate data from

DOCgt a number of tablespaces that were separate in prior releases

DOCgt Consult the Oracle Database Upgrade Guide for sizing estimates

DOCgt

DOCgt Create the SYSAUX tablespace for example

DOCgt

DOCgt create tablespace SYSAUX datafile lsquosysaux01dbfrsquo

DOCgt size 70M reuse

DOCgt extent management local

DOCgt segment space management auto

DOCgt online

DOCgt

DOCgt Then rerun the u0902000sql script

DOCgt

DOCgt

DOCgt

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered

Session altered

The script will run for a time that depends on the size of the database...

All packages, scripts and synonyms will be upgraded.

At last it will show the message as follows:

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

1 row selected

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP DBUPG_END 2009-08-22 225909

1 row selected

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT()

mdashmdashmdash-

776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PLSQL procedure successfully completed

Oracle Database 10.1 Upgrade Status Tool 22-AUG-2009 11:18:36

--> Oracle Database Catalog Views Normal successful completion
--> Oracle Database Packages and Types Normal successful completion
--> JServer JAVA Virtual Machine Normal successful completion
--> Oracle XDK Normal successful completion
--> Oracle Database Java Packages Normal successful completion
--> Oracle XML Database Normal successful completion
--> Oracle Workspace Manager Normal successful completion
--> Oracle Data Mining Normal successful completion
--> OLAP Analytic Workspace Normal successful completion
--> OLAP Catalog Normal successful completion
--> Oracle OLAP API Normal successful completion
--> Oracle interMedia Normal successful completion
--> Spatial Normal successful completion
--> Oracle Text Normal successful completion
--> Oracle Ultra Search Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_BGN 2009-08-22 231907

1 row selected

PLSQL procedure successfully completed

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_END 2009-08-22 232013

1 row selected

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects where status='INVALID';

COUNT()

mdashmdashmdash-

0

1 row selected

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod
PL/SQL Release 10.1.0.2.0 - Production
CORE 10.1.0.2.0 Production
TNS for 32-bit Windows: Version 10.1.0.2.0 - Production
NLSRTL Version 10.1.0.2.0 - Production

5 rows selected

Check the database to verify that everything is working fine.


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host by Deepak - 3 Comments February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink ID 732624.1

hi

Just wanted to share this topic

How to duplicate a database without connecting to the target database, using backups taken from RMAN, on an alternate host.

Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as of production>

Create an init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored> (a minimal sketch follows below).
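As an illustration only, a minimal init.ora for the restore instance might look like this; the database name PROD and the control file path are assumptions, not values from the note:

# initPROD.ora - minimal parameters for restoring on the alternate host (hypothetical values)
db_name=PROD
control_files='/u01/restore/PROD/control01.ctl'
# add memory and dump-destination parameters as appropriate for your environment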

2) startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backuppiece of controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount".
Make sure the backup pieces are in the same location they were in on the production DB. If you don't have the same location, make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged using the command:

RMAN> catalog start with '<path where backuppieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command inside a RUN block, as below:

run {
  set newname for datafile 1 to 'newLocation/system.dbf';
  set newname for datafile 2 to 'newLocation/undotbs.dbf';
  ...
  restore database;
  switch datafile all;
}
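The excerpt stops at the restore; in practice the database still has to be recovered and opened with RESETLOGS once the datafiles are back. Roughly, and depending on how far your archived logs go (a SET UNTIL clause may be needed), the finishing steps would look like:

RMAN> recover database;
SQL> alter database open resetlogs;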


Features introduced in the various Oracle server releases

Filed under: Features Of Various release of Oracle Database by Deepak - Leave a comment February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (1020) ndash September 2005

Transparent Data Encryption Async commits CONNECT ROLE can not only connect Passwords for DB Links are encrypted New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (1010)

Grid computing ndash an extension of the clustering feature (Real Application Clusters) Manageability improvements (self-tuning features)

Performance and scalability improvements Automated Storage Management (ASM) Automatic Workload Repository (AWR) Automatic Database Diagnostic Monitor (ADDM) Flashback operations available on row transaction table or database level Ability to UNDROP a table from a recycle bin Ability to rename tablespaces Ability to transport tablespaces across machine types (Eg Windows to Unix) New lsquodrop databasersquo statement New database scheduler ndash DBMS_SCHEDULER DBMS_FILE_TRANSFER Package Support for bigfile tablespaces that is up to 8 Exabytes in size Data Pump ndash faster data movement with expdp and impdp

Oracle 9i Release 2 (920)

Locally Managed SYSTEM tablespaces Oracle Streams ndash new data sharingreplication feature (can potentially replace Oracle

Advance Replication and Standby Databases) XML DB (Oracle is now a standards compliant XML database) Data segment compression (compress keys in tables ndash only when loading data) Cluster file system for Windows and Linux (raw devices are no longer required) Create logical standby databases with Data Guard Java JDK 13 used inside the database (JVM) Oracle Data Guard Enhancements (SQL Apply mode ndash logical copy of primary database

automatic failover Security Improvements ndash Default Install Accounts locked VPD on synonyms AES

Migrate Users to Directory

Oracle 9i Release 1 (901) ndash June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU) Using SMU Oracle will create itrsquos own ldquoRollback Segmentsrdquo and size them automatically without any DBA involvement

Flashback query (dbms_flashbackenable) ndash one can query data as it looked at some point in the past This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore

Use Oracle Ultra Search for searching databases file systems etc The UltraSearch crawler fetch data and hand it to Oracle Text to be indexed

Oracle Nameserver is still available but deprecate in favour of LDAP Naming (using the Oracle Internet Directory Server) A nameserver proxy is provided for backwards compatibility as pre-8i client cannot resolve names from an LDAP server

Oracle Parallel Serverrsquos (OPS) scalability was improved ndash now called Real Application Clusters (RAC) Full Cache Fusion implemented Any application can scale in a database cluster Applications doesnrsquot need to be cluster aware anymore

The Oracle Standby DB feature renamed to Oracle Data Guard New Logical Standby databases replay SQL on standby site allowing the database to be used for normal read write operations The Data Guard Broker allows single step fail-over when disaster strikes

Scrolling cursor support Oracle9i allows fetching backwards in a result set Dynamic Memory Management ndash Buffer Pools and shared pool can be resized on-the-fly

This eliminates the need to restart the database each time parameter changes were made On-line table and index reorganization VI (Virtual Interface) protocol support an alternative to TCPIP available for use with

Oracle Net (SQLNet) VI provides fast communications between components in a cluster

Build in XML Developers Kit (XDK) New data types for XML (XMLType) URIrsquos etc XML integrated with AQ

Cost Based Optimizer now also consider memory and CPU not only disk access cost as before

PLSQL programs can be natively compiled to binaries Deep data protection ndash fine grained security and auditing Put security on DB level SQL

access do not mean unrestricted access Resumable backups and statements ndash suspend statement instead of rolling back

immediately List Partitioning ndash partitioning on a list of values ETL (eXtract transformation load) Operations ndash with external tables and pipelining OLAP ndash Express functionality included in the DB Data Mining ndash Oracle Darwinrsquos features included in the DB

Oracle 8i (817)

Static HTTP server included (Apache) JVM Accelerator to improve performance of Java code Java Server Pages (JSP) engine MemStat ndash A new utility for analyzing Java Memory footprints OIS ndash Oracle Integration Server introduced PLSQL Gateway introduced for deploying PLSQL based solutions on the Web Enterprise Manager Enhancements ndash including new HTML based reporting and

Advanced Replication functionality included New Database Character Set Migration utility included

Oracle 8i (816)

PLSQL Server Pages (PSPrsquos) DBA Studio Introduced Statspack New SQL Functions (rank moving average) ALTER FREELISTS command (previously done by DROPCREATE TABLE) Checksums always on for SYSTEM tablespace allowing many possible corruptions to be

fixed before writing to disk

XML Parser for Java New PLSQL encryptdecrypt package introduced User and Schemas separated Numerous Performance Enhancements

Oracle 8i (815)

Fast Start recovery ndash Checkpoint rate auto-adjusted to meet roll forward criteria Reorganize indexesindex only tables which users accessing data ndash Online index rebuilds Log Miner introduced ndash Allows on-line or archived redo logs to be viewed via SQL OPS Cache Fusion introduced avoiding disk IO during cross-node communication Advanced Queueing improvements (security performance OO4O support User Security Improvements ndash more centralisation single enterprise user usersroles

across multiple databases Virtual private database JAVA stored procedures (Oracle Java VM) Oracle iFS Resource Management using priorities ndash resource classes Hash and Composite partitioned table types SQLLoader direct load API Copy optimizer statistics across databases to ensure same access paths across different

environments Standby Database ndash Auto shipping and application of redo logs Read Only queries on

standby database allowed Enterprise Manager v2 delivered NLS ndash Euro Symbol supported Analyze tables in parallel Temporary tables supported Net8 support for SSL HTTP HOP protocols Transportable tablespaces between databases Locally managed tablespaces ndash automatic sizing of extents elimination of tablespace

fragmentation tablespace information managed in tablespace (ie moved from data dictionary) improving tablespace reliability

Drop Column on table (Finally ) DBMS_DEBUG PLSQL package DBMS_SQL replaced by new EXECUTE

IMMEDIATE statement Progress Monitor to track long running DML DDL Functional Indexes ndash NLS case insensitive descending

Oracle 80 ndash June 1997

Object Relational database Object Types (not just date character number as in v7 SQL3 standard Call external procedures LOB gt1 per table

Partitioned Tables and Indexes exportimport individual partitions partitions in multiple tablespaces Onlineoffline backuprecover individual partitions mergebalance partitions Advanced Queuing for message handling Many performance improvements to SQLPLSQLOCI making more efficient use of

CPUMemory V7 limits extended (eg 1000 columnstable 4000 bytes VARCHAR2) Parallel DML statements Connection Pooling ( uses the physical connection for idle users and transparently re-

establishes the connection when needed) to support more concurrent users Improved ldquoSTARrdquo Query optimizer Integrated Distributed Lock Manager in Oracle PS (as opposed to Operating system DLM

in v7) Performance improvements in OPS ndash global V$ views introduced across all instances

transparent failover to a new node Data Cartridges introduced on database (eg image video context time spatial) BackupRecovery improvements ndash Tablespace point in time recovery incremental

backups parallel backuprecovery Recovery manager introduced Security Server introduced for central user administration User password expiry

password profiles allow custom password scheme Privileged database links (no need for password to be stored)

Fast Refresh for complex snapshots parallel replication PLSQL replication code moved in to Oracle kernel Replication manager introduced

Index Organized tables Deferred integrity constraint checking (deferred until end of transaction instead of end of

statement) SQLNet replaced by Net8 Reverse Key indexes Any VIEW updateable New ROWID format

Oracle 73

Partitioned Views Bitmapped Indexes Asynchronous read ahead for table scans Standby Database Deferred transaction recovery on instance startup Updatable Join Views (with restrictions) SQLDBA no longer shipped Index rebuilds db_verify introduced Context Option Spatial Data Option Tablespaces changes ndash Coalesce Temporary Permanent

Trigger compilation debug Unlimited extents on STORAGE clause Some initora parameters modifiable ndash TIMED_STATISTICS HASH Joins Antijoins Histograms Dependencies Oracle Trace Advanced Replication Object Groups PLSQL ndash UTL_FILE

Oracle 72

Resizable autoextend data files Shrink Rollback Segments manually Create table index UNRECOVERABLE Subquery in FROM clause PLSQL wrapper PLSQL Cursor variables Checksums ndash DB_BLOCK_CHECKSUM LOG_BLOCK_CHECKSUM Parallel create table Job Queues ndash DBMS_JOB DBMS_SPACE DBMS Application Info Sorting Improvements ndash SORT_DIRECT_WRITES

Oracle 71

ANSIISO SQL92 Entry Level Advanced Replication ndash Symmetric Data replication Snapshot Refresh Groups Parallel Recovery Dynamic SQL ndash DBMS_SQL Parallel Query Options ndash query index creation data loading Server Manager introduced Read Only tablespaces

Oracle 70 ndash June 1992

Database Integrity Constraints (primary foreign keys check constraints default values) Stored procedures and functions procedure packages Database Triggers View compilation User defined SQL functions Role based security Multiple Redo members ndash mirrored online redo log files Resource Limits ndash Profiles

Much enhanced Auditing Enhanced Distributed database functionality ndash INSERTS UPDATESDELETES 2PC Incomplete database recovery (eg SCN) Cost based optimiser TRUNCATE tables Datatype changes (ie VARCHAR2 CHAR VARCHAR) SQLNet v2 MTS Checkpoint process Data replication ndash Snapshots

Oracle 62

Oracle Parallel Server

Oracle 6 ndash July 1988

Row-level locking On-line database backups PLSQL in the database

Oracle 51

Distributed queries

Oracle 50 ndash 1986

Supporting for the Client-Server model ndash PCrsquos can access the DB on remote host

Oracle 4 ndash 1984

Read consistency

Oracle 3 ndash 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)

Nonblocking queries (no more read locks) Re-written in the C Programming Language

Oracle 2 ndash 1979

First public release Basic SQL functionality queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh by Deepak - 1 Comment December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh - before exporting:

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total number of objects from the above query.
3. Write dynamic queries as below (a spool sketch follows after this list):
4. select 'grant ' || privilege ||' to sh' from session_privs;
5. select 'grant ' || role ||' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH'
group by tablespace_name;
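A compact way to capture those grants into a script, as a sketch (the file name is arbitrary); run it while connected as the user being refreshed, since session_privs and session_roles describe the current session:

SQL> set pages 0 feedback off
SQL> spool sh_grants.sql
SQL> select 'grant ' || privilege || ' to sh;' from session_privs;
SQL> select 'grant ' || role || ' to sh;' from session_roles;
SQL> spool off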

Export the 'SH' schema:

exp username/password file='location\sh_bkp.dmp' log='location\sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file='location\sh_bkp.dmp' log='location\sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema:

Take the schema full export as shown above.

Drop all the objects in the 'SH' schema.

To drop all the objects in the schema:

Connect as the schema owner.

Spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts; all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file='location\sh_bkp.dmp' log='location\sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema.

To truncate all the objects in the schema:

Connect as the schema owner.

Spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts; all the objects will be truncated.

Disabling the reference constraints:

If there is any constraint violation while truncating, use the query below to find the referencing (foreign key) constraints and disable them. Spool the output of the query below and run the script.

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');

Importing the 'SH' schema:

imp username/password file='location\sh_bkp.dmp' log='location\sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user:

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> Drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option; a sketch follows below.
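For instance, to load the same dump into a hypothetical SH_TEST schema (the target schema name here is an assumption for illustration):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh remap_schema=sh:sh_test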

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING by Deepak - Leave a comment December 15, 2009

CRON JOB SCHEDULING - IN UNIX

To run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day.

The time at which the scripts in those system-wide directories run is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l        # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this:

* * * * *  Command to be executed
- - - - -
| | | | |
| | | | +----- Day of week (0-7)
| | | +------- Month (1 - 12)
| | +--------- Day of month (1 - 31)
| +----------- Hour (0 - 23)
+------------- Min (0 - 59)

(Each of the first five fields contains only numbers, however they can be left as '*' characters to signify that any value is acceptable.)

Now that wersquove seen the structure we should try to ru na couple of examples

To edit your crontabe file run

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary) When you save the file and quit your editor it will be installed into the system unless it is found to contain errors

If you wish to change the editor used to edit the file set the EDITOR environmental variable like this

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null - meaning you won't see it.

Now we'll finish with some more examples.

# Run the `something` command every hour, on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4 AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4 AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registrydatabase

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a Scheduled Task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule them, the job won't run. So edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak - 1 Comment. December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I Before Switchover

1. As I always recommend, test the switchover first on your test systems before working on production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary and standby databases to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.
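For reference, a minimal sketch of enabling real-time apply on the standby; the same command appears in the standby configuration article later in this document:

SQL> alter database recover managed standby database using current logfile disconnect;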

II Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak - Leave a comment. December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and which provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU-intensive operations. Furthermore, additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although the ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production

With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production

With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp

Estimate in progress using BLOCKS method...

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 16 KB

Processing object type TABLE_EXPORT/TABLE/TABLE

. . exported "DP"."EMP"    6.25 KB    3 rows

ORA-39173: Encrypted data has been stored unencrypted in dump file set

Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded

Dump file set for DP.SYS_EXPORT_TABLE_01 is:

/ade/jkaloger_lx9/oracle/work/emp.dmp

Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production

With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted

ORA-29341: The transportable set is not self-contained

Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an 'ORA-28365: wallet is not open' error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production

With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted

ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported

Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production

With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments

ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production

With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded

Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append

Processing object type TABLE_EXPORT/TABLE/TABLE

ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:

ORA-02354: error in exporting/importing data

ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table

Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.
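As a rough illustration only (a sketch, not part of the whitepaper; the algorithm choice and the "required" setting are assumptions), network data encryption is typically switched on through sqlnet.ora entries on the server and client such as:

SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES128)
SQLNET.ENCRYPTION_CLIENT = required
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES128)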

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed (a sample command for the first case follows this list):

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns
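A minimal sketch of the first case, reusing the emp.dmp dump file from the earlier examples; no ENCRYPTION_PASSWORD is supplied because only metadata (and hence no encrypted column data) is processed:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp FULL=y CONTENT=METADATA_ONLY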

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak - Leave a comment. December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called DATA_PUMP_DIR is provided. The default DATA_PUMP_DIR is available only to privileged users unless access is granted by the DBA.
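To see which directory objects already exist and where they point (a quick data-dictionary check, not part of the original article), you can query:

SQL> select directory_name, directory_path from dba_directories;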

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file (a small sketch follows).
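For illustration only (a hedged sketch; the hr schema, dump file name, and the chosen EXCLUDE values are assumptions, not from the original article), an export that skips indexes and statistics might look like:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_no_idx.dmp SCHEMAS=hr EXCLUDE=INDEX EXCLUDE=STATISTICS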

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa); a short sketch of doing this follows.
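A minimal sketch, assuming a running export job named hr (matching the JOB_NAME used in the PARALLEL example further below): attach to the job and change its degree of parallelism from the interactive prompt.

> expdp username/password ATTACH=hr
Export> PARALLEL=8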

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.
• Add more dump files if there is insufficient disk space for an export file.
• Change the default size of the dump files.
• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).
• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.
• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).
• Attach to a job from a remote site (such as from home) to monitor status.

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database; a small sketch follows.
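For illustration only (a hedged sketch; the database link name source_db and the table choice are assumptions, not from the original article), a network-mode import that pulls a table directly from a remote database without an intermediate dump file might look like:

> impdp username/password TABLES=scott.emp DIRECTORY=dpump_dir1 NETWORK_LINK=source_db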

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak - Leave a comment. December 10, 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1. Take a full backup using RMAN.

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination            C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete

SQL> select name from v$database

NAME

---------

TEST

SQL> select dbid from v$database

DBID

----------

1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for a database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='target db oradata path','clone db oradata path'
log_file_name_convert='target db oradata path','clone db oradata path'
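For instance, a sketch only (the clone's oradata path is an assumption based on the TEST paths shown in the backup output above):

db_file_name_convert='C:\ORACLE\ORADATA\TEST','C:\ORACLE\ORADATA\CLONE'
log_file_name_convert='C:\ORACLE\ORADATA\TEST','C:\ORACLE\ORADATA\CLONE'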

3. Start the listener using the lsnrctl command, and then start the clone DB in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running...

SQLgt select name from v$database

select name from v$database

ERROR at line 1:

ORA-01507: database not mounted

SQLgt ho rman

SQL> alter database mount

alter database mount

ERROR at line 1:

ORA-01100: database already mounted

8. It will run for a while; when it completes, exit from RMAN and open the database using RESETLOGS.

SQL> alter database open resetlogs

Database altered

9. Check the DBID.

10. Create a temporary tablespace.
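A minimal sketch (the tablespace name, file path and size are assumptions, not from the original post):

SQL> create temporary tablespace temp01 tempfile 'C:\ORACLE\ORADATA\CLONE\temp01.dbf' size 200m autoextend on;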

SQL> select name from v$database

NAME

---------

CLONE

SQL> select dbid from v$database

DBID

----------

1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak - Leave a comment. December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use a Flash Recovery Area and OMF.

I Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first before working on the production database.

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='...\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '...'.)

- On UNIX:
SQL> create pfile='.../dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '...'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create the spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE. Create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='...\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='...\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace '...'.)

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='.../dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='.../dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace '...'.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for dump and archived log destinations. Create the adump, bdump, cdump and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services (a sample tnsnames.ora entry follows):
$ tnsping PRIMARY
$ tnsping STANDBY
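For illustration only (a hedged sketch; the host name and port are assumptions, not from the original post), a tnsnames.ora entry for the STANDBY service might look like:

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = STANDBY))
  )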

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='...\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='...\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='.../dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='.../dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: specify your Oracle home path to replace '...'.)

12. Start redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.
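A minimal sketch of recreating the standby password file with the new SYS password (the file name follows the convention used earlier; the password value is a placeholder):

$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdSTANDBY.ora password=new_sys_password force=y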


Components [The following database components will be upgraded or installed]

------------------------------------------------------------

--> Oracle Catalog Views [upgrade] VALID
--> Oracle Packages and Types [upgrade] VALID
--> JServer JAVA Virtual Machine [upgrade] VALID
...The 'JServer JAVA Virtual Machine' JAccelerator (NCOMP)
...is required to be installed from the 10g Companion CD.
...
--> Oracle XDK for Java [upgrade] VALID
--> Oracle Java Packages [upgrade] VALID
--> Oracle XML Database [upgrade] VALID
--> Oracle Workspace Manager [upgrade] VALID
--> Oracle Data Mining [upgrade]
--> OLAP Analytic Workspace [upgrade]
--> OLAP Catalog [upgrade]
--> Oracle OLAP API [upgrade]
--> Oracle interMedia [upgrade]
...The 'Oracle interMedia Image Accelerator' is
...required to be installed from the 10g Companion CD.
...
--> Spatial [upgrade]
--> Oracle Text [upgrade] VALID
--> Oracle Ultra Search [upgrade] VALID

SYSAUX Tablespace [Create tablespace in the Oracle Database 10.1 environment]

------------------------------------------------------------

--> New "SYSAUX" tablespace

... minimum required size for database upgrade: 500 MB

Please create the new SYSAUX Tablespace AFTER the Oracle Database 10.1 server is started and BEFORE you invoke the upgrade script.

Oracle Database 10g Changes in Default Behavior

----------------------------------------------

This page describes some of the changes in the behavior of Oracle Database 10g from that of previous releases. In some cases the default values of some parameters have changed. In other cases new behaviors/requirements have been introduced that may affect current scripts or applications. More detailed information is in the documentation.

SQL OPTIMIZER

The Cost Based Optimizer (CBO) is now enabled by default. Rule-based optimization is not supported in 10g (setting OPTIMIZER_MODE to RULE or CHOOSE is not supported). See Chapter 12, "Introduction to the Optimizer", in the Oracle Database Performance Tuning Guide.

Collection of optimizer statistics is now performed by default

automatically for all schemas (including SYS) for pre-existing

databases upgraded to 10g and for newly created 10g databases

Gathering optimizer statistics on stale objects is scheduled by

default to occur daily during the maintenance window See

Chapter 15 ldquoManaging Optimizer Statisticsrdquo in Oracle Performance

Tuning Guide
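Should statistics need to be gathered manually for a particular schema after the upgrade, a sketch along the following lines can be used (the schema name SCOTT is only an example):

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, cascade => TRUE);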

See the Oracle Database Upgrade Guide for changes in behavior for the COMPUTE STATISTICS clause of CREATE INDEX and for behavior changes in SKIP_UNUSABLE_INDEXES.

UPGRADE/DOWNGRADE

After upgrading to 10g, the minimum supported release to downgrade to is Oracle 9i R2 release 9.2.0.3 (or later), and the minimum value for COMPATIBLE is 9.2.0. The only supported downgrade path is for those users who have kept COMPATIBLE=9.2.0 and have an installed 9i R2 (release 9.2.0.3 or later) executable. Users upgrading to 10g from prior releases (such as Oracle 8, Oracle 8i or 9i R1) cannot downgrade to 9i R2 unless they first install 9i R2. When upgrading to 10g, by default the database will remain at 9i R2 file format compatibility, so the on-disk structures that 10g writes are compatible with 9i R2 structures; this makes it possible to downgrade to 9i R2. Once file format compatibility has been explicitly advanced to 10g (using COMPATIBLE=10.x.x), it is no longer possible to downgrade. See the Oracle Database Upgrade Guide.

A SYSAUX tablespace is created upon upgrade to 10g. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle features and products that previously required their own tablespaces, it reduces the number of tablespaces required by Oracle that you, as a DBA, must maintain.

MANAGEABILITY

Database performance statistics are now collected by the Automatic Workload Repository (AWR) database component, automatically upon upgrade to 10g and also for newly created 10g databases. This data is stored in the SYSAUX tablespace and is used by the database for automatic generation of performance recommendations. See Chapter 5, "Automatic Performance Statistics", in the Oracle Database Performance Tuning Guide.

If you currently use Statspack for performance data gathering, see section 1 of the Statspack readme (spdoc.txt in the RDBMS ADMIN directory) for directions on using Statspack in 10g to avoid conflict with the AWR.

MEMORY

Automatic PGA Memory Management is now enabled by default (unless PGA_AGGREGATE_TARGET is explicitly set to 0 or WORKAREA_SIZE_POLICY is explicitly set to MANUAL). PGA_AGGREGATE_TARGET is defaulted to 20% of the SGA size, unless explicitly set. Oracle recommends tuning the value of PGA_AGGREGATE_TARGET after upgrading. See Chapter 14 of the Oracle Database Performance Tuning Guide.
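To see the current target and how much PGA memory is actually in use before re-tuning it, a quick check such as the following can help (names of the v$pgastat statistics are as documented for 10g):

SQL> SHOW PARAMETER pga_aggregate_target
SQL> SELECT name, round(value/1024/1024) AS mb
     FROM v$pgastat
     WHERE name IN ('aggregate PGA target parameter', 'total PGA allocated');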

Previously the number of SQL cursors cached by PL/SQL was determined by OPEN_CURSORS. In 10g the number of cursors cached is determined by SESSION_CACHED_CURSORS. See the Oracle Database Reference manual.

SHARED_POOL_SIZE must increase to include the space needed for shared pool overhead.

The default value of DB_BLOCK_SIZE is operating system specific, but is typically 8KB (it was typically 2KB in previous releases).

TRANSACTION/SPACE

Dropped objects are now moved to the recycle bin, where the space is only reused when it is needed. This allows 'undropping' a table using the FLASHBACK DROP feature. See Chapter 14 of the Oracle Database Administrator's Guide.
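For example, a dropped table can be recovered from the recycle bin as long as its space has not been reused (the table name here is purely illustrative):

SQL> DROP TABLE emp_test;
SQL> FLASHBACK TABLE emp_test TO BEFORE DROP;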

Auto tuning undo retention is on by default. For more information, see Chapter 10, "Managing the Undo Tablespace", in the Oracle Database Administrator's Guide.

CREATE DATABASE

In addition to the SYSTEM tablespace, a SYSAUX tablespace is always created at database creation and upon upgrade to 10g. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle features and products that previously required their own tablespaces, it reduces the number of tablespaces required by Oracle that you, as a DBA, must maintain. See Chapter 2, "Creating a Database", in the Oracle Database Administrator's Guide.

In 10g, by default all new databases are created with 10g file format compatibility. This means you can immediately use all the 10g features. Once a database uses 10g compatible file formats, it is not possible to downgrade this database to prior releases.

Minimum and default logfile sizes are larger. The minimum is now 4 MB, the default is 50 MB, unless you are using Oracle Managed Files (OMF), when it is 100 MB.

PL/SQL procedure successfully completed.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\oradata\test\archive
Oldest online log sequence     91
Next log sequence to archive   93
Current log sequence           93

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit

Back up the complete database (cold backup).

Step 2:

Check the space needed, stop the listener and delete the SID.

C:\Documents and Settings\Administrator>set oracle_sid=test
C:\Documents and Settings\Administrator>sqlplus /nolog

SQL*Plus: Release 9.2.0.1.0 - Production on Sat Aug 22 21:36:52 2009
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

SQL> conn / as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.

Total System Global Area  135338868 bytes
Fixed Size                   453492 bytes
Variable Size             109051904 bytes
Database Buffers           25165824 bytes
Redo Buffers                 667648 bytes
Database mounted.
Database opened.

SQL> desc sm$ts_avail
 Name                                      Null?    Type
 ----------------------------------------- -------- --------------
 TABLESPACE_NAME                                    VARCHAR2(30)
 BYTES                                              NUMBER

SQL> select * from sm$ts_avail;

TABLESPACE_NAME      BYTES
--------------- ----------
CWMLITE           20971520
DRSYS             20971520
EXAMPLE          155975680
INDX              26214400
ODM               20971520
SYSTEM           419430400
TOOLS             10485760
UNDOTBS1         209715200
USERS             26214400
XDB               39976960

10 rows selected.

SQL> select * from sm$ts_used;

TABLESPACE_NAME      BYTES
--------------- ----------
CWMLITE            9764864
DRSYS             10092544
EXAMPLE          155779072
ODM                9699328
SYSTEM           414908416
TOOLS              6291456
UNDOTBS1           9814016
XDB               39714816

8 rows selected.

SQL> select * from sm$ts_free;

TABLESPACE_NAME      BYTES
--------------- ----------
CWMLITE           11141120
DRSYS             10813440
EXAMPLE             131072
INDX              26148864
ODM               11206656
SYSTEM             4456448
TOOLS              4128768
UNDOTBS1         199753728
USERS             26148864
XDB                 196608

10 rows selected.

SQL> ho LSNRCTL

LSNRCTL> start
Starting tnslsnr: please wait...
Failed to open service <OracleoracleTNSListener>, error 1060.
TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
System parameter file is C:\oracle\ora92\network\admin\listener.ora
Log messages written to C:\oracle\ora92\network\log\listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
Start Date                22-AUG-2009 22:00:00
Uptime                    0 days 0 hr. 0 min. 16 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora
Listener Log File         C:\oracle\ora92\network\log\listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))
Services Summary...
Service "TEST" has 1 instance(s).
  Instance "TEST", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

LSNRCTL> stop
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))
The command completed successfully

LSNRCTL> start
Starting tnslsnr: please wait...
TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
System parameter file is C:\oracle\ora92\network\admin\listener.ora
Log messages written to C:\oracle\ora92\network\log\listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
Start Date                22-AUG-2009 22:00:48
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora
Listener Log File         C:\oracle\ora92\network\log\listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))
Services Summary...
Service "TEST" has 1 instance(s).
  Instance "TEST", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

LSNRCTL> exit

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

C:\Documents and Settings\Administrator>lsnrctl stop

LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 22-AUG-2009 22:03:14
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))
The command completed successfully

C:\Documents and Settings\Administrator>oradim -delete -sid test

Step 3:

Install the Oracle 10g software in a different Oracle Home.

Start the database with the 10g instance and begin the upgrade process.

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount
ORACLE instance started.

Total System Global Area  239075328 bytes
Fixed Size                   788308 bytes
Variable Size             212859052 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649';
File created.

SQL> shut immediate
ORA-01507: database not mounted
ORACLE instance shut down.

SQL> startup upgrade
ORACLE instance started.

Total System Global Area  239075328 bytes
Fixed Size                   788308 bytes
Variable Size             212859052 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes
ORA-01990: error opening password file (create password file)

SQL> conn / as sysdba
Connected.

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace creation script shown below.)

create tablespace SYSAUX datafile 'sysaux01.dbf'
size 70M reuse
extent management local
segment space management auto
online;

Tablespace created.
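Before invoking the upgrade script it can help to spool its output to a file for later review; a minimal sketch (the log file path is illustrative) is:

SQL> spool E:\test_upgrade.log
SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql
SQL> spool off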

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOC>
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database server version is not correct for this script.
DOC>   Shutdown ABORT and use a different script or a different server.
DOC>

no rows selected

DOC>
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database has not been opened for UPGRADE.
DOC>
DOC>   Perform a "SHUTDOWN ABORT" and
DOC>   restart using UPGRADE.
DOC>

no rows selected

DOC>
DOC>   The following statements will cause an "ORA-01722: invalid number"
DOC>   error if the SYSAUX tablespace does not exist or is not
DOC>   ONLINE for READ WRITE, PERMANENT, EXTENT MANAGEMENT LOCAL, and
DOC>   SEGMENT SPACE MANAGEMENT AUTO.
DOC>
DOC>   The SYSAUX tablespace is used in 10.1 to consolidate data from
DOC>   a number of tablespaces that were separate in prior releases.
DOC>   Consult the Oracle Database Upgrade Guide for sizing estimates.
DOC>
DOC>   Create the SYSAUX tablespace, for example:
DOC>
DOC>   create tablespace SYSAUX datafile 'sysaux01.dbf'
DOC>       size 70M reuse
DOC>       extent management local
DOC>       segment space management auto
DOC>       online;
DOC>
DOC>   Then rerun the u0902000.sql script.
DOC>

no rows selected
no rows selected
no rows selected
no rows selected
no rows selected

Session altered.
Session altered.

The script will run for a length of time that depends on the size of the database...

All packages/scripts/synonyms will be upgraded.

At the end it will show a message such as the following:

TIMESTAMP
--------------------------------------------------------------------------------

1 row selected.

PL/SQL procedure successfully completed.

COMP_ID   COMP_NAME                            STATUS   VERSION
--------- ------------------------------------ -------- ----------
CATALOG   Oracle Database Catalog Views        VALID    10.1.0.2.0
CATPROC   Oracle Database Packages and Types   VALID    10.1.0.2.0
JAVAVM    JServer JAVA Virtual Machine         VALID    10.1.0.2.0
XML       Oracle XDK                           VALID    10.1.0.2.0
CATJAVA   Oracle Database Java Packages        VALID    10.1.0.2.0
XDB       Oracle XML Database                  VALID    10.1.0.2.0
OWM       Oracle Workspace Manager             VALID    10.1.0.2.0
ODM       Oracle Data Mining                   VALID    10.1.0.2.0
APS       OLAP Analytic Workspace              VALID    10.1.0.2.0
AMD       OLAP Catalog                         VALID    10.1.0.2.0
XOQ       Oracle OLAP API                      VALID    10.1.0.2.0
ORDIM     Oracle interMedia                    VALID    10.1.0.2.0
SDO       Spatial                              VALID    10.1.0.2.0
CONTEXT   Oracle Text                          VALID    10.1.0.2.0
WK        Oracle Ultra Search                  VALID    10.1.0.2.0

15 rows selected.

DOC>
DOC>   The above query lists the SERVER components in the upgraded
DOC>   database, along with their current version and status.
DOC>
DOC>   Please review the status and version columns and look for
DOC>   any errors in the spool log file. If there are errors in the spool
DOC>   file, or any components are not VALID or not the current version,
DOC>   consult the Oracle Database Upgrade Guide for troubleshooting
DOC>   recommendations.
DOC>
DOC>   Next shutdown immediate, restart for normal operation, and then
DOC>   run utlrp.sql to recompile any invalid application objects.
DOC>

PL/SQL procedure successfully completed.

COMP_ID   COMP_NAME                            STATUS   VERSION
--------- ------------------------------------ -------- ----------
CATALOG   Oracle Database Catalog Views        VALID    10.1.0.2.0
CATPROC   Oracle Database Packages and Types   VALID    10.1.0.2.0
JAVAVM    JServer JAVA Virtual Machine         VALID    10.1.0.2.0
XML       Oracle XDK                           VALID    10.1.0.2.0
CATJAVA   Oracle Database Java Packages        VALID    10.1.0.2.0
XDB       Oracle XML Database                  VALID    10.1.0.2.0
OWM       Oracle Workspace Manager             VALID    10.1.0.2.0
ODM       Oracle Data Mining                   VALID    10.1.0.2.0
APS       OLAP Analytic Workspace              VALID    10.1.0.2.0
AMD       OLAP Catalog                         VALID    10.1.0.2.0
XOQ       Oracle OLAP API                      VALID    10.1.0.2.0
ORDIM     Oracle interMedia                    VALID    10.1.0.2.0
SDO       Spatial                              VALID    10.1.0.2.0
CONTEXT   Oracle Text                          VALID    10.1.0.2.0
WK        Oracle Ultra Search                  VALID    10.1.0.2.0

15 rows selected.

DOC>
DOC>   The above query lists the SERVER components in the upgraded
DOC>   database, along with their current version and status.
DOC>
DOC>   Please review the status and version columns and look for
DOC>   any errors in the spool log file. If there are errors in the spool
DOC>   file, or any components are not VALID or not the current version,
DOC>   consult the Oracle Database Upgrade Guide for troubleshooting
DOC>   recommendations.
DOC>
DOC>   Next shutdown immediate, restart for normal operation, and then
DOC>   run utlrp.sql to recompile any invalid application objects.
DOC>

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP DBUPG_END  2009-08-22 22:59:09

1 row selected.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area  239075328 bytes
Fixed Size                   788308 bytes
Variable Size             212859052 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes
Database mounted.
Database opened.

SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
       776

1 row selected.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PL/SQL procedure successfully completed.

Oracle Database 10.1 Upgrade Status Tool    22-AUG-2009 11:18:36

--> Oracle Database Catalog Views        Normal successful completion
--> Oracle Database Packages and Types   Normal successful completion
--> JServer JAVA Virtual Machine         Normal successful completion
--> Oracle XDK                           Normal successful completion
--> Oracle Database Java Packages        Normal successful completion
--> Oracle XML Database                  Normal successful completion
--> Oracle Workspace Manager             Normal successful completion
--> Oracle Data Mining                   Normal successful completion
--> OLAP Analytic Workspace              Normal successful completion
--> OLAP Catalog                         Normal successful completion
--> Oracle OLAP API                      Normal successful completion
--> Oracle interMedia                    Normal successful completion
--> Spatial                              Normal successful completion
--> Oracle Text                          Normal successful completion
--> Oracle Ultra Search                  Normal successful completion

No problems detected during upgrade.

PL/SQL procedure successfully completed.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2009-08-22 23:19:07

1 row selected.

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2009-08-22 23:20:13

1 row selected.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
         0

1 row selected.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod
PL/SQL Release 10.1.0.2.0 - Production
CORE    10.1.0.2.0      Production
TNS for 32-bit Windows: Version 10.1.0.2.0 - Production
NLSRTL Version 10.1.0.2.0 - Production

5 rows selected.

Check the database to confirm that everything is working fine.


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host, by Deepak – 3 Comments, February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database – from Metalink ID 7326241

Hi,

Just wanted to share this topic: how to duplicate a database without connecting to the target database, using backups taken from RMAN, on an alternate host.

Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as on production>.

Create the init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backup piece of the controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure that the backup pieces are in the same location where they were on the production database. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged using the command:

RMAN> catalog start with '<path where backup pieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:

run {
set newname for datafile 1 to '<newLocation>/system.dbf';
set newname for datafile 2 to '<newLocation>/undotbs.dbf';
...
restore database;
switch datafile all;
}
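Putting those steps together, a minimal end-to-end sketch (the SID PROD, the pfile path and the backup/datafile locations are illustrative assumptions, not the note's exact values) might look like:

$ export ORACLE_SID=PROD
$ rman target /
RMAN> startup nomount pfile='/u01/app/oracle/initPROD.ora';
RMAN> restore controlfile from '/backup/PROD_ctl.bkp';
RMAN> alter database mount;
RMAN> catalog start with '/backup/';
RMAN> run {
  set newname for datafile 1 to '/u02/oradata/PROD/system01.dbf';
  set newname for datafile 2 to '/u02/oradata/PROD/undotbs01.dbf';
  restore database;
  switch datafile all;
}

Once the restore completes, the database can be recovered with the cataloged archived logs (if any) and opened with RESETLOGS.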


Features introduced in the various Oracle server releases

Filed under: Features Of Various Releases of Oracle Database, by Deepak – Leave a comment, February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) – September 2005

- Transparent Data Encryption
- Async commits
- The CONNECT role can now only connect
- Passwords for DB links are encrypted
- New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (10.1.0)

- Grid computing – an extension of the clustering feature (Real Application Clusters)
- Manageability improvements (self-tuning features)
- Performance and scalability improvements
- Automated Storage Management (ASM)
- Automatic Workload Repository (AWR)
- Automatic Database Diagnostic Monitor (ADDM)
- Flashback operations available on row, transaction, table or database level
- Ability to UNDROP a table from a recycle bin
- Ability to rename tablespaces
- Ability to transport tablespaces across machine types (e.g. Windows to Unix)
- New 'drop database' statement
- New database scheduler – DBMS_SCHEDULER
- DBMS_FILE_TRANSFER package
- Support for bigfile tablespaces, that is, up to 8 Exabytes in size
- Data Pump – faster data movement with expdp and impdp

Oracle 9i Release 2 (9.2.0)

- Locally managed SYSTEM tablespaces
- Oracle Streams – new data sharing/replication feature (can potentially replace Oracle Advanced Replication and Standby Databases)
- XML DB (Oracle is now a standards-compliant XML database)
- Data segment compression (compress keys in tables – only when loading data)
- Cluster file system for Windows and Linux (raw devices are no longer required)
- Create logical standby databases with Data Guard
- Java JDK 1.3 used inside the database (JVM)
- Oracle Data Guard enhancements (SQL Apply mode – logical copy of primary database, automatic failover)
- Security improvements – default install accounts locked, VPD on synonyms, AES, Migrate Users to Directory

Oracle 9i Release 1 (9.0.1) – June 2001

- Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "rollback segments" and size them automatically without any DBA involvement.
- Flashback query (dbms_flashback.enable) – one can query data as it looked at some point in the past. This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore.
- Use Oracle Ultra Search for searching databases, file systems, etc. The UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.
- Oracle Nameserver is still available but deprecated in favour of LDAP Naming (using the Oracle Internet Directory Server). A nameserver proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.
- Oracle Parallel Server's (OPS) scalability was improved – now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster; applications don't need to be cluster aware any more.
- The Oracle Standby DB feature renamed to Oracle Data Guard. New Logical Standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.
- Scrolling cursor support: Oracle9i allows fetching backwards in a result set.
- Dynamic Memory Management – buffer pools and shared pool can be resized on-the-fly. This eliminates the need to restart the database each time parameter changes are made.
- On-line table and index reorganization
- VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net). VI provides fast communications between components in a cluster.
- Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.
- The Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.
- PL/SQL programs can be natively compiled to binaries.
- Deep data protection – fine-grained security and auditing. Put security at the DB level; SQL access does not mean unrestricted access.
- Resumable backups and statements – suspend a statement instead of rolling back immediately.
- List partitioning – partitioning on a list of values
- ETL (eXtract, transformation, load) operations – with external tables and pipelining
- OLAP – Express functionality included in the DB
- Data Mining – Oracle Darwin's features included in the DB

Oracle 8i (8.1.7)

- Static HTTP server included (Apache)
- JVM Accelerator to improve performance of Java code
- Java Server Pages (JSP) engine
- MemStat – a new utility for analyzing Java memory footprints
- OIS – Oracle Integration Server introduced
- PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web
- Enterprise Manager enhancements – including new HTML-based reporting and Advanced Replication functionality included
- New Database Character Set Migration utility included

Oracle 8i (8.1.6)

- PL/SQL Server Pages (PSPs)
- DBA Studio introduced
- Statspack
- New SQL functions (rank, moving average)
- ALTER FREELISTS command (previously done by DROP/CREATE TABLE)
- Checksums always on for SYSTEM tablespace, allowing many possible corruptions to be fixed before writing to disk
- XML Parser for Java
- New PL/SQL encrypt/decrypt package introduced
- User and Schemas separated
- Numerous performance enhancements

Oracle 8i (8.1.5)

- Fast Start recovery – checkpoint rate auto-adjusted to meet roll-forward criteria
- Reorganize indexes/index-only tables while users access data – online index rebuilds
- Log Miner introduced – allows on-line or archived redo logs to be viewed via SQL
- OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication
- Advanced Queueing improvements (security, performance, OO4O support)
- User security improvements – more centralisation, single enterprise user, users/roles across multiple databases
- Virtual Private Database
- JAVA stored procedures (Oracle Java VM)
- Oracle iFS
- Resource Management using priorities – resource classes
- Hash and Composite partitioned table types
- SQL*Loader direct load API
- Copy optimizer statistics across databases to ensure the same access paths across different environments
- Standby Database – auto shipping and application of redo logs; read-only queries on the standby database allowed
- Enterprise Manager v2 delivered
- NLS – Euro symbol supported
- Analyze tables in parallel
- Temporary tables supported
- Net8 support for SSL, HTTP, HOP protocols
- Transportable tablespaces between databases
- Locally managed tablespaces – automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability
- Drop Column on table (finally!)
- DBMS_DEBUG PL/SQL package
- DBMS_SQL replaced by the new EXECUTE IMMEDIATE statement
- Progress Monitor to track long running DML, DDL
- Functional indexes – NLS, case insensitive, descending

Oracle 8.0 – June 1997

- Object Relational database: Object Types (not just date, character, number as in v7), SQL3 standard
- Call external procedures
- LOBs: more than one per table
- Partitioned tables and indexes; export/import individual partitions; partitions in multiple tablespaces; online/offline, backup/recover individual partitions; merge/balance partitions
- Advanced Queuing for message handling
- Many performance improvements to SQL/PLSQL/OCI making more efficient use of CPU/memory; v7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2)
- Parallel DML statements
- Connection pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users
- Improved "STAR" query optimizer
- Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM in v7)
- Performance improvements in OPS – global V$ views introduced across all instances, transparent failover to a new node
- Data Cartridges introduced on the database (e.g. image, video, context, time, spatial)
- Backup/Recovery improvements – tablespace point-in-time recovery, incremental backups, parallel backup/recovery; Recovery Manager introduced
- Security Server introduced for central user administration; user password expiry, password profiles allow custom password schemes; privileged database links (no need for a password to be stored)
- Fast Refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel, Replication Manager introduced
- Index Organized tables
- Deferred integrity constraint checking (deferred until end of transaction instead of end of statement)
- SQL*Net replaced by Net8
- Reverse key indexes
- Any VIEW updateable
- New ROWID format

Oracle 7.3

- Partitioned views
- Bitmapped indexes
- Asynchronous read ahead for table scans
- Standby database
- Deferred transaction recovery on instance startup
- Updatable join views (with restrictions)
- SQLDBA no longer shipped
- Index rebuilds
- db_verify introduced
- Context Option
- Spatial Data Option
- Tablespace changes – coalesce, temporary, permanent
- Trigger compilation, debug
- Unlimited extents on STORAGE clause
- Some init.ora parameters modifiable – TIMED_STATISTICS
- HASH joins, antijoins
- Histograms
- Dependencies
- Oracle Trace
- Advanced Replication Object Groups
- PL/SQL – UTL_FILE

Oracle 7.2

- Resizable, autoextend data files
- Shrink rollback segments manually
- Create table, index UNRECOVERABLE
- Subquery in FROM clause
- PL/SQL wrapper
- PL/SQL cursor variables
- Checksums – DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM
- Parallel create table
- Job Queues – DBMS_JOB
- DBMS_SPACE
- DBMS Application Info
- Sorting improvements – SORT_DIRECT_WRITES

Oracle 7.1

- ANSI/ISO SQL92 Entry Level
- Advanced Replication – symmetric data replication
- Snapshot refresh groups
- Parallel recovery
- Dynamic SQL – DBMS_SQL
- Parallel Query options – query, index creation, data loading
- Server Manager introduced
- Read-only tablespaces

Oracle 7.0 – June 1992

- Database integrity constraints (primary/foreign keys, check constraints, default values)
- Stored procedures and functions, procedure packages
- Database triggers
- View compilation
- User-defined SQL functions
- Role-based security
- Multiple redo members – mirrored online redo log files
- Resource limits – profiles
- Much enhanced auditing
- Enhanced distributed database functionality – INSERTs, UPDATEs, DELETEs; 2PC
- Incomplete database recovery (e.g. SCN)
- Cost based optimiser
- TRUNCATE tables
- Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR)
- SQL*Net v2, MTS
- Checkpoint process
- Data replication – snapshots

Oracle 6.2

- Oracle Parallel Server

Oracle 6 – July 1988

- Row-level locking
- On-line database backups
- PL/SQL in the database

Oracle 5.1

- Distributed queries

Oracle 5.0 – 1986

- Support for the client-server model – PCs can access the DB on a remote host

Oracle 4 – 1984

- Read consistency

Oracle 3 – 1981

- Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)
- Nonblocking queries (no more read locks)
- Re-written in the C programming language

Oracle 2 – 1979

- First public release
- Basic SQL functionality, queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh, by Deepak – 1 Comment, December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh – before exporting:

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file.

1. SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;
2. Verify the total number of objects from the above query.
3. Write dynamic queries as below.
4. select 'grant '||privilege||' to SH;' from session_privs;
5. select 'grant '||role||' to SH;' from session_roles;
6. Query the default tablespace and its size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH' group by tablespace_name;
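A small SQL*Plus wrapper for spooling those grant statements into a script, run while connected as the SH user (the spool file name is illustrative), might look like:

SQL> set heading off feedback off pages 0
SQL> spool sh_grants.sql
SQL> select 'grant '||privilege||' to SH;' from session_privs;
SQL> select 'grant '||role||' to SH;' from session_roles;
SQL> spool off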

Export the SH schema:

exp username/password file=location\sh_bkp.dmp log=location\sh_exp.log owner=SH direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the SH schema:

imp username/password file=location\sh_bkp.dmp log=location\sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects and truncating objects

Export the SH schema

Take the full schema export as shown above.

Drop all the objects in the SH schema

To drop all the objects in the schema, connect as the schema owner and spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts; all the objects will be dropped.

Importing the SH schema:

imp username/password file=location\sh_bkp.dmp log=location\sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';'
FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the SH schema

To truncate all the objects in the schema, connect as the schema owner and spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts; all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the reference (foreign key) constraints and disable them. Spool the output of the query and run the resulting script.

select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');

Importing the SH schema:

imp username/password file=location\sh_bkp.dmp log=location\sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the SH user

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option, as sketched below.
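A minimal sketch, where SH_TEST is only an illustrative target schema name:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=SH:SH_TEST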

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING, by Deepak – Leave a comment, December 15, 2009

CRON JOB SCHEDULING IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l     (list skx's crontab file)

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0 - 7)
|     |     |     +----------- Month (1 - 12)
|     |     +----------------- Day of month (1 - 31)
|     +----------------------- Hour (0 - 23)
+----------------------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected, as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup – scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp\coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on "Add a scheduled task".
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule them, the job won't run. So edit the scheduled task and enter the new password.
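Alternatively, the same task can be registered from the command line with the schtasks utility; a minimal sketch (task name, start time and script path are illustrative, and the exact /st time format varies slightly between Windows versions):

schtasks /create /tn "DailyColdBackup" /tr "D:\scripts\cold_bkp.bat" /sc daily /st 22:00:00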


Steps to switch over standby to primary

Filed under: Switchover primary to standby in 10g, by Deepak – 1 Comment, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM
Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your testing systems before working on production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary database and the standby database to find out:

SQL> select sequence#, applied from v$archived_log;

Perform SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use Real-time apply.

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:

SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:

SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:

- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:

SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:

SQL> ALTER SYSTEM SWITCH LOGFILE;
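After the switchover completes, it is worth confirming the role of each database on both sides, for example:

SQL> SELECT name, database_role, open_mode FROM v$database;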


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump, by Deepak – Leave a comment, December 14, 2009

Encryption with Oracle Data Pump – from an Oracle white paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns, for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format. Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature then remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows:

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows:

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU-intensive operations. Furthermore, additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 – Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 – Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"                    6.25 KB       3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
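If the wallet on the target database is closed, it can be opened before running the import; a minimal sketch (the wallet password is illustrative, and on some 10gR2 patch levels the keyword is AUTHENTICATED BY rather than IDENTIFIED BY) is:

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";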

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 – Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote
  TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 – Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing
options
ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
  ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 -
Production
With the Partitioning, Data Mining and Real Application Testing options
Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir
dumpfile=emp.dmp tables=emp encryption_password=********
table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing
table but all dependent metadata will be skipped due to
table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being
skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP".SALARY encryption properties differ for source or
target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.
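A minimal sketch of the same network mode import with the offending parameter removed (assuming the database link "remote" and the dp user from the examples above):

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote
  TABLE_EXISTS_ACTION=APPEND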

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import

• A schema-mode import in which the referenced schemas do not include tables with encrypted columns

• A table-mode import in which the referenced tables do not include encrypted columns
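As an illustration of the first case above, a metadata-only import can be run without opening a wallet or supplying an encryption password; a sketch (the dump file name and directory object are placeholders):

$ impdp system/password DIRECTORY=dpump_dir DUMPFILE=expfull.dmp
  FULL=y CONTENT=METADATA_ONLY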

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE … ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; they are determined by the column datatypes in the source table in the SELECT subquery:

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE … ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
  TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
  ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
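Conversely, if the goal in 11g is to encrypt everything written to the dump file set rather than just the TDE columns, the ENCRYPTION=ALL keyword can be used instead; a sketch based on the same example:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
  TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
  ENCRYPTION=ALL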


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak — Leave a comment, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA

The following SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created you need to grant READ and WRITE permission on the directory to other users For example to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1 you must execute the following command

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;
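To confirm that the directory object exists and points where you expect, a quick check (a sketch; it requires a user with access to the DBA views):

SQL> SELECT directory_name, directory_path FROM dba_directories;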

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to

Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
  TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES, and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
  DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file, as in the sketch below.
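For example, a minimal sketch (the hr schema and dump file name are hypothetical) that excludes statistics when exporting and later pulls only procedures back out of the same dump file:

> expdp username/password SCHEMAS=hr EXCLUDE=STATISTICS
  DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp
  INCLUDE=PROCEDURE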

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1
  DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
  DUMPFILE=par_exp%u.dmp PARALLEL=4
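If more resources free up later, the degree of parallelism can be changed on the fly from interactive command-line mode; a sketch, attaching to the job named "hr" from the example above:

> expdp username/password ATTACH=hr
Export> PARALLEL=8
Export> STATUS
Export> CONTINUE_CLIENT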

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
  DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (see the sketch after this list):

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
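A minimal sketch of detaching from and later resuming a job (the job name "hr" is hypothetical; STOP_JOB, START_JOB, and EXIT_CLIENT are the standard interactive commands):

Export> STOP_JOB=IMMEDIATE

Later, from any client machine, re-attach and resume:

> expdp username/password ATTACH=hr
Export> START_JOB
Export> EXIT_CLIENT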

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, such as from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp
  SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak — Leave a comment, December 10, 2009

Clone database using Rman

Target db: test
Clone db: clone

In the target database:

1. Take a full backup using RMAN.

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 – Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for a database.

2. Edit the pfile and add the following parameters (see the concrete example below):

db_file_name_convert='<target db oradata path>','<clone db oradata path>'
log_file_name_convert='<target db oradata path>','<clone db oradata path>'

3. Start the listener using the lsnrctl command, then start the clone DB in NOMOUNT using the pfile:
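As a concrete illustration of step 2 (the Windows paths are hypothetical; adjust them to your own layout), the two convert parameters in the clone pfile might look like this:

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'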

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status
SQL> ho lsnrctl stop
SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test   (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'   (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running…

SQL> select name from v$database;
select name from v$database
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs:

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace (see the sketch after the verification queries below).

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550
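A minimal sketch of step 10 (the tablespace name, tempfile path, and sizes are hypothetical; adjust them to your environment):

SQL> CREATE TEMPORARY TABLESPACE temp01
     TEMPFILE 'C:\oracle\oradata\clone\temp01.dbf' SIZE 100M AUTOEXTEND ON;

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp01;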


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak — Leave a comment, December 9, 2009

Oracle 10g – Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips for the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use a Flash Recovery Area and OMF.

I Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server, and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases, and that the Oracle home paths are identical.

3. Test the STANDBY database creation in a test environment first before working on the production database.

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$cd %ORACLE_HOME%\database
$orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$cd $ORACLE_HOME/dbs
$orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '')
- On UNIX:
SQL> create pfile='/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '')

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='\database\pfilePRIMARY.ora';
– Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup;
(Note: specify your Oracle home path to replace '')
- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='/dbs/pfilePRIMARY.ora';
– Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup;
(Note: specify your Oracle home path to replace '')

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all the required directories for the dump and archived log destinations: create the adump, bdump, cdump, udump and archived log destination directories for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$lsnrctl stop
$lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$lsnrctl stop
$lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up (nomount) the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='\database\pfileSTANDBY.ora';
– Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;
- On UNIX:
SQL> startup nomount pfile='/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='/dbs/pfileSTANDBY.ora';
– Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your Oracle home path to replace '')

12. Start Redo Apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify that the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify that the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


SYSAUX Tablespace: [Create tablespace in Oracle Database 10.1 environment]
-----------------------------------------------------------------------
--> New "SYSAUX" tablespace
.... minimum required size for database upgrade: 500 MB
Please create the new SYSAUX Tablespace AFTER the Oracle Database
10.1 server is started and BEFORE you invoke the upgrade script.

Oracle Database 10g Changes in Default Behavior

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdash

This page describes some of the changes in the behavior of Oracle

Database 10g from that of previous releases In some cases the

default values of some parameters have changed In other cases

new behaviors/requirements have been introduced that may affect

current scripts or applications More detailed information is in

the documentation

SQL OPTIMIZER

The Cost Based Optimizer (CBO) is now enabled by default

Rule-based optimization is not supported in 10g (setting

OPTIMIZER_MODE to RULE or CHOOSE is not supported) See Chapter

12, "Introduction to the Optimizer", in the Oracle Database

Performance Tuning Guide

Collection of optimizer statistics is now performed by default

automatically for all schemas (including SYS) for pre-existing

databases upgraded to 10g and for newly created 10g databases

Gathering optimizer statistics on stale objects is scheduled by

default to occur daily during the maintenance window See

Chapter 15, "Managing Optimizer Statistics", in the Oracle Performance

Tuning Guide

See the Oracle Database Upgrade Guide for changes in behavior

for the COMPUTE STATISTICS clause of CREATE INDEX and for

behavior changes in SKIP_UNUSABLE_INDEXES

UPGRADEDOWNGRADE

After upgrading to 10g the minimum supported release to

downgrade to is Oracle 9i R2 release 9203 (or later) and the

minimum value for COMPATIBLE is 920 The only supported

downgrade path is for those users who have kept COMPATIBLE=920

and have an installed 9i R2 (release 9203 or later)

executable Users upgrading to 10g from prior releases (such as

Oracle 8 Oracle 8i or 9iR1) cannot downgrade to 9i R2 unless

they first install 9i R2 When upgrading to10g by default the

database will remain at 9i R2 file format compatibility so the

on disk structures that 10g writes are compatible with 9i R2

structures this makes it possible to downgrade to 9i R2 Once

file format compatibility has been explicitly advanced to 10g

(using COMPATIBLE=10xx) it is no longer possible to downgrade

See the Oracle Database Upgrade Guide

A SYSAUX tablespace is created upon upgrade to 10g The SYSAUX

tablespace serves as an auxiliary tablespace to the SYSTEM

tablespace Because it is the default tablespace for many Oracle

features and products that previously required their own

tablespaces it reduces the number of tablespaces required by

Oracle that you as a DBA must maintain

MANAGEABILITY

Database performance statistics are now collected by the

Automatic Workload Repository (AWR) database component

automatically upon upgrade to 10g and also for newly created 10g

databases This data is stored in the SYSAUX tablespace and is

used by the database for automatic generation of performance

recommendations. See Chapter 5, "Automatic Performance
Statistics", in the Oracle Database Performance Tuning Guide.

If you currently use Statspack for performance data gathering

see section 1 of the Statspack readme (spdoc.txt in the RDBMS/ADMIN
directory) for directions on using Statspack in 10g to

avoid conflict with the AWR

MEMORY

Automatic PGA Memory Management is now enabled by default

(unless PGA_AGGREGATE_TARGET is explicitly set to 0 or

WORKAREA_SIZE_POLICY is explicitly set to MANUAL)

PGA_AGGREGATE_TARGET is defaulted to 20% of the SGA size unless

explicitly set Oracle recommends tuning the value of

PGA_AGGREGATE_TARGET after upgrading See Chapter 14 of the

Oracle Database Performance Tuning Guide

Previously the number of SQL cursors cached by PLSQL was

determined by OPEN_CURSORS In 10g the number of cursors cached

is determined by SESSION_CACHED_CURSORS See the Oracle Database

Reference manual

SHARED_POOL_SIZE must increase to include the space needed for

shared pool overhead

The default value of DB_BLOCK_SIZE is operating system

specific but is typically 8KB (was typically 2KB in previous

releases)

TRANSACTIONSPACE

Dropped objects are now moved to the recycle bin where the

space is only reused when it is needed. This allows 'undropping'
a table using the FLASHBACK DROP feature. See Chapter 14 of the
Oracle Database Administrator's Guide.

Auto-tuning undo retention is on by default. For more
information, see Chapter 10, "Managing the Undo Tablespace", in
the Oracle Database Administrator's Guide.

CREATE DATABASE

In addition to the SYSTEM tablespace a SYSAUX tablespace is

always created at database creation and upon upgrade to 10g The

SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM

tablespace Because it is the default tablespace for many Oracle

features and products that previously required their own

tablespaces it reduces the number of tablespaces required by

Oracle that you, as a DBA, must maintain. See Chapter 2,
"Creating a Database", in the Oracle Database Administrator's
Guide.

In 10g by default all new databases are created with 10g file

format compatibility This means you can immediately use all the

10g features Once a database uses 10g compatible file formats

it is not possible to downgrade this database to prior releases

Minimum and default logfile sizes are larger Minimum is now 4

MB default is 50MB unless you are using Oracle Managed Files

(OMF) when it is 100 MB

PLSQL procedure successfully completed

SQL> archive log list

Database log mode              Archive Mode

Automatic archival             Enabled

Archive destination            C:\oracle\oradata\test\archive

Oldest online log sequence 91

Next log sequence to archive 93

Current log sequence 93

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt exit

Backup complete database (Cold backup)

Step 2

Check the space needed, stop the listener, and delete the SID.

C:\Documents and Settings\Administrator>set oracle_sid=test

C:\Documents and Settings\Administrator>sqlplus /nolog

SQL*Plus: Release 9.2.0.1.0 – Production on Sat Aug 22 21:36:52 2009

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

SQL> conn / as sysdba

Connected to an idle instance

SQLgt startup

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

Database mounted

Database opened

SQLgt desc sm$ts_avail

Name Null Type

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashndash mdashmdashmdashmdashmdashmdashmdashmdashmdash-

TABLESPACE_NAME VARCHAR2(30)

BYTES NUMBER

SQL> select * from sm$ts_avail;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 20971520

DRSYS 20971520

EXAMPLE 155975680

INDX 26214400

ODM 20971520

SYSTEM 419430400

TOOLS 10485760

UNDOTBS1 209715200

USERS 26214400

XDB 39976960

10 rows selected

SQL> select * from sm$ts_used;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 9764864

DRSYS 10092544

EXAMPLE 155779072

ODM 9699328

SYSTEM 414908416

TOOLS 6291456

UNDOTBS1 9814016

XDB 39714816

8 rows selected

SQL> select * from sm$ts_free;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 11141120

DRSYS 10813440

EXAMPLE 131072

INDX 26148864

ODM 11206656

SYSTEM 4456448

TOOLS 4128768

UNDOTBS1 199753728

USERS 26148864

XDB 196608

10 rows selected

SQLgt ho LSNRCTL

LSNRCTLgt start

Starting tnslsnr please waithellip

Failed to open service ltOracleoracleTNSListenergt error 1060

TNSLSNR for 32-bit Windows Version 92010 ndash Production

System parameter file is Coracleora92networkadminlistenerora

Log messages written to Coracleora92networkloglistenerlog

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220000

Uptime 0 days 0 hr 0 min 16 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File Coracleora92networkadminlistenerora

Listener Log File Coracleora92networkloglistenerlog

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summaryhellip

Service ldquoTESTrdquo has 1 instance(s)

Instance ldquoTESTrdquo status UNKNOWN has 1 handler(s) for this servicehellip

The command completed successfully

LSNRCTLgt stop

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

LSNRCTLgt start

Starting tnslsnr please waithellip

TNSLSNR for 32-bit Windows Version 92010 ndash Production

System parameter file is Coracleora92networkadminlistenerora

Log messages written to Coracleora92networkloglistenerlog

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220048

Uptime 0 days 0 hr 0 min 0 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File Coracleora92networkadminlistenerora

Listener Log File Coracleora92networkloglistenerlog

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summaryhellip

Service ldquoTESTrdquo has 1 instance(s)

Instance ldquoTESTrdquo status UNKNOWN has 1 handler(s) for this servicehellip

The command completed successfully

LSNRCTLgt exit

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt exit

Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 – Production

With the Partitioning OLAP and Oracle Data Mining options

JServer Release 92010 ndash Production

CDocuments and SettingsAdministratorgtlsnrctl stop

LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 – Production on 22-AUG-2009 22:03:14

copyright (c) 1991 2002 Oracle Corporation All rights reserved

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

CDocuments and SettingsAdministratorgtoradim -delete -sid test

Step 3

Install ORACLE 10g Software in different Home

Start the DB with the 10g instance and begin the upgrade process.

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649';

File created

SQLgt shut immediate

ORA-01507 database not mounted

ORACLE instance shut down

SQLgt startup upgrade

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

ORA-01990 error opening password file (create password file)

SQL> conn / as sysdba

Connected.

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace creation script shown below.)

create tablespace SYSAUX datafile 'sysaux01.dbf'
  size 70M reuse
  extent management local
  segment space management auto
  online;

Tablespace created.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database server version is not correct for this script

DOCgt Shutdown ABORT and use a different script or a different server

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database has not been opened for UPGRADE

DOCgt

DOCgt Perform a ldquoSHUTDOWN ABORTrdquo and

DOCgt restart using UPGRADE

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statements will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the SYSAUX tablespace does not exist or is not

DOCgt ONLINE for READ WRITE PERMANENT EXTENT MANAGEMENT LOCAL and

DOCgt SEGMENT SPACE MANAGEMENT AUTO

DOCgt

DOCgt The SYSAUX tablespace is used in 101 to consolidate data from

DOCgt a number of tablespaces that were separate in prior releases

DOCgt Consult the Oracle Database Upgrade Guide for sizing estimates

DOCgt

DOCgt Create the SYSAUX tablespace for example

DOCgt

DOCgt create tablespace SYSAUX datafile lsquosysaux01dbfrsquo

DOCgt size 70M reuse

DOCgt extent management local

DOCgt segment space management auto

DOCgt online

DOCgt

DOCgt Then rerun the u0902000sql script

DOCgt

DOCgt

DOCgt

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered

Session altered

The script will run according to the size of the databasehellip

All packagesscriptssynonyms will be upgraded

At last it will show the message as follows

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

1 row selected

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP DBUPG_END 2009-08-22 225909

1 row selected

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT()

mdashmdashmdash-

776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PLSQL procedure successfully completed

Oracle Database 10.1 Upgrade Status Tool 22-AUG-2009 11:18:36

--> Oracle Database Catalog Views    Normal successful completion
--> Oracle Database Packages and Types    Normal successful completion
--> JServer JAVA Virtual Machine    Normal successful completion
--> Oracle XDK    Normal successful completion
--> Oracle Database Java Packages    Normal successful completion
--> Oracle XML Database    Normal successful completion
--> Oracle Workspace Manager    Normal successful completion
--> Oracle Data Mining    Normal successful completion
--> OLAP Analytic Workspace    Normal successful completion
--> OLAP Catalog    Normal successful completion
--> Oracle OLAP API    Normal successful completion
--> Oracle interMedia    Normal successful completion
--> Spatial    Normal successful completion
--> Oracle Text    Normal successful completion
--> Oracle Ultra Search    Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_BGN 2009-08-22 231907

1 row selected

PLSQL procedure successfully completed

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_END 2009-08-22 232013

1 row selected

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects where status='INVALID';

COUNT()

mdashmdashmdash-

0

1 row selected

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 – Prod
PL/SQL Release 10.1.0.2.0 – Production
CORE 10.1.0.2.0 Production
TNS for 32-bit Windows: Version 10.1.0.2.0 – Production
NLSRTL Version 10.1.0.2.0 – Production

5 rows selected

Check the database to make sure that everything is working fine.


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host – by Deepak – 3 Comments – February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database – from Metalink note ID 732624.1

Hi,

Just wanted to share this topic.

How do you duplicate a database, without connecting to the target database, using backups taken with RMAN on an alternate host?

Solution – follow the steps below.

1) Export ORACLE_SID=<SID name as on production>.

Create an init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored> (a minimal init.ora sketch follows these steps).

2) startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backuppiece of controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure that the backup pieces are in the same location as they were on the production db. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged using:

RMAN> catalog start with '<path where backup pieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:

run {
  set newname for datafile 1 to '<newLocation>/system.dbf';
  set newname for datafile 2 to '<newLocation>/undotbs.dbf';
  ...
  restore database;
  switch datafile all;
}
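As an illustration, a minimal init.ora for the restore instance might look like the sketch below. The db_name and control_files values come from step 1; every path and value shown is a hypothetical placeholder, not part of the original note.

# init<SID>.ora for the restore instance (all values are placeholders)
db_name=PROD
control_files='/u02/oradata/PROD/control01.ctl'
# optionally add memory parameters, and set compatible to match the source release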


Features introduced in the various Oracle server releases

Filed under: Features Of Various Releases of Oracle Database – by Deepak – Leave a comment – February 2, 2010

Features introduced in the various server releases – submitted by admin on Sun, 2005-10-30 14:02

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) – September 2005

Transparent Data Encryption
Async commits
CONNECT role can now only connect
Passwords for DB links are encrypted
New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (10.1.0)

Grid computing – an extension of the clustering feature (Real Application Clusters)
Manageability improvements (self-tuning features)
Performance and scalability improvements
Automated Storage Management (ASM)
Automatic Workload Repository (AWR)
Automatic Database Diagnostic Monitor (ADDM)
Flashback operations available on row, transaction, table or database level
Ability to UNDROP a table from a recycle bin
Ability to rename tablespaces
Ability to transport tablespaces across machine types (e.g. Windows to Unix)
New 'drop database' statement
New database scheduler – DBMS_SCHEDULER
DBMS_FILE_TRANSFER package
Support for bigfile tablespaces, that is, up to 8 Exabytes in size
Data Pump – faster data movement with expdp and impdp

Oracle 9i Release 2 (9.2.0)

Locally Managed SYSTEM tablespaces
Oracle Streams – new data sharing/replication feature (can potentially replace Oracle Advanced Replication and Standby Databases)
XML DB (Oracle is now a standards compliant XML database)
Data segment compression (compress keys in tables – only when loading data)
Cluster file system for Windows and Linux (raw devices are no longer required)
Create logical standby databases with Data Guard
Java JDK 1.3 used inside the database (JVM)
Oracle Data Guard enhancements (SQL Apply mode – logical copy of primary database, automatic failover)
Security improvements – default install accounts locked, VPD on synonyms, AES, Migrate Users to Directory

Oracle 9i Release 1 (9.0.1) – June 2001

Traditional rollback segments (RBS) are still available, but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "rollback segments" and size them automatically without any DBA involvement.

Flashback query (dbms_flashback.enable) – one can query data as it looked at some point in the past. This feature allows users to correct wrongly committed transactions without contacting the DBA to do a database restore.

Use Oracle Ultra Search for searching databases, file systems, etc. The UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.

Oracle Nameserver is still available, but deprecated in favour of LDAP Naming (using the Oracle Internet Directory Server). A nameserver proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.

Oracle Parallel Server's (OPS) scalability was improved – now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster; applications don't need to be cluster aware anymore.

The Oracle Standby DB feature was renamed to Oracle Data Guard. New Logical Standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.

Scrollable cursor support – Oracle9i allows fetching backwards in a result set.

Dynamic Memory Management – buffer pools and the shared pool can be resized on-the-fly. This eliminates the need to restart the database each time parameter changes are made.

On-line table and index reorganization.

VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net). VI provides fast communications between components in a cluster.

Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.

Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.

PL/SQL programs can be natively compiled to binaries.

Deep data protection – fine grained security and auditing. Put security on DB level; SQL access does not mean unrestricted access.

Resumable backups and statements – suspend statement instead of rolling back immediately.

List Partitioning – partitioning on a list of values.

ETL (eXtract, transformation, load) operations – with external tables and pipelining.

OLAP – Express functionality included in the DB.

Data Mining – Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)

Static HTTP server included (Apache)
JVM Accelerator to improve performance of Java code
Java Server Pages (JSP) engine
MemStat – a new utility for analyzing Java memory footprints
OIS – Oracle Integration Server introduced
PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web
Enterprise Manager enhancements – including new HTML based reporting and Advanced Replication functionality included
New Database Character Set Migration utility included

Oracle 8i (8.1.6)

PL/SQL Server Pages (PSPs)
DBA Studio introduced
Statspack
New SQL functions (rank, moving average)
ALTER FREELISTS command (previously done by DROP/CREATE TABLE)
Checksums always on for SYSTEM tablespace, allowing many possible corruptions to be fixed before writing to disk
XML Parser for Java
New PL/SQL encrypt/decrypt package introduced
User and Schemas separated
Numerous performance enhancements

Oracle 8i (8.1.5)

Fast Start recovery – checkpoint rate auto-adjusted to meet roll forward criteria
Reorganize indexes/index-only tables while users are accessing data – online index rebuilds
Log Miner introduced – allows online or archived redo logs to be viewed via SQL
OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication
Advanced Queueing improvements (security, performance, OO4O support)
User security improvements – more centralisation, single enterprise user, users/roles across multiple databases
Virtual private database
JAVA stored procedures (Oracle Java VM)
Oracle iFS
Resource Management using priorities – resource classes
Hash and composite partitioned table types
SQL*Loader direct load API
Copy optimizer statistics across databases to ensure the same access paths across different environments
Standby Database – auto shipping and application of redo logs; read-only queries on the standby database allowed
Enterprise Manager v2 delivered
NLS – Euro symbol supported
Analyze tables in parallel
Temporary tables supported
Net8 support for SSL, HTTP, HOP protocols
Transportable tablespaces between databases
Locally managed tablespaces – automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability
Drop column on table (finally!)
DBMS_DEBUG PL/SQL package
DBMS_SQL replaced by new EXECUTE IMMEDIATE statement
Progress Monitor to track long running DML, DDL
Functional indexes – NLS, case insensitive, descending

Oracle 8.0 – June 1997

Object Relational database
Object Types (not just date, character, number as in v7; SQL3 standard)
Call external procedures
LOBs – more than one per table
Partitioned tables and indexes; export/import individual partitions; partitions in multiple tablespaces; online/offline, backup/recover individual partitions; merge/balance partitions
Advanced Queuing for message handling
Many performance improvements to SQL/PLSQL/OCI making more efficient use of CPU/memory; v7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2)
Parallel DML statements
Connection Pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users
Improved "STAR" query optimizer
Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM in v7)
Performance improvements in OPS – global V$ views introduced across all instances, transparent failover to a new node
Data Cartridges introduced in the database (e.g. image, video, context, time, spatial)
Backup/Recovery improvements – tablespace point in time recovery, incremental backups, parallel backup/recovery; Recovery Manager introduced
Security Server introduced for central user administration; user password expiry, password profiles allow a custom password scheme; privileged database links (no need for the password to be stored)
Fast Refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel, Replication Manager introduced
Index Organized tables
Deferred integrity constraint checking (deferred until end of transaction instead of end of statement)
SQL*Net replaced by Net8
Reverse Key indexes
Any VIEW updateable
New ROWID format

Oracle 7.3

Partitioned views
Bitmapped indexes
Asynchronous read ahead for table scans
Standby Database
Deferred transaction recovery on instance startup
Updatable Join Views (with restrictions)
SQL*DBA no longer shipped
Index rebuilds
db_verify introduced
Context Option
Spatial Data Option
Tablespace changes – coalesce, temporary, permanent
Trigger compilation, debug
Unlimited extents on STORAGE clause
Some init.ora parameters modifiable – TIMED_STATISTICS
HASH joins, antijoins
Histograms
Dependencies
Oracle Trace
Advanced Replication Object Groups
PL/SQL – UTL_FILE

Oracle 7.2

Resizable, autoextend data files
Shrink Rollback Segments manually
Create table, index UNRECOVERABLE
Subquery in FROM clause
PL/SQL wrapper
PL/SQL cursor variables
Checksums – DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM
Parallel create table
Job Queues – DBMS_JOB
DBMS_SPACE
DBMS Application Info
Sorting improvements – SORT_DIRECT_WRITES

Oracle 7.1

ANSI/ISO SQL92 Entry Level
Advanced Replication – symmetric data replication
Snapshot Refresh Groups
Parallel Recovery
Dynamic SQL – DBMS_SQL
Parallel Query Options – query, index creation, data loading
Server Manager introduced
Read Only tablespaces

Oracle 7.0 – June 1992

Database Integrity Constraints (primary and foreign keys, check constraints, default values)
Stored procedures and functions, procedure packages
Database Triggers
View compilation
User defined SQL functions
Role based security
Multiple Redo members – mirrored online redo log files
Resource Limits – Profiles
Much enhanced auditing
Enhanced distributed database functionality – INSERTs, UPDATEs, DELETEs, 2PC
Incomplete database recovery (e.g. to an SCN)
Cost based optimiser
TRUNCATE tables
Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR)
SQL*Net v2, MTS
Checkpoint process
Data replication – Snapshots

Oracle 6.2

Oracle Parallel Server

Oracle 6 – July 1988

Row-level locking
On-line database backups
PL/SQL in the database

Oracle 5.1

Distributed queries

Oracle 5.0 – 1986

Support for the Client-Server model – PCs can access the DB on a remote host

Oracle 4 – 1984

Read consistency

Oracle 3 – 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)
Nonblocking queries (no more read locks)
Re-written in the C programming language

Oracle 2 – 1979

First public release
Basic SQL functionality, queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh – by Deepak – 1 Comment – December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh – before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file (a minimal spool sketch follows the list below).

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total no. of objects from the above query.

3. Write dynamic queries as below:

4. select 'grant ' || privilege ||' to sh;' from session_privs;

5. select 'grant ' || role ||' to sh;' from session_roles;

6. Query the default tablespace and its size:

7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH' group by tablespace_name;
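For example, a minimal SQL*Plus spool sketch along the lines of steps 3 to 5 above (the spool file name is hypothetical):

SQL> set pages 0 feedback off
SQL> spool sh_grants.sql
SQL> select 'grant ' || privilege || ' to sh;' from session_privs;
SQL> select 'grant ' || role || ' to sh;' from session_roles;
SQL> spool off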

Export the 'SH' schema:

exp username/password file=/location/sh_bkp.dmp log=/location/sh_exp.log owner=SH direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace (see the sketch below).

2. Now run the spooled roles and privileges scripts.

3. Connect as SH and verify the tablespace, roles and privileges.

4. Then start importing.
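A minimal sketch of step 1; the tablespace names and password are hypothetical placeholders and should match the values spooled before the export:

SQL> CREATE USER sh IDENTIFIED BY sh_password
     DEFAULT TABLESPACE example
     TEMPORARY TABLESPACE temp
     QUOTA UNLIMITED ON example;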

Importing the 'SH' schema

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema

Take the full schema export as shown above.

Drop all the objects in the 'SH' schema

To drop all the objects in the schema:

Connect as the schema owner.

Spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be dropped.

Importing the 'SH' schema

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema

To truncate all the objects in the schema:

Connect as the schema owner.

Spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the referencing (foreign key) constraints and disable them. Spool the output of the query and run the script (a sketch that generates the DISABLE statements follows the query).

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');
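For example, a minimal sketch that spools ALTER TABLE ... DISABLE CONSTRAINT statements for those foreign keys (the spool file name and 'TABLE_NAME' are placeholders):

SQL> spool disable_fks.sql
SQL> select 'alter table '||table_name||' disable constraint '||constraint_name||';'
     from all_constraints
     where constraint_type='R'
     and r_constraint_name in (select constraint_name from all_constraints where table_name='TABLE_NAME');
SQL> spool off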

Importing the 'SH' schema

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Datapump

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> drop user sh cascade;

Importing the SH schema through Data Pump

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option (see the example below).
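For instance, a hypothetical import of the SH objects into a schema named SH_TEST (the target schema name is an assumption):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=SH:SH_TEST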

Check for the imported objects and compile the invalid objects

Comment

JOB SCHEDULING

Filed under: JOB SCHEDULING – by Deepak – Leave a comment – December 15, 2009

CRON JOB SCHEDULING – IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d, /etc/cron.daily, /etc/cron.hourly, /etc/cron.monthly, /etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time at which the scripts in those system-wide directories run is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) at which to execute them.

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use the name)
6. The command to run

More graphically they would look like this

*  *  *  *  *  Command to be executed
-  -  -  -  -
|  |  |  |  |
|  |  |  |  +----- Day of week (0-7)
|  |  |  +------- Month (1-12)
|  |  +--------- Day of month (1-31)
|  +----------- Hour (0-23)
+------------- Min (0-59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null – meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour, on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly
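Tying this back to database work, a hypothetical crontab entry that runs a nightly RMAN backup shell script at 01:30 might look like this (the script and log paths are assumptions):

# Nightly RMAN backup at 01:30, output appended to a log file
30 1 * * * /home/oracle/scripts/rman_backup.sh >> /home/oracle/logs/rman_backup.log 2>&1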

JOB SCHEDULING IN WINDOWS

Cold backup – scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registrydatabase
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add Scheduled Task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run, so edit the scheduled task and enter the new password.
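As an alternative to the GUI wizard, the same schedule can be created from the command line with the built-in schtasks utility; a hypothetical example (task name, time, script path and account are all assumptions) would be:

schtasks /create /tn "DailyColdBackup" /tr "D:\scripts\cold_bkp.bat" /sc daily /st 02:00 /ru MYDOMAIN\orauser /rp password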


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g – by Deepak – 1 Comment – December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your test systems before working on production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary and standby databases to find out:

SQL> select sequence#, applied from v$archived_log;

Perform SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:

SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby db STAN to the primary role. Open another prompt and connect to SQL*Plus:

SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:

- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can simply open the new primary database STAN:

SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:

SQL> ALTER SYSTEM SWITCH LOGFILE;
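Before and after the switchover it is worth confirming each database's role; a quick sanity check (not part of the original note) is:

SQL> select name, database_role, switchover_status from v$database;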


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump – by Deepak – Leave a comment – December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature that was released with Oracle Database 11g release 1, and that provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns, for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature thus remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows:

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set.

To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows:

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU intensive operations. Furthermore, additional disk I/O is incurred due to the space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2, the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet.

Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although the ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.
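For completeness, Oracle Advanced Security network encryption is configured in sqlnet.ora; a minimal sketch follows (the values shown are illustrative choices, not the only valid ones):

# server-side sqlnet.ora
SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES128)

# client-side sqlnet.ora
SQLNET.ENCRYPTION_CLIENT = required
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES128)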

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

A full metadata-only import

A schema-mode import in which the referenced schemas do not include tables with encrypted columns

A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data.

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g – by Deepak – Leave a comment – December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission on a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to

Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file, as in the sketch below.
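For instance, a hypothetical export that keeps indexes and statistics out of the dump file for the HR schema (the schema and file names are assumptions) might be:

> expdp username/password SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_noindex.dmp EXCLUDE=INDEX EXCLUDE=STATISTICS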

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

See the status of the job. All of the information needed to monitor the job's execution is available.

Add more dump files if there is insufficient disk space for an export file.

Change the default size of the dump files.

Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

Attach to a job from a remote site (such as from home) to monitor status. A short interactive-mode session is sketched after this list.
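As an illustration, a hypothetical interactive-mode session might look like the following (the job name HR is an assumption, and the text after "--" on each line is explanatory, not part of the command):

> expdp username/password ATTACH=hr    -- re-attach to the running job named HR
Export> STATUS                         -- show the current job status
Export> PARALLEL=8                     -- change the degree of parallelism on the fly
Export> STOP_JOB=IMMEDIATE             -- stop the job so it can be restarted later
> expdp username/password ATTACH=hr
Export> START_JOB                      -- resume the stopped job
Export> EXIT_CLIENT                    -- leave the job running and return to the OS prompt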

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
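A hypothetical network-mode import over a database link (the link name, schema and log file name are assumptions) could look like:

> impdp username/password SCHEMAS=hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_db LOGFILE=hr_net_imp.log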

Generating SQLFILEs

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under Clone database using RMAN by Deepak mdash Leave a comment

December 10 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1. Take a full backup using RMAN.

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination coracleora92RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQLgt ho rman

Recovery Manager Release 92010 ndash Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
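As an illustration only (these paths are hypothetical and must match your own layout), for a target called TEST and a clone called CLONE on Windows the two parameters might look like:

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'

log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'

Also remember that db_name in the clone pfile must be set to the clone DB name and control_files must point to the clone's own locations.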

3. Start the listener using the lsnrctl command and then start the clone DB in NOMOUNT using the pfile.

SQLgt conn as sysdba

Connected to an idle instance

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4. Connect RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQLgt ho rman

RMAN> connect target sys/sys@test

connected to target database TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running…

SQLgt select name from v$database

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQLgt ho rman

SQLgt alter database mount

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8. It will run for a while. Then exit from RMAN and open the database using RESETLOGS.

SQLgt alter database open resetlogs

Database altered

9. Check the DBID.

10. Create a temporary tablespace for the clone database.
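A minimal sketch of step 10 (the file name and size here are just examples, not values from this article):

SQL> create temporary tablespace temp1 tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100M autoextend on;

SQL> alter database default temporary tablespace temp1;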

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550


step by step standby database configuration in 10g

Filed under Dataguard - creation of standby database in 10g by Deepak mdash Leave a comment December 9 2009

Oracle 10g ndash Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database: SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist. 1) To check if a password file already exists, run the following command: SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one. On Windows: $cd %ORACLE_HOME%\database then $orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y (Note: replace xxxxxxxx with the password for the SYS user.)

On UNIX: $cd $ORACLE_HOME/dbs then $orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y (Note: replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log. 1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files: SQL> select bytes from v$log;

BYTES
52428800
52428800
52428800

2) Use the following command to determine your current log file groups: SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query: SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database. On Windows: SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile; (Note: specify your Oracle home path in place of <ORACLE_HOME>.)

On UNIX: SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile; (Note: specify your Oracle home path in place of <ORACLE_HOME>.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
Specify the location of the STANDBY DB datafiles followed by the PRIMARY location:
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location:
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE. On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
– Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup

On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
– Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path in place of <ORACLE_HOME>.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server. On the PRIMARY DB: SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down): 1) Create a directory for the data files, for example on Windows, E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows, E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3 Copy the PRIMARY DB pfile to STANDBY server and renameedit the file

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
Specify the location of the PRIMARY DB datafiles followed by the STANDBY location:
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location:
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4 On STANDBY server create all required directories for dump and archived log destinationCreate directories adump bdump cdump udump and archived log destinations for the STANDBY database

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional): $oradim -NEW -SID STANDBY -STARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On PRIMARY system use Oracle Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop$lsnrctl start

2) On STANDBY server use Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop $lsnrctl start

9 Create Oracle Net service names1) On PRIMARY system use Oracle Net Manager to create network service names for PRIMARY and STANDBY Check tnsping to both services$tnsping PRIMARY$tnsping STANDBY

2) On STANDBY system use Oracle Net Manager to create network service names for PRIMARY and STANDBY Check tnsping to both services$tnsping PRIMARY$tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID

11. Start up nomount the STANDBY database and generate an spfile. On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
– Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup mount

On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
– Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup mount
(Note: specify your Oracle home path in place of <ORACLE_HOME>.)

12. Start Redo Apply. 1) On the STANDBY database, to start redo apply: SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services: SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly. 1) On STANDBY, perform a query: SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch: SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied: SQL> select sequence#, applied from v$archived_log order by sequence#;

14 If you want the redo data to be applied as it is received without waiting for the current STANDBY redo log file to be archived enable the real-time apply

To start real-time applySQLgt alter database recover managed STANDBY database using current logfile disconnect

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database I run RMAN to back up and delete the archive logs once per week: $rman target STANDBY, then RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month: RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.
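For instance (the file names shown are the Windows ones used earlier and the password is a placeholder), after changing the SYS password on PRIMARY you could recreate the standby password file like this:

$cd %ORACLE_HOME%\database
$orapwd file=pwdSTANDBY.ora password=new_sys_password force=y

Alternatively, simply copy the refreshed pwdPRIMARY.ora from the primary server to the standby server and rename it to pwdSTANDBY.ora.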


Optimizer statistics are now collected automatically for all schemas (including SYS) for pre-existing

databases upgraded to 10g and for newly created 10g databases.

Gathering optimizer statistics on stale objects is scheduled by

default to occur daily during the maintenance window See

Chapter 15 ldquoManaging Optimizer Statisticsrdquo in Oracle Performance

Tuning Guide

See the Oracle Database Upgrade Guide for changes in behavior

for the COMPUTE STATISTICS clause of CREATE INDEX and for

behavior changes in SKIP_UNUSABLE_INDEXES

UPGRADEDOWNGRADE

After upgrading to 10g the minimum supported release to

downgrade to is Oracle 9i R2 release 9203 (or later) and the

minimum value for COMPATIBLE is 920 The only supported

downgrade path is for those users who have kept COMPATIBLE=920

and have an installed 9i R2 (release 9203 or later)

executable Users upgrading to 10g from prior releases (such as

Oracle 8 Oracle 8i or 9iR1) cannot downgrade to 9i R2 unless

they first install 9i R2 When upgrading to10g by default the

database will remain at 9i R2 file format compatibility so the

on disk structures that 10g writes are compatible with 9i R2

structures this makes it possible to downgrade to 9i R2 Once

file format compatibility has been explicitly advanced to 10g

(using COMPATIBLE=10xx) it is no longer possible to downgrade

See the Oracle Database Upgrade Guide

A SYSAUX tablespace is created upon upgrade to 10g The SYSAUX

tablespace serves as an auxiliary tablespace to the SYSTEM

tablespace Because it is the default tablespace for many Oracle

features and products that previously required their own

tablespaces it reduces the number of tablespaces required by

Oracle that you as a DBA must maintain

MANAGEABILITY

Database performance statistics are now collected by the

Automatic Workload Repository (AWR) database component

automatically upon upgrade to 10g and also for newly created 10g

databases This data is stored in the SYSAUX tablespace and is

used by the database for automatic generation of performance

recommendations See Chapter 5 ldquoAutomatic Performance

Statisticsrdquo in the Oracle Database Performance Tuning Guide

If you currently use Statspack for performance data gathering

see section 1 of the Statspack readme (spdoctxt in the RDBMS

ADMIN directory) for directions on using Statspack in 10g to

avoid conflict with the AWR

MEMORY

Automatic PGA Memory Management is now enabled by default

(unless PGA_AGGREGATE_TARGET is explicitly set to 0 or

WORKAREA_SIZE_POLICY is explicitly set to MANUAL)

PGA_AGGREGATE_TARGET is defaulted to 20 of the SGA size unless

explicitly set Oracle recommends tuning the value of

PGA_AGGREGATE_TARGET after upgrading See Chapter 14 of the

Oracle Database Performance Tuning Guide
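A quick way to review and adjust the setting after the upgrade (the 200M value below is only an example, not a recommendation):

SQL> show parameter pga_aggregate_target
SQL> select * from v$pgastat;
SQL> alter system set pga_aggregate_target=200M scope=both;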

Previously the number of SQL cursors cached by PLSQL was

determined by OPEN_CURSORS In 10g the number of cursors cached

is determined by SESSION_CACHED_CURSORS See the Oracle Database

Reference manual

SHARED_POOL_SIZE must increase to include the space needed for

shared pool overhead

The default value of DB_BLOCK_SIZE is operating system

specific but is typically 8KB (was typically 2KB in previous

releases)

TRANSACTIONSPACE

Dropped objects are now moved to the recycle bin where the

space is only reused when it is needed This allows lsquoundroppingrsquo

a table using the FLASHBACK DROP feature See Chapter 14 of the

Oracle Database Administratorrsquos Guide

Auto tuning undo retention is on by default For more

information see Chapter 10 ldquoManaging the Undo Tablespacerdquo in

the Oracle Database Administratorrsquos Guide

CREATE DATABASE

In addition to the SYSTEM tablespace a SYSAUX tablespace is

always created at database creation and upon upgrade to 10g The

SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM

tablespace Because it is the default tablespace for many Oracle

features and products that previously required their own

tablespaces it reduces the number of tablespaces required by

Oracle that you as a DBA must maintain See Chapter 2

ldquoCreating a Databaserdquo in the Oracle Database Administratorrsquos

Guide

In 10g by default all new databases are created with 10g file

format compatibility This means you can immediately use all the

10g features Once a database uses 10g compatible file formats

it is not possible to downgrade this database to prior releases

Minimum and default logfile sizes are larger Minimum is now 4

MB default is 50MB unless you are using Oracle Managed Files

(OMF) when it is 100 MB

PLSQL procedure successfully completed

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination Coracleoradatatestarchive

Oldest online log sequence 91

Next log sequence to archive 93

Current log sequence 93

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt exit

Backup complete database (Cold backup)

Step 2

Check the space needed, stop the listener, and delete the SID.

CDocuments and SettingsAdministratorgtset oracle_sid=test

CDocuments and SettingsAdministratorgtsqlplus nolog

SQLPlus Release 92010 ndash Production on Sat Aug 22 213652 2009

Copyright (c) 1982 2002 Oracle Corporation All rights reserved

SQLgt conn as sysdba

Connected to an idle instance

SQLgt startup

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

Database mounted

Database opened

SQLgt desc sm$ts_avail

Name Null Type

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashndash mdashmdashmdashmdashmdashmdashmdashmdashmdash-

TABLESPACE_NAME VARCHAR2(30)

BYTES NUMBER

SQL> select * from sm$ts_avail;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 20971520

DRSYS 20971520

EXAMPLE 155975680

INDX 26214400

ODM 20971520

SYSTEM 419430400

TOOLS 10485760

UNDOTBS1 209715200

USERS 26214400

XDB 39976960

10 rows selected

SQL> select * from sm$ts_used;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 9764864

DRSYS 10092544

EXAMPLE 155779072

ODM 9699328

SYSTEM 414908416

TOOLS 6291456

UNDOTBS1 9814016

XDB 39714816

8 rows selected

SQL> select * from sm$ts_free;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 11141120

DRSYS 10813440

EXAMPLE 131072

INDX 26148864

ODM 11206656

SYSTEM 4456448

TOOLS 4128768

UNDOTBS1 199753728

USERS 26148864

XDB 196608

10 rows selected

SQLgt ho LSNRCTL

LSNRCTLgt start

Starting tnslsnr please waithellip

Failed to open service ltOracleoracleTNSListenergt error 1060

TNSLSNR for 32-bit Windows Version 92010 ndash Production

System parameter file is Coracleora92networkadminlistenerora

Log messages written to Coracleora92networkloglistenerlog

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220000

Uptime 0 days 0 hr 0 min 16 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File Coracleora92networkadminlistenerora

Listener Log File Coracleora92networkloglistenerlog

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summaryhellip

Service ldquoTESTrdquo has 1 instance(s)

Instance ldquoTESTrdquo status UNKNOWN has 1 handler(s) for this servicehellip

The command completed successfully

LSNRCTLgt stop

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

LSNRCTLgt start

Starting tnslsnr please waithellip

TNSLSNR for 32-bit Windows Version 92010 ndash Production

System parameter file is Coracleora92networkadminlistenerora

Log messages written to Coracleora92networkloglistenerlog

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220048

Uptime 0 days 0 hr 0 min 0 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File Coracleora92networkadminlistenerora

Listener Log File Coracleora92networkloglistenerlog

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summaryhellip

Service ldquoTESTrdquo has 1 instance(s)

Instance ldquoTESTrdquo status UNKNOWN has 1 handler(s) for this servicehellip

The command completed successfully

LSNRCTLgt exit

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt exit

Disconnected from Oracle9i Enterprise Edition Release 92010 ndash Production

With the Partitioning OLAP and Oracle Data Mining options

JServer Release 92010 ndash Production

CDocuments and SettingsAdministratorgtlsnrctl stop

LSNRCTL for 32-bit Windows Version 92010 ndash Production on 22-AUG-2009 220314

copyright (c) 1991 2002 Oracle Corporation All rights reserved

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

CDocuments and SettingsAdministratorgtoradim -delete -sid test

Step 3

Install ORACLE 10g Software in different Home

Starting the DB with 10g instance and upgradation Process

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649';

File created

SQLgt shut immediate

ORA-01507 database not mounted

ORACLE instance shut down

SQLgt startup upgrade

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

ORA-01990 error opening password file (create password file)

SQLgt conn as sysdba

Connected

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace creation script, as shown below.)

create tablespace SYSAUX datafile 'sysaux01.dbf'

size 70M reuse

extent management local

segment space management auto

online

Tablespace created

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database server version is not correct for this script

DOCgt Shutdown ABORT and use a different script or a different server

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database has not been opened for UPGRADE

DOCgt

DOCgt Perform a ldquoSHUTDOWN ABORTrdquo and

DOCgt restart using UPGRADE

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statements will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the SYSAUX tablespace does not exist or is not

DOCgt ONLINE for READ WRITE PERMANENT EXTENT MANAGEMENT LOCAL and

DOCgt SEGMENT SPACE MANAGEMENT AUTO

DOCgt

DOCgt The SYSAUX tablespace is used in 101 to consolidate data from

DOCgt a number of tablespaces that were separate in prior releases

DOCgt Consult the Oracle Database Upgrade Guide for sizing estimates

DOCgt

DOCgt Create the SYSAUX tablespace for example

DOCgt

DOCgt create tablespace SYSAUX datafile lsquosysaux01dbfrsquo

DOCgt size 70M reuse

DOCgt extent management local

DOCgt segment space management auto

DOCgt online

DOCgt

DOCgt Then rerun the u0902000sql script

DOCgt

DOCgt

DOCgt

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered

Session altered

The script will run according to the size of the databasehellip

All packagesscriptssynonyms will be upgraded

At last it will show the message as follows

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

1 row selected

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP DBUPG_END 2009-08-22 225909

1 row selected

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT()

mdashmdashmdash-

776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PLSQL procedure successfully completed

Oracle Database 101 Upgrade Status Tool 22-AUG-2009 111836

ndashgt Oracle Database Catalog Views Normal successful completion

ndashgt Oracle Database Packages and Types Normal successful completion

ndashgt JServer JAVA Virtual Machine Normal successful completion

ndashgt Oracle XDK Normal successful completion

ndashgt Oracle Database Java Packages Normal successful completion

ndashgt Oracle XML Database Normal successful completion

ndashgt Oracle Workspace Manager Normal successful completion

ndashgt Oracle Data Mining Normal successful completion

ndashgt OLAP Analytic Workspace Normal successful completion

ndashgt OLAP Catalog Normal successful completion

ndashgt Oracle OLAP API Normal successful completion

ndashgt Oracle interMedia Normal successful completion

ndashgt Spatial Normal successful completion

ndashgt Oracle Text Normal successful completion

ndashgt Oracle Ultra Search Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_BGN 2009-08-22 231907

1 row selected

PLSQL procedure successfully completed

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_END 2009-08-22 232013

1 row selected

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects where status='INVALID';

COUNT()

mdashmdashmdash-

0

1 row selected

SQL> select * from v$version;

BANNER

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdash-

Oracle Database 10g Enterprise Edition Release 101020 ndash Prod

PLSQL Release 101020 ndash Production

CORE 101020 Production

TNS for 32-bit Windows Version 101020 ndash Production

NLSRTL Version 101020 ndash Production

5 rows selected

Check the Database that everything is working fine


Duplicate Database With RMAN Without Connecting To Target Database

Filed under Duplicate database without connecting to target database using backups taken from RMAN on alternate host by Deepak mdash 3 Comments February 24 2010

Duplicate Database With RMAN Without Connecting To Target Database – from Metalink Doc ID 732624.1

hi

Just wanted to share this topic

How to duplicate a database without connecting to the target database, using backups taken from RMAN, on an alternate host. Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as of production>

Create an init.ora file and give db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backup piece of the controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure that the backup pieces are in the same location where they were on the production db. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged using the command: RMAN> catalog start with '<path where backup pieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:

run {
set newname for datafile 1 to 'newLocation/system.dbf';
set newname for datafile 2 to 'newLocation/undotbs.dbf';
…
restore database;
switch datafile all;
}
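Putting the note's steps together, a minimal end-to-end sketch might look like the following (the SID, paths and backup piece names are placeholders, not values from the note):

$export ORACLE_SID=PROD
$rman target /
RMAN> startup nomount pfile='/u01/app/oracle/initPROD.ora';
RMAN> restore controlfile from '/backup/ctl_c-123456789-20100101-00';
RMAN> alter database mount;
RMAN> catalog start with '/backup/';
RMAN> run {
set newname for datafile 1 to '/newlocation/system.dbf';
set newname for datafile 2 to '/newlocation/undotbs.dbf';
restore database;
switch datafile all;
}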


Features introduced in the various Oracle server releases

Filed under Features Of Various release of Oracle Database by Deepak mdash Leave a comment February 2 2010

Features introduced in the various server releasesSubmitted by admin on Sun 2005-10-30 1402

This document summarizes the differences between Oracle Server releases

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (1020) ndash September 2005

Transparent Data Encryption Async commits CONNECT ROLE can not only connect Passwords for DB Links are encrypted New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (1010)

Grid computing ndash an extension of the clustering feature (Real Application Clusters) Manageability improvements (self-tuning features)

Performance and scalability improvements Automated Storage Management (ASM) Automatic Workload Repository (AWR) Automatic Database Diagnostic Monitor (ADDM) Flashback operations available on row transaction table or database level Ability to UNDROP a table from a recycle bin Ability to rename tablespaces Ability to transport tablespaces across machine types (Eg Windows to Unix) New lsquodrop databasersquo statement New database scheduler ndash DBMS_SCHEDULER DBMS_FILE_TRANSFER Package Support for bigfile tablespaces that is up to 8 Exabytes in size Data Pump ndash faster data movement with expdp and impdp

Oracle 9i Release 2 (920)

Locally Managed SYSTEM tablespaces Oracle Streams ndash new data sharingreplication feature (can potentially replace Oracle

Advance Replication and Standby Databases) XML DB (Oracle is now a standards compliant XML database) Data segment compression (compress keys in tables ndash only when loading data) Cluster file system for Windows and Linux (raw devices are no longer required) Create logical standby databases with Data Guard Java JDK 13 used inside the database (JVM) Oracle Data Guard Enhancements (SQL Apply mode ndash logical copy of primary database

automatic failover Security Improvements ndash Default Install Accounts locked VPD on synonyms AES

Migrate Users to Directory

Oracle 9i Release 1 (901) ndash June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU) Using SMU Oracle will create itrsquos own ldquoRollback Segmentsrdquo and size them automatically without any DBA involvement

Flashback query (dbms_flashbackenable) ndash one can query data as it looked at some point in the past This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore

Use Oracle Ultra Search for searching databases file systems etc The UltraSearch crawler fetch data and hand it to Oracle Text to be indexed

Oracle Nameserver is still available but deprecate in favour of LDAP Naming (using the Oracle Internet Directory Server) A nameserver proxy is provided for backwards compatibility as pre-8i client cannot resolve names from an LDAP server

Oracle Parallel Serverrsquos (OPS) scalability was improved ndash now called Real Application Clusters (RAC) Full Cache Fusion implemented Any application can scale in a database cluster Applications doesnrsquot need to be cluster aware anymore

The Oracle Standby DB feature renamed to Oracle Data Guard New Logical Standby databases replay SQL on standby site allowing the database to be used for normal read write operations The Data Guard Broker allows single step fail-over when disaster strikes

Scrolling cursor support Oracle9i allows fetching backwards in a result set Dynamic Memory Management ndash Buffer Pools and shared pool can be resized on-the-fly

This eliminates the need to restart the database each time parameter changes were made On-line table and index reorganization VI (Virtual Interface) protocol support an alternative to TCPIP available for use with

Oracle Net (SQLNet) VI provides fast communications between components in a cluster

Build in XML Developers Kit (XDK) New data types for XML (XMLType) URIrsquos etc XML integrated with AQ

Cost Based Optimizer now also consider memory and CPU not only disk access cost as before

PLSQL programs can be natively compiled to binaries Deep data protection ndash fine grained security and auditing Put security on DB level SQL

access do not mean unrestricted access Resumable backups and statements ndash suspend statement instead of rolling back

immediately List Partitioning ndash partitioning on a list of values ETL (eXtract transformation load) Operations ndash with external tables and pipelining OLAP ndash Express functionality included in the DB Data Mining ndash Oracle Darwinrsquos features included in the DB

Oracle 8i (817)

Static HTTP server included (Apache) JVM Accelerator to improve performance of Java code Java Server Pages (JSP) engine MemStat ndash A new utility for analyzing Java Memory footprints OIS ndash Oracle Integration Server introduced PLSQL Gateway introduced for deploying PLSQL based solutions on the Web Enterprise Manager Enhancements ndash including new HTML based reporting and

Advanced Replication functionality included New Database Character Set Migration utility included

Oracle 8i (816)

PLSQL Server Pages (PSPrsquos) DBA Studio Introduced Statspack New SQL Functions (rank moving average) ALTER FREELISTS command (previously done by DROPCREATE TABLE) Checksums always on for SYSTEM tablespace allowing many possible corruptions to be

fixed before writing to disk

XML Parser for Java New PLSQL encryptdecrypt package introduced User and Schemas separated Numerous Performance Enhancements

Oracle 8i (815)

Fast Start recovery ndash Checkpoint rate auto-adjusted to meet roll forward criteria Reorganize indexesindex only tables which users accessing data ndash Online index rebuilds Log Miner introduced ndash Allows on-line or archived redo logs to be viewed via SQL OPS Cache Fusion introduced avoiding disk IO during cross-node communication Advanced Queueing improvements (security performance OO4O support User Security Improvements ndash more centralisation single enterprise user usersroles

across multiple databases Virtual private database JAVA stored procedures (Oracle Java VM) Oracle iFS Resource Management using priorities ndash resource classes Hash and Composite partitioned table types SQLLoader direct load API Copy optimizer statistics across databases to ensure same access paths across different

environments Standby Database ndash Auto shipping and application of redo logs Read Only queries on

standby database allowed Enterprise Manager v2 delivered NLS ndash Euro Symbol supported Analyze tables in parallel Temporary tables supported Net8 support for SSL HTTP HOP protocols Transportable tablespaces between databases Locally managed tablespaces ndash automatic sizing of extents elimination of tablespace

fragmentation tablespace information managed in tablespace (ie moved from data dictionary) improving tablespace reliability

Drop Column on table (Finally ) DBMS_DEBUG PLSQL package DBMS_SQL replaced by new EXECUTE

IMMEDIATE statement Progress Monitor to track long running DML DDL Functional Indexes ndash NLS case insensitive descending

Oracle 80 ndash June 1997

Object Relational database Object Types (not just date character number as in v7 SQL3 standard Call external procedures LOB gt1 per table

Partitioned Tables and Indexes exportimport individual partitions partitions in multiple tablespaces Onlineoffline backuprecover individual partitions mergebalance partitions Advanced Queuing for message handling Many performance improvements to SQLPLSQLOCI making more efficient use of

CPUMemory V7 limits extended (eg 1000 columnstable 4000 bytes VARCHAR2) Parallel DML statements Connection Pooling ( uses the physical connection for idle users and transparently re-

establishes the connection when needed) to support more concurrent users Improved ldquoSTARrdquo Query optimizer Integrated Distributed Lock Manager in Oracle PS (as opposed to Operating system DLM

in v7) Performance improvements in OPS ndash global V$ views introduced across all instances

transparent failover to a new node Data Cartridges introduced on database (eg image video context time spatial) BackupRecovery improvements ndash Tablespace point in time recovery incremental

backups parallel backuprecovery Recovery manager introduced Security Server introduced for central user administration User password expiry

password profiles allow custom password scheme Privileged database links (no need for password to be stored)

Fast Refresh for complex snapshots parallel replication PLSQL replication code moved in to Oracle kernel Replication manager introduced

Index Organized tables Deferred integrity constraint checking (deferred until end of transaction instead of end of

statement) SQLNet replaced by Net8 Reverse Key indexes Any VIEW updateable New ROWID format

Oracle 73

Partitioned Views Bitmapped Indexes Asynchronous read ahead for table scans Standby Database Deferred transaction recovery on instance startup Updatable Join Views (with restrictions) SQLDBA no longer shipped Index rebuilds db_verify introduced Context Option Spatial Data Option Tablespaces changes ndash Coalesce Temporary Permanent

Trigger compilation debug Unlimited extents on STORAGE clause Some initora parameters modifiable ndash TIMED_STATISTICS HASH Joins Antijoins Histograms Dependencies Oracle Trace Advanced Replication Object Groups PLSQL ndash UTL_FILE

Oracle 72

Resizable autoextend data files Shrink Rollback Segments manually Create table index UNRECOVERABLE Subquery in FROM clause PLSQL wrapper PLSQL Cursor variables Checksums ndash DB_BLOCK_CHECKSUM LOG_BLOCK_CHECKSUM Parallel create table Job Queues ndash DBMS_JOB DBMS_SPACE DBMS Application Info Sorting Improvements ndash SORT_DIRECT_WRITES

Oracle 71

ANSIISO SQL92 Entry Level Advanced Replication ndash Symmetric Data replication Snapshot Refresh Groups Parallel Recovery Dynamic SQL ndash DBMS_SQL Parallel Query Options ndash query index creation data loading Server Manager introduced Read Only tablespaces

Oracle 70 ndash June 1992

Database Integrity Constraints (primary foreign keys check constraints default values) Stored procedures and functions procedure packages Database Triggers View compilation User defined SQL functions Role based security Multiple Redo members ndash mirrored online redo log files Resource Limits ndash Profiles

Much enhanced Auditing Enhanced Distributed database functionality ndash INSERTS UPDATESDELETES 2PC Incomplete database recovery (eg SCN) Cost based optimiser TRUNCATE tables Datatype changes (ie VARCHAR2 CHAR VARCHAR) SQLNet v2 MTS Checkpoint process Data replication ndash Snapshots

Oracle 62

Oracle Parallel Server

Oracle 6 ndash July 1988

Row-level locking On-line database backups PLSQL in the database

Oracle 51

Distributed queries

Oracle 50 ndash 1986

Supporting for the Client-Server model ndash PCrsquos can access the DB on remote host

Oracle 4 ndash 1984

Read consistency

Oracle 3 ndash 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)

Nonblocking queries (no more read locks) Re-written in the C Programming Language

Oracle 2 ndash 1979

First public release Basic SQL functionality queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Referesh

Filed under Schema refresh by Deepak mdash 1 Comment December 15 2009

Steps for schema refresh

Schema refresh in oracle 9i

Now we are going to refresh SH schema

Steps for schema refresh ndash before exporting

Spool the output of roles and privileges assigned to the user use the query below to view the role s and privileges and spool the out as sql file

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total number of objects from the above query.
3. Write dynamic queries as below.
4. select 'grant ' || privilege || ' to sh;' from session_privs;
5. select 'grant ' || role || ' to sh;' from session_roles;
6. Query the default tablespace and size.
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH' group by tablespace_name;

Export the lsquoshrsquo schema

exp username/password file=<location>/sh_bkp.dmp log=<location>/sh_exp.log owner=SH direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace. 2. Now run the spooled roles and privileges scripts. 3. Connect as SH and verify the tablespace, roles and privileges. 4. Then start importing.

Importing The lsquoSHrsquo schema

imp username/password file=<location>/sh_bkp.dmp log=<location>/sh_imp.log

fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

Schema refresh by dropping objects and truncating objects

Export the lsquoshrsquo schema

Take the schema full export as show above

Drop all the objects in lsquoSHrsquo schema

To drop the all the objects in the Schema

Connect the schema

Spool the output

SQLgtset head off

SQL> spool drop_tables.sql

SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;

SQL> spool off

SQL> set head off

SQL> spool drop_other_objects.sql

SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;

SQLgtspool off

Now run the script all the objects will be dropped

Importing THE lsquoSHrsquo schema

Imp lsquousernmaepasswordrsquo file=rsquolocationsh_bkpdmprsquo log=rsquolocationsh_implogrsquo

Fromuser=rsquoSHrsquo touser=rsquoSHrsquo

SQLgt SELECT object_typecount() from dba_objects where owner=rsquoSHTESTrsquo group by object_type

Compiling and analyzing SH Schema

exec dbms_utilitycompile_schema(lsquoSHrsquo)

execdbms_utilityanalyze_schema(lsquoSHrsquoESTIMATErsquoESTIMATE_PERCENT=gt20)

Now connect the SH user and check for the import data

To enable constraints use the query below

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS

WHERE STATUS='DISABLED';

Truncate all the objects in lsquoSHrsquo schema

To truncate the all the objects in the Schema

Connect the schema

Spool the output

SQLgtset head off

SQL> spool truncate_tables.sql

SQL> select 'truncate table '||table_name||';' from user_tables;

SQL> spool off

SQL> set head off

SQL> spool truncate_other_objects.sql

SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;

SQLgtspool off

Now run the script all the objects will be truncated

Disabiling the reference constraints

If there is any constraint violation while truncating, use the query below to find the reference key (foreign key) constraints and disable them. Spool the output of the query below and run the script.

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS

where constraint_type='R'

and r_constraint_name in (select constraint_name from all_constraints

where table_name='TABLE_NAME');
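To turn that list into statements you can run, you could spool something like the following (TABLE_NAME remains a placeholder for the table you are truncating):

select 'alter table '||table_name||' disable constraint '||constraint_name||';'
from all_constraints
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints where table_name='TABLE_NAME');

Run the spooled output to disable the foreign keys, truncate, and then re-enable them with the ENABLE CONSTRAINT query shown earlier.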

Importing THE lsquoSHrsquo schema

Imp lsquousernmaepasswordrsquo file=rsquolocationsh_bkpdmprsquo log=rsquolocationsh_implogrsquo

Fromuser=rsquoSHrsquo touser=rsquoSHrsquo

SQLgt SELECT object_typecount() from dba_objects where owner=rsquoSHTESTrsquo group by object_type

Compiling and analyzing SH Schema

exec dbms_utilitycompile_schema(lsquoSHrsquo)

exec dbms_utilityanalyze_schema(lsquoSHrsquoESTIMATErsquoESTIMATE_PERCENT=gt20)

Now connect the SH user and check for the import data

Schema refresh in oracle 10g

Here we can use Datapump

Exporting the SH schema through Datapump

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the lsquoSHrsquo user

Query the default tablespace and verify the space in the tablespace and drop the user

SQLgtDrop user SH cascade

Importing the SH schema through datapump

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing to different schema use remap_schema option
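For example (the target schema name SH_TEST is only a placeholder), to load the SH dump into a different schema you could run:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=sh:sh_test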

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under JOB SCHEDULING by Deepak mdash Leave a comment December 15 2009

CRON JOB SCHEDULING ndashIN UNIX

To run system jobs on a dailyweeklymonthly basis To allow users to setup their own schedules

The system schedules are setup when the package is installed via the creation of some special directories

/etc/cron.d, /etc/cron.daily, /etc/cron.hourly, /etc/cron.monthly, /etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand Each line is a collection of six fields separated by spaces

The fields are

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this:

*     *     *     *     *        Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +------ Day of week (0 - 7)
|     |     |     +------------ Month (1 - 12)
|     |     +------------------ Day of month (1 - 31)
|     +------------------------ Hour (0 - 23)
+------------------------------ Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary) When you save the file and quit your editor it will be installed into the system unless it is found to contain errors

If you wish to change the editor used to edit the file set the EDITOR environmental variable like this

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected, as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null - meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour, on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers, you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly
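Most cron implementations also accept step values with the */N syntax, meaning "every N units"; this example (an illustrative addition, with an assumed script path) runs a job every fifteen minutes:

# Run the `status-check` script every 15 minutes
*/15 * * * * /usr/local/bin/status-check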

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp\coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click next and finish the scheduling.

Note

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule it, the job won't run. So edit the scheduled tasks and enter the new password.
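As an alternative to the GUI, the schtasks command line tool available on recent Windows versions can create the same job; a rough sketch (task name, path, and credentials are only examples):

schtasks /create /tn "ColdBackup" /tr "D:\scripts\cold_bkp.bat" /sc daily /st 02:00 /ru Administrator /rp <password>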

Comment

Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I Before Switchover

1 As I always recommend test the Switchover first on your testing systems before working on Production

2 Verify the primary database instance is open and the standby database instance is mounted

3 Verify there are no active users connected to the databases

4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following command on the Primary and Standby databases to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received use Real-time apply

II Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby db STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
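To confirm that the roles really changed, a quick check (not part of the original procedure, but a common sanity test) can be run on both instances:

SQL> select name, database_role, open_mode from v$database;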


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak, December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the Original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature that was released with Oracle Database 11g release 1, and that provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT key word Once a table column is marked with this keyword encryption and decryption are performed automatically without the need for any further user or application intervention The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column When an authorized user inserts new data into such a column TDE column encryption encrypts this data prior to storing it in the database Conversely when the user selects the column from the database TDE column encryption transparently decrypts this data back to its original clear text

format. Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (
       empid   NUMBER(6),
       empname VARCHAR2(100),
       salary  NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");
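To confirm which columns are protected by TDE, the data dictionary can be queried; a minimal sketch assuming the DP.EMP table created above:

SQL> SELECT owner, table_name, column_name, encryption_alg
     FROM dba_encrypted_columns
     WHERE owner = 'DP';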

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009
8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 - Production

With the Partitioning, Data Mining and Real Application Testing
options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports as used in the previous

examples If a schema tablespace or full mode export is performed then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter

There is however one exception transportable tablespace export mode does not support

encrypted columns An attempt to perform an export using this mode when the tablespace

contains tables with encrypted columns yields the following error

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009
8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 - Production

With the Partitioning, Data Mining and Real Application Testing
options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir
dumpfile=emp tables=emp

Estimate in progress using BLOCKS method...

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 16 KB

Processing object type TABLE_EXPORT/TABLE/TABLE

. . exported "DP"."EMP"    6.25 KB    3 rows

ORA-39173: Encrypted data has been stored unencrypted in dump file
set

Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded

Dump file set for DP.SYS_EXPORT_TABLE_01 is:

/ade/jkaloger_lx9/oracle/work/emp.dmp

Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009
8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 - Production

With the Partitioning, Data Mining and Real Application Testing
options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********
directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained

Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error
at 08:55:25

The ORA-29341 error in the previous example is not very informative If the same transportable

tablespace export is executed using Oracle Database 11g release 1 that version does a better job

at pinpointing the problem via the information in the ORA-39929 error

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an 'ORA-28365: wallet is not open' error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables then an ORA-26033 exception is raised when you try to import the export dump file set In the example of the DPEMP table the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed For example assume in the following example that the DPEMP table on the target system has been created exactly as it is on the source system except that the

ENCRYPT attribute has not been assigned to the SALARY column The output and resulting error messages would look as follows

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir dumpfile=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009
9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release
11.1.0.7.0 - Production

With the Partitioning, Data Mining and Real Application Testing
options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********
directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list
is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which
are not supported

Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error
at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it

into the connected database instance There are no export dump files involved in a network

mode import and therefore there is no re-encrypting of TDE column data Thus the use of the

ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the

following example

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote
TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009
11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 - Production

With the Partitioning, Data Mining and Real Application Testing
options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009
10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 -
Production

With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded

Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir
dumpfile=emp.dmp tables=emp encryption_password=********
table_exists_action=append

Processing object type TABLE_EXPORT/TABLE/TABLE

ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing
table but all dependent metadata will be skipped due to
table_exists_action of append

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being
skipped due to error:

ORA-02354: error in exporting/importing data

ORA-26033: column "EMP"."SALARY" encryption properties differ for source or
target table

Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes

encrypted column data the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed The following are cases in which the encryption password and Oracle Wallet are not needed

• A full metadata-only import

• A schema-mode import in which the referenced schemas do not include tables with encrypted columns

• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp) As is always the case when dealing with TDE columns the Oracle Wallet must first be open before creating the external table The following example creates an external table called DPXEMP and populates it using the data in the DPEMP table Notice that datatypes for the columns are not specified This is because they are determined by the column datatypes in the source table in the SELECT subquery

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition, on both the source and target database, in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1 the ability to encrypt the entire export dump file set is introduced and with it several new encrypted-related parameters A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump but instead means that the entire export dump file set will be encrypted To encrypt only TDE columns using Oracle Data Pump 11g it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY So the 10g example previously shown becomes the following in 11g

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
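For comparison, encrypting the entire dump file set in 11g (rather than only the TDE columns) would use ENCRYPTION=ALL; a sketch, assuming the same table and password as above:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ALL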


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA

In the following example the following SQL statement creates a directory object named

dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created you need to grant READ and WRITE permission on the directory to other users For example to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1 you must execute the following command

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

> expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to

Data Pump

Data Pump commands have a similar look and feel to the original Export and Import

commands but are different Below are a few examples that demonstrate some of these

differences

1) Example: import of tables from scott's account to jim's account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS

INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX

DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job You cannot mix the two parameters in one job

Both parameters work with Data Pump Import as well and you can use different INCLUDE and

EXCLUDE options for different operations on the same dump file
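A couple of illustrative filters (the schema and object names here are assumptions, not taken from the examples above):

> expdp username/password SCHEMAS=scott DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp EXCLUDE=INDEX,STATISTICS

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp INCLUDE=PROCEDURE,FUNCTION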

3) Tuning Parameters

Unlike original Export and Import which used the BUFFER COMMIT COMPRESS

CONSISTENT DIRECT and RECORDLENGTH parameters Data Pump needs no tuning to achieve maximum performance Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner Initialization parameters should be sufficient upon installation

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1

DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now, Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
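A sketch of what that looks like in practice, assuming a running job named hr (the same job name used in the parallelism example below): attach to the job, check its status, and change the degree of parallelism on the fly.

> expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> CONTINUE_CLIENT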

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr

DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6

DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode Because Data Pump jobs run entirely on the server you can start an export or import job detach from it and later reconnect to the job to monitor its progress Here are some of the things you can do while in this mode

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
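A minimal network-mode sketch (the database link name remote_db and the schema are assumptions); note that no DUMPFILE is specified because the data travels over the link:

> impdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=remote_db SCHEMAS=scott LOGFILE=scott_net.log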

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp

SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak, December 10, 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1. Take a full backup using RMAN.

SQL> archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all;

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default

CONFIGURE BACKUP OPTIMIZATION OFF; # default

CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default

CONFIGURE CONTROLFILE AUTOBACKUP ON;

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default

CONFIGURE MAXSETSIZE TO UNLIMITED; # default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF

input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF

input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF

input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF

input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF

input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF

input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF

input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF

input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF

input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='target db oradata path','clone db oradata path'

log_file_name_convert='target db oradata path','clone db oradata path'
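For the test/clone pair used in this example the two convert parameters might look like the following (the paths are only illustrative; adjust them to your own datafile layout):

db_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')
log_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')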

3. Start the listener using the lsnrctl command, and then start up the clone db in nomount using the pfile.

SQL> conn / as sysdba

Connected to an idle instance

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running…

SQL> select name from v$database;
select name from v$database
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs.

SQL> alter database open resetlogs;

Database altered.

9. Check for the DBID.

10. Create a temporary tablespace (a sketch is given after the DBID check below).

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550
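For step 10, the temporary tablespace can be added once the clone is open; a sketch, with the tablespace name, file path and size as assumptions:

SQL> CREATE TEMPORARY TABLESPACE temp1 TEMPFILE 'C:\oracle\oradata\clone\temp01.dbf' SIZE 100M AUTOEXTEND ON;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp1;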


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak, December 9, 2009

Oracle 10g - Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard Environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example, the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd ORACLE_HOME\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY Redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY Redo log groups.
My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable Archiving on PRIMARY.
If your PRIMARY database is not already in Archive Log mode, enable the archive log mode:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY Database Initialization Parameters.
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create the pfile from the spfile for the PRIMARY database.
- On Windows:
SQL> create pfile='...\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path in place of '...'.)

- On UNIX:
SQL> create pfile='.../dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path in place of '...'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile.
Data Guard must use SPFILE. Create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='...\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='...\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup
(Note: specify your Oracle home path in place of '...'.)

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='.../dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='.../dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup
(Note: specify your Oracle home path in place of '...'.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY Server.
On PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY Server (while the PRIMARY database is shut down):

1) Create a directory for the data files, for example on Windows, E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows, E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a Control File for the STANDBY database.
On PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations.
Create the adump, bdump, cdump, udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora.
On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. And then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='...\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='...\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='.../dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='.../dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your Oracle home path in place of '...'.)

12. Start Redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14 If you want the redo data to be applied as it is received without waiting for the current STANDBY redo log file to be archived enable the real-time apply

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled weekly Hot Whole database backup against my PRIMARY database that also backs up and delete the archived logs on PRIMARY

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management.
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly, otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


See the Oracle Database Upgrade Guide.

A SYSAUX tablespace is created upon upgrade to 10g. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle features and products that previously required their own tablespaces, it reduces the number of tablespaces required by Oracle that you, as a DBA, must maintain.

MANAGEABILITY

Database performance statistics are now collected by the Automatic Workload Repository (AWR) database component, automatically upon upgrade to 10g and also for newly created 10g databases. This data is stored in the SYSAUX tablespace and is used by the database for automatic generation of performance recommendations. See Chapter 5, "Automatic Performance Statistics", in the Oracle Database Performance Tuning Guide.

If you currently use Statspack for performance data gathering, see section 1 of the Statspack readme (spdoc.txt in the RDBMS/ADMIN directory) for directions on using Statspack in 10g to avoid conflict with the AWR.

MEMORY

Automatic PGA Memory Management is now enabled by default (unless PGA_AGGREGATE_TARGET is explicitly set to 0 or WORKAREA_SIZE_POLICY is explicitly set to MANUAL). PGA_AGGREGATE_TARGET defaults to 20% of the SGA size unless explicitly set. Oracle recommends tuning the value of PGA_AGGREGATE_TARGET after upgrading. See Chapter 14 of the Oracle Database Performance Tuning Guide.

Previously, the number of SQL cursors cached by PL/SQL was determined by OPEN_CURSORS. In 10g, the number of cursors cached is determined by SESSION_CACHED_CURSORS. See the Oracle Database Reference manual.

SHARED_POOL_SIZE must increase to include the space needed for shared pool overhead.

The default value of DB_BLOCK_SIZE is operating system specific, but is typically 8 KB (it was typically 2 KB in previous releases).

TRANSACTION/SPACE

Dropped objects are now moved to the recycle bin, where the space is only reused when it is needed. This allows 'undropping' a table using the FLASHBACK DROP feature. See Chapter 14 of the Oracle Database Administrator's Guide.

Auto-tuning undo retention is on by default. For more information, see Chapter 10, "Managing the Undo Tablespace", in the Oracle Database Administrator's Guide.

CREATE DATABASE

In addition to the SYSTEM tablespace, a SYSAUX tablespace is always created at database creation and upon upgrade to 10g. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle features and products that previously required their own tablespaces, it reduces the number of tablespaces required by Oracle that you, as a DBA, must maintain. See Chapter 2, "Creating a Database", in the Oracle Database Administrator's Guide.

In 10g, by default, all new databases are created with 10g file format compatibility. This means you can immediately use all the 10g features. Once a database uses 10g-compatible file formats, it is not possible to downgrade this database to prior releases.

Minimum and default logfile sizes are larger. The minimum is now 4 MB; the default is 50 MB, unless you are using Oracle Managed Files (OMF), when it is 100 MB.

PL/SQL procedure successfully completed.

SQL> archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\oradata\test\archive
Oldest online log sequence     91
Next log sequence to archive   93
Current log sequence           93

SQL> shut immediate

Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> exit

Back up the complete database (cold backup).

Step 2:

Check the space needed, stop the listener and delete the SID.

C:\Documents and Settings\Administrator>set oracle_sid=test

C:\Documents and Settings\Administrator>sqlplus /nolog

SQL*Plus: Release 9.2.0.1.0 - Production on Sat Aug 22 21:36:52 2009

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

SQL> conn / as sysdba

Connected to an idle instance.

SQLgt startup

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

Database mounted

Database opened

SQL> desc sm$ts_avail

Name                            Null?    Type
------------------------------- -------- --------------
TABLESPACE_NAME                          VARCHAR2(30)
BYTES                                    NUMBER

SQL> select * from sm$ts_avail;

TABLESPACE_NAME      BYTES
--------------- ----------

CWMLITE 20971520

DRSYS 20971520

EXAMPLE 155975680

INDX 26214400

ODM 20971520

SYSTEM 419430400

TOOLS 10485760

UNDOTBS1 209715200

USERS 26214400

XDB 39976960

10 rows selected

SQL> select * from sm$ts_used;

TABLESPACE_NAME      BYTES
--------------- ----------

CWMLITE 9764864

DRSYS 10092544

EXAMPLE 155779072

ODM 9699328

SYSTEM 414908416

TOOLS 6291456

UNDOTBS1 9814016

XDB 39714816

8 rows selected

SQL> select * from sm$ts_free;

TABLESPACE_NAME      BYTES
--------------- ----------

CWMLITE 11141120

DRSYS 10813440

EXAMPLE 131072

INDX 26148864

ODM 11206656

SYSTEM 4456448

TOOLS 4128768

UNDOTBS1 199753728

USERS 26148864

XDB 196608

10 rows selected

SQL> ho LSNRCTL

LSNRCTL> start

Starting tnslsnr: please wait...

Failed to open service <OracleoracleTNSListener>, error 1060.

TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
System parameter file is C:\oracle\ora92\network\admin\listener.ora
Log messages written to C:\oracle\ora92\network\log\listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
Start Date                22-AUG-2009 22:00:00
Uptime                    0 days 0 hr 0 min 16 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora
Listener Log File         C:\oracle\ora92\network\log\listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))
Services Summary...
Service "TEST" has 1 instance(s).
  Instance "TEST", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

LSNRCTL> stop

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))
The command completed successfully

LSNRCTL> start

Starting tnslsnr: please wait...

TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
System parameter file is C:\oracle\ora92\network\admin\listener.ora
Log messages written to C:\oracle\ora92\network\log\listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
Start Date                22-AUG-2009 22:00:48
Uptime                    0 days 0 hr 0 min 0 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora
Listener Log File         C:\oracle\ora92\network\log\listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))
Services Summary...
Service "TEST" has 1 instance(s).
  Instance "TEST", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

LSNRCTL> exit

SQL> shut immediate

Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> exit

Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

C:\Documents and Settings\Administrator>lsnrctl stop

LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 22-AUG-2009 22:03:14

Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))
The command completed successfully

C:\Documents and Settings\Administrator>oradim -delete -sid test

Step 3:

Install the Oracle 10g software in a different Oracle home.

Start the database with the 10g instance and begin the upgrade process.

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount

ORACLE instance started.

Total System Global Area  239075328 bytes
Fixed Size                   788308 bytes
Variable Size             212859052 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649';

File created.

SQL> shut immediate

ORA-01507: database not mounted

ORACLE instance shut down.

SQL> startup upgrade

ORACLE instance started.

Total System Global Area  239075328 bytes
Fixed Size                   788308 bytes
Variable Size             212859052 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes

ORA-01990: error opening password file (create password file)

SQL> conn / as sysdba

Connected.
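The ORA-01990 above simply means the new 10g home does not yet have a password file. A hedged example of creating one from another command prompt before continuing (the password is a placeholder):

C:\> orapwd file=E:\oracle\product\10.1.0\db_1\database\PWDtest.ora password=oracle entries=5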

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace script, as shown below.)

create tablespace SYSAUX datafile 'sysaux01.dbf'
size 70M reuse
extent management local
segment space management auto
online;

Tablespace created.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOC>
DOC>
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database server version is not correct for this script.
DOC>   Shutdown ABORT and use a different script, or a different server.
DOC>
DOC>
DOC>

no rows selected

DOC>
DOC>
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database has not been opened for UPGRADE.
DOC>
DOC>   Perform a "SHUTDOWN ABORT" and
DOC>   restart using UPGRADE.
DOC>
DOC>
DOC>

no rows selected

DOC>
DOC>
DOC>   The following statements will cause an "ORA-01722: invalid number"
DOC>   error if the SYSAUX tablespace does not exist, or is not
DOC>   ONLINE for READ WRITE, PERMANENT, EXTENT MANAGEMENT LOCAL, and
DOC>   SEGMENT SPACE MANAGEMENT AUTO.
DOC>
DOC>   The SYSAUX tablespace is used in 10.1 to consolidate data from
DOC>   a number of tablespaces that were separate in prior releases.
DOC>   Consult the Oracle Database Upgrade Guide for sizing estimates.
DOC>
DOC>   Create the SYSAUX tablespace, for example:
DOC>
DOC>   create tablespace SYSAUX datafile 'sysaux01.dbf'
DOC>       size 70M reuse
DOC>       extent management local
DOC>       segment space management auto
DOC>       online;
DOC>
DOC>   Then rerun the u0902000.sql script.
DOC>
DOC>
DOC>

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered.

Session altered.

The script will run according to the size of the database...

All packages/scripts/synonyms will be upgraded.

At last it will show a message as follows:

TIMESTAMP
--------------------------------------------------------------------------------

1 row selected.

PL/SQL procedure successfully completed.

COMP_ID    COMP_NAME                            STATUS    VERSION
---------- ------------------------------------ --------- ----------
CATALOG    Oracle Database Catalog Views        VALID     10.1.0.2.0
CATPROC    Oracle Database Packages and Types   VALID     10.1.0.2.0
JAVAVM     JServer JAVA Virtual Machine         VALID     10.1.0.2.0
XML        Oracle XDK                           VALID     10.1.0.2.0
CATJAVA    Oracle Database Java Packages        VALID     10.1.0.2.0
XDB        Oracle XML Database                  VALID     10.1.0.2.0
OWM        Oracle Workspace Manager             VALID     10.1.0.2.0
ODM        Oracle Data Mining                   VALID     10.1.0.2.0
APS        OLAP Analytic Workspace              VALID     10.1.0.2.0
AMD        OLAP Catalog                         VALID     10.1.0.2.0
XOQ        Oracle OLAP API                      VALID     10.1.0.2.0
ORDIM      Oracle interMedia                    VALID     10.1.0.2.0
SDO        Spatial                              VALID     10.1.0.2.0
CONTEXT    Oracle Text                          VALID     10.1.0.2.0
WK         Oracle Ultra Search                  VALID     10.1.0.2.0

15 rows selected.

DOC>
DOC>
DOC>
DOC>   The above query lists the SERVER components in the upgraded
DOC>   database, along with their current version and status.
DOC>
DOC>   Please review the status and version columns and look for
DOC>   any errors in the spool log file. If there are errors in the spool
DOC>   file, or any components are not VALID or not the current version,
DOC>   consult the Oracle Database Upgrade Guide for troubleshooting
DOC>   recommendations.
DOC>
DOC>   Next, shutdown immediate, restart for normal operation, and then
DOC>   run utlrp.sql to recompile any invalid application objects.
DOC>
DOC>
DOC>
DOC>


TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP DBUPG_END  2009-08-22 22:59:09

1 row selected.

SQL> shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQL> startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
       776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PL/SQL procedure successfully completed.

Oracle Database 10.1 Upgrade Status Tool   22-AUG-2009 11:18:36

--> Oracle Database Catalog Views          Normal successful completion
--> Oracle Database Packages and Types     Normal successful completion
--> JServer JAVA Virtual Machine           Normal successful completion
--> Oracle XDK                             Normal successful completion
--> Oracle Database Java Packages          Normal successful completion
--> Oracle XML Database                    Normal successful completion
--> Oracle Workspace Manager               Normal successful completion
--> Oracle Data Mining                     Normal successful completion
--> OLAP Analytic Workspace                Normal successful completion
--> OLAP Catalog                           Normal successful completion
--> Oracle OLAP API                        Normal successful completion
--> Oracle interMedia                      Normal successful completion
--> Spatial                                Normal successful completion
--> Oracle Text                            Normal successful completion
--> Oracle Ultra Search                    Normal successful completion

No problems detected during upgrade.

PL/SQL procedure successfully completed.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2009-08-22 23:19:07

1 row selected.

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2009-08-22 23:20:13

1 row selected.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
         0

1 row selected

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod
PL/SQL Release 10.1.0.2.0 - Production
CORE    10.1.0.2.0      Production
TNS for 32-bit Windows: Version 10.1.0.2.0 - Production
NLSRTL Version 10.1.0.2.0 - Production

5 rows selected

Check the database to make sure everything is working fine.
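One quick way to double-check the upgraded components, as a small sketch:

SQL> select comp_name, version, status from dba_registry;

Every component should report a 10.1.0.x version with a VALID status.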


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host - by Deepak - 3 Comments - February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink ID 732624.1

hi

Just wanted to share this topic

How do you duplicate a database without connecting to the target database, using backups taken from RMAN, on an alternate host?

Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as of production>.
Create an init.ora file, and give db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) Startup nomount pfile=<path of init.ora>.

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backup piece of the controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure the backup pieces are in the same location as they were on the production db. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged using the command:

RMAN> catalog start with '<path where backup pieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:

run {
set newname for datafile 1 to '<newLocation>/system.dbf';
set newname for datafile 2 to '<newLocation>/undotbs.dbf';
...
restore database;
switch datafile all;
}
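Putting those steps together, a minimal end-to-end sketch; the SID, paths and backup piece names below are placeholders of my own rather than values from the note, and after the restore you would still recover and open the clone as appropriate for your situation:

$ export ORACLE_SID=PROD
$ sqlplus / as sysdba
SQL> startup nomount pfile=/u01/app/oracle/initPROD.ora
SQL> exit
$ rman target /
RMAN> restore controlfile from '/backup/PROD_ctl.bkp';
RMAN> alter database mount;
RMAN> catalog start with '/backup/';
RMAN> run {
  set newname for datafile 1 to '/u02/oradata/PROD/system01.dbf';
  set newname for datafile 2 to '/u02/oradata/PROD/undotbs01.dbf';
  restore database;
  switch datafile all;
}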


Features introduced in the various Oracle server releases

Filed under: Features Of Various Releases of Oracle Database - by Deepak - Leave a comment - February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) - September 2005

Transparent Data Encryption. Async commits. The CONNECT role can now only connect. Passwords for DB links are encrypted. New asmcmd utility for managing ASM storage.

Oracle 10g Release 1 (10.1.0)

Grid computing - an extension of the clustering feature (Real Application Clusters). Manageability improvements (self-tuning features).

Performance and scalability improvements. Automated Storage Management (ASM). Automatic Workload Repository (AWR). Automatic Database Diagnostic Monitor (ADDM). Flashback operations available on row, transaction, table or database level. Ability to UNDROP a table from a recycle bin. Ability to rename tablespaces. Ability to transport tablespaces across machine types (e.g. Windows to Unix). New 'drop database' statement. New database scheduler - DBMS_SCHEDULER. DBMS_FILE_TRANSFER package. Support for bigfile tablespaces of up to 8 exabytes in size. Data Pump - faster data movement with expdp and impdp.

Oracle 9i Release 2 (9.2.0)

Locally managed SYSTEM tablespaces. Oracle Streams - new data sharing/replication feature (can potentially replace Oracle Advanced Replication and Standby Databases). XML DB (Oracle is now a standards-compliant XML database). Data segment compression (compress keys in tables - only when loading data). Cluster file system for Windows and Linux (raw devices are no longer required). Create logical standby databases with Data Guard. Java JDK 1.3 used inside the database (JVM). Oracle Data Guard enhancements (SQL Apply mode - logical copy of primary database, automatic failover). Security improvements - default install accounts locked, VPD on synonyms, AES, migrate users to directory.

Oracle 9i Release 1 (9.0.1) - June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "rollback segments" and size them automatically without any DBA involvement.

Flashback query (dbms_flashback.enable) - one can query data as it looked at some point in the past. This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore.

Use Oracle Ultra Search for searching databases, file systems, etc. The UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.

Oracle Nameserver is still available but deprecated in favour of LDAP Naming (using the Oracle Internet Directory Server). A nameserver proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.

Oracle Parallel Server's (OPS) scalability was improved - now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster. Applications don't need to be cluster-aware anymore.

The Oracle Standby DB feature was renamed to Oracle Data Guard. New logical standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.

Scrolling cursor support: Oracle9i allows fetching backwards in a result set. Dynamic memory management - buffer pools and the shared pool can be resized on-the-fly. This eliminates the need to restart the database each time parameter changes are made. On-line table and index reorganization. VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net). VI provides fast communications between components in a cluster.

Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.

The Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.

PL/SQL programs can be natively compiled to binaries. Deep data protection - fine-grained security and auditing; putting security on the DB level means SQL access does not mean unrestricted access. Resumable backups and statements - suspend a statement instead of rolling back immediately. List partitioning - partitioning on a list of values. ETL (eXtract, Transformation, Load) operations - with external tables and pipelining. OLAP - Express functionality included in the DB. Data Mining - Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)

Static HTTP server included (Apache). JVM Accelerator to improve performance of Java code. Java Server Pages (JSP) engine. MemStat - a new utility for analyzing Java memory footprints. OIS - Oracle Integration Server introduced. PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web. Enterprise Manager enhancements - including new HTML-based reporting and Advanced Replication functionality. New Database Character Set Migration utility included.

Oracle 8i (8.1.6)

PL/SQL Server Pages (PSPs). DBA Studio introduced. Statspack. New SQL functions (rank, moving average). ALTER FREELISTS command (previously done by DROP/CREATE TABLE). Checksums always on for the SYSTEM tablespace, allowing many possible corruptions to be fixed before writing to disk.

XML Parser for Java. New PL/SQL encrypt/decrypt package introduced. Users and schemas separated. Numerous performance enhancements.

Oracle 8i (8.1.5)

Fast Start recovery - checkpoint rate auto-adjusted to meet roll-forward criteria. Reorganize indexes/index-only tables while users are accessing data - online index rebuilds. Log Miner introduced - allows on-line or archived redo logs to be viewed via SQL. OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication. Advanced Queueing improvements (security, performance, OO4O support). User security improvements - more centralisation, single enterprise user, users/roles across multiple databases. Virtual private database. Java stored procedures (Oracle Java VM). Oracle iFS. Resource management using priorities - resource classes. Hash and composite partitioned table types. SQL*Loader direct load API. Copy optimizer statistics across databases to ensure the same access paths across different environments. Standby database - auto shipping and application of redo logs; read-only queries on the standby database allowed. Enterprise Manager v2 delivered. NLS - Euro symbol supported. Analyze tables in parallel. Temporary tables supported. Net8 support for SSL, HTTP, HOP protocols. Transportable tablespaces between databases. Locally managed tablespaces - automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability.

Drop column on a table (finally!). DBMS_DEBUG PL/SQL package. DBMS_SQL replaced by the new EXECUTE IMMEDIATE statement. Progress monitor to track long-running DML and DDL. Functional indexes - NLS, case insensitive, descending.

Oracle 8.0 - June 1997

Object relational database. Object types (not just date, character, number as in v7; SQL3 standard). Call external procedures. LOBs - more than one per table.

Partitioned tables and indexes; export/import individual partitions; partitions in multiple tablespaces. Online/offline backup and recovery of individual partitions; merge/balance partitions. Advanced Queuing for message handling. Many performance improvements to SQL/PL/SQL/OCI, making more efficient use of CPU/memory. V7 limits extended (e.g. 1000 columns/table, 4000-byte VARCHAR2). Parallel DML statements. Connection pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users. Improved "STAR" query optimizer. Integrated Distributed Lock Manager in Oracle OPS (as opposed to the operating system DLM in v7). Performance improvements in OPS - global V$ views introduced across all instances, transparent failover to a new node. Data cartridges introduced on the database (e.g. image, video, context, time, spatial). Backup/recovery improvements - tablespace point-in-time recovery, incremental backups, parallel backup/recovery; Recovery Manager introduced. Security Server introduced for central user administration. User password expiry; password profiles allow custom password schemes. Privileged database links (no need for a password to be stored).

Fast refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel. Replication Manager introduced.

Index-organized tables. Deferred integrity constraint checking (deferred until end of transaction instead of end of statement). SQL*Net replaced by Net8. Reverse key indexes. Any VIEW updateable. New ROWID format.

Oracle 7.3

Partitioned views. Bitmapped indexes. Asynchronous read-ahead for table scans. Standby database. Deferred transaction recovery on instance startup. Updatable join views (with restrictions). SQL*DBA no longer shipped. Index rebuilds. db_verify introduced. Context option. Spatial data option. Tablespace changes - coalesce, temporary, permanent.

Trigger compilation and debug. Unlimited extents on the STORAGE clause. Some init.ora parameters modifiable online - TIMED_STATISTICS. Hash joins, antijoins. Histograms. Dependencies. Oracle Trace. Advanced Replication object groups. PL/SQL - UTL_FILE.

Oracle 7.2

Resizable, autoextend data files. Shrink rollback segments manually. Create table/index UNRECOVERABLE. Subquery in FROM clause. PL/SQL wrapper. PL/SQL cursor variables. Checksums - DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM. Parallel create table. Job queues - DBMS_JOB. DBMS_SPACE. DBMS Application Info. Sorting improvements - SORT_DIRECT_WRITES.

Oracle 7.1

ANSI/ISO SQL92 Entry Level. Advanced Replication - symmetric data replication. Snapshot refresh groups. Parallel recovery. Dynamic SQL - DBMS_SQL. Parallel query options - query, index creation, data loading. Server Manager introduced. Read-only tablespaces.

Oracle 7.0 - June 1992

Database integrity constraints (primary/foreign keys, check constraints, default values). Stored procedures and functions, procedure packages. Database triggers. View compilation. User-defined SQL functions. Role-based security. Multiple redo members - mirrored online redo log files. Resource limits - profiles.

Much enhanced auditing. Enhanced distributed database functionality - INSERTs, UPDATEs, DELETEs, 2PC. Incomplete database recovery (e.g. to an SCN). Cost based optimiser. TRUNCATE tables. Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR). SQL*Net v2, MTS. Checkpoint process. Data replication - snapshots.

Oracle 6.2

Oracle Parallel Server.

Oracle 6 - July 1988

Row-level locking. On-line database backups. PL/SQL in the database.

Oracle 5.1

Distributed queries.

Oracle 5.0 - 1986

Support for the client-server model - PCs can access the DB on a remote host.

Oracle 4 - 1984

Read consistency.

Oracle 3 - 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions). Non-blocking queries (no more read locks). Re-written in the C programming language.

Oracle 2 - 1979

First public release. Basic SQL functionality: queries and joins.

Source: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh - by Deepak - 1 Comment - December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh - before exporting:

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file.

1. SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;
2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant '||privilege||' to sh' from session_privs;
5. select 'grant '||role||' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes/1024/1024) from dba_segments where owner='SH'
   group by tablespace_name;

Export the 'SH' schema:

exp username/password file='location\sh_bkp.dmp' log='location\sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema:

Drop the SH schema.

1. Create the SH schema with the default tablespace, and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file='location\sh_bkp.dmp' log='location\sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.
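A quick sanity check after the import, as a small sketch:

SQL> select object_name, object_type from user_objects where status='INVALID';

Anything still listed here can usually be recompiled with utlrp.sql or an explicit ALTER ... COMPILE.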

Schema refresh by dropping or truncating objects

Export the 'SH' schema:

Take the schema full export as shown above.

Drop all the objects in the 'SH' schema.

To drop all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts, and all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file='location\sh_bkp.dmp' log='location\sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema.

To truncate all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts, and all the objects will be truncated.

Disabling the reference constraints:

If there is any constraint violation while truncating, use the query below to find the referencing (foreign key) constraints and disable them. Spool the output of the query and run the generated script.

select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');

Importing the 'SH' schema:

imp username/password file='location\sh_bkp.dmp' log='location\sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh
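expdp writes to a server-side directory object. If DATA_PUMP_DIR is not already present in your release, or you want your own location, a hedged example (the directory name and path are placeholders of my own):

SQL> create directory dp_dir as '/u01/exports';
SQL> grant read, write on directory dp_dir to sh;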

Dropping the 'SH' user:

Query the default tablespace, verify the space in the tablespace, and drop the user.

SQL> drop user sh cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option.
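For example, a small sketch that restores SH into a hypothetical SH_TEST schema (SH_TEST is a placeholder name of my own):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=sh:sh_test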

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING - by Deepak - Leave a comment - December 15, 2009

CRON JOB SCHEDULING - IN UNIX

To run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # list skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use the name)
6. The command to run

More graphically, they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0-7)
|     |     |     +----------- Month (1-12)
|     |     +----------------- Day of month (1-31)
|     +----------------------- Hour (0-23)
+----------------------------- Min (0-59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system, unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected as follows:

0 * * * * /bin/ls > /dev/null 2>&1

This causes all output to be redirected to /dev/null - meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly
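As a DBA-flavoured illustration, a nightly RMAN backup script (the script path is a placeholder of my own) could be scheduled at 01:30 every day like this:

30 1 * * * /home/oracle/scripts/nightly_backup.sh > /tmp/nightly_backup.log 2>&1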

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment.

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp_registry\database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run. So edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g - by Deepak - 1 Comment - December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM
Standby = STAN

I. Before Switchover:

1. As I always recommend, test the switchover first on your test systems before working on production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary and standby databases to find out:
SQL> select sequence#, applied from v$archived_log;
Perform SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.
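It is also worth checking switchover readiness before going ahead; a quick sketch:

SQL> select switchover_status from v$database;

On the primary this normally reports TO STANDBY (or SESSIONS ACTIVE) when it is safe to proceed.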

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role.
Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
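To confirm the new roles once the switchover completes, a quick check on both instances:

SQL> select name, database_role from v$database;

PRIM should now report PHYSICAL STANDBY and STAN should report PRIMARY.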


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump - by Deepak - Leave a comment - December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and which provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns, for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format. Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature therefore remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials, as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although the ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009

82123

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39001 invalid argument value

ORA-39180 unable to encrypt ENCRYPTION_PASSWORD

ORA-28365 wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode, when the tablespace contains tables with encrypted columns, yields the following error:

$ expdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp TABLES=emp

Export Release 102040 ndash Production on Wednesday 09 July 2009

84843

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting "DP"."SYS_EXPORT_TABLE_01":  dp/******** directory=dpump_dir
dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"                         6.25 KB       3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp systempassword DIRECTORY=dpump_dir DUMPFILE=dpdmp

TRANSPORT_TABLESPACES=dp

Export Release 102040 ndash Production on Thursday 09 July 2009

85507

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/********
directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir dumpfile=dp.dmp
TRANSPORT_TABLESPACES=dp

Export Release 111070 ndash Production on Thursday 09 July 2009

90900

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 11g Enterprise Edition Release

111070 ndash Production

With the Partitioning Data Mining and Real Application Testing

options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/********
directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dpdp TABLES=dpemp DIRECTORY=dpump_dir NETWORK_LINK=remote

TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import Release 102040 ndash Production on Friday 09 July 2009

110057

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39005 inconsistent arguments

ORA-39115 ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import Release 102040 ndash Production on Thursday 09 July 2009

105540

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release 102040 -

Production

With the Partitioning Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01":  dp/******** directory=dpump_dir
dumpfile=emp.dmp tables=emp encryption_password=********
table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table
but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.
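As a minimal sketch (reusing the dp.emp table and the remote database link from the failing example above), the network mode import simply omits the encryption password:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote
TABLE_EXISTS_ACTION=APPEND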

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed (a sample command follows the list):

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns
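For instance, a metadata-only import of the dump file set can run without the encryption password or an open wallet because no encrypted column data is touched. A minimal sketch, assuming the directory and dump file names used earlier:

$ impdp system/password DIRECTORY=dpump_dir DUMPFILE=emp.dmp FULL=y CONTENT=METADATA_ONLY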

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified. This is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear-text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import:

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
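As an illustrative sketch (the schema and file names here are only examples, not from the original text), a schema export that filters out grants and indexes would look like this:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott_noidx.dmp
SCHEMAS=scott EXCLUDE=GRANT EXCLUDE=INDEX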

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
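A minimal sketch of changing the degree of parallelism interactively, assuming a running export job named hr (as in the example further below): attach to the job and issue PARALLEL at the Export> prompt.

> expdp username/password ATTACH=hr
Export> PARALLEL=8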

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory, and I/O.
• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.
• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.
• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (a sample session follows the list):

• See the status of the job. All of the information needed to monitor the job's execution is available.
• Add more dump files if there is insufficient disk space for an export file.
• Change the default size of the dump files.
• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).
• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.
• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).
• Attach to a job from a remote site (such as from home) to monitor status.
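A short illustrative session, assuming the export job named hr from the earlier PARALLEL example is still defined: attach to it, check its status, stop it, and restart it later.

> expdp username/password ATTACH=hr
Export> STATUS
Export> STOP_JOB=IMMEDIATE
> expdp username/password ATTACH=hr
Export> START_JOB
Export> EXIT_CLIENT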

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful when you are moving data between databases, such as from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
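A hedged sketch of a network export (the database link name source_db and schema hr are placeholders, not from the original post): the data is read from the remote instance over the link and written to a local dump file set.

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_remote.dmp
NETWORK_LINK=source_db SCHEMAS=hr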

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak, December 10, 2009

Clone database using Rman

Target db: test

Clone db: clone

In the target database:

1. Take a full backup using RMAN.

SQL> archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 – Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In the clone database:

1. Create a service and a password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for a database.

2. Edit the pfile and add the following parameters (a concrete sketch follows):

db_file_name_convert='<target db oradata path>','<clone db oradata path>'
log_file_name_convert='<target db oradata path>','<clone db oradata path>'
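For this walkthrough, assuming the target datafiles live under C:\oracle\oradata\test (as in the backup listing above) and the clone's files are to go under a parallel clone directory (an illustrative path, not taken from the original post), the entries might look like:

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'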

3. Start the listener using the lsnrctl command and then start the clone DB in NOMOUNT using the pfile:

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (the target DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (the clone DB name)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running…

SQL> select name from v$database;

select name from v$database

ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;

alter database mount

ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs:

SQL> alter database open resetlogs;

Database altered

9. Check the DBID.

10. Create a temporary tablespace.

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak, December 9, 2009

Oracle 10g – Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use a Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first before working on the production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:

SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.

1) To check if a password file already exists, run the following command:

SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:

- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.

1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:

SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:

SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:

SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:

- On Windows:
SQL> create pfile='…\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '…'.)

- On UNIX:
SQL> create pfile='…/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '…'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.

- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='…\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='…\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;
(Note: specify your Oracle home path to replace '…'.)

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='…/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='…/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;
(Note: specify your Oracle home path to replace '…'.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.

On the PRIMARY DB:

SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):

1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:

SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations. Create the directories adump, bdump, cdump, udump, and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):

$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.

1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up nomount the STANDBY database and generate an spfile.

- On Windows:
SQL> startup nomount pfile='…\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='…\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='…/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='…/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your Oracle home path to replace '…'.)

12. Start redo apply.

1) On the STANDBY database, to start redo apply:

SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:

SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.

1) On STANDBY, perform a query:

SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:

SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:

SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:

SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:

$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:

RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


PGA_AGGREGATE_TARGET is defaulted to 20 of the SGA size unless

explicitly set Oracle recommends tuning the value of

PGA_AGGREGATE_TARGET after upgrading See Chapter 14 of the

Oracle Database Performance Tuning Guide

Previously the number of SQL cursors cached by PLSQL was

determined by OPEN_CURSORS In 10g the number of cursors cached

is determined by SESSION_CACHED_CURSORS See the Oracle Database

Reference manual

SHARED_POOL_SIZE must increase to include the space needed for

shared pool overhead

The default value of DB_BLOCK_SIZE is operating system

specific but is typically 8KB (was typically 2KB in previous

releases)

TRANSACTIONSPACE

Dropped objects are now moved to the recycle bin where the

space is only reused when it is needed. This allows 'undropping'

a table using the FLASHBACK DROP feature. See Chapter 14 of the

Oracle Database Administrator's Guide.

Auto tuning undo retention is on by default. For more

information, see Chapter 10, "Managing the Undo Tablespace", in

the Oracle Database Administrator's Guide.

CREATE DATABASE

In addition to the SYSTEM tablespace a SYSAUX tablespace is

always created at database creation and upon upgrade to 10g The

SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM

tablespace Because it is the default tablespace for many Oracle

features and products that previously required their own

tablespaces it reduces the number of tablespaces required by

Oracle that you, as a DBA, must maintain. See Chapter 2,

"Creating a Database", in the Oracle Database Administrator's

Guide.

In 10g by default all new databases are created with 10g file

format compatibility This means you can immediately use all the

10g features Once a database uses 10g compatible file formats

it is not possible to downgrade this database to prior releases

Minimum and default logfile sizes are larger Minimum is now 4

MB default is 50MB unless you are using Oracle Managed Files

(OMF) when it is 100 MB

PLSQL procedure successfully completed

SQL> archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\oradata\test\archive

Oldest online log sequence 91

Next log sequence to archive 93

Current log sequence 93

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt exit

Backup complete database (Cold backup)

Step 2:

Check the space needed, stop the listener, and delete the SID.

C:\Documents and Settings\Administrator>set oracle_sid=test

C:\Documents and Settings\Administrator>sqlplus /nolog

SQL*Plus: Release 9.2.0.1.0 – Production on Sat Aug 22 21:36:52 2009

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

SQL> conn / as sysdba

Connected to an idle instance

SQLgt startup

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

Database mounted

Database opened

SQLgt desc sm$ts_avail

Name Null Type

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashndash mdashmdashmdashmdashmdashmdashmdashmdashmdash-

TABLESPACE_NAME VARCHAR2(30)

BYTES NUMBER

SQL> select * from sm$ts_avail;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 20971520

DRSYS 20971520

EXAMPLE 155975680

INDX 26214400

ODM 20971520

SYSTEM 419430400

TOOLS 10485760

UNDOTBS1 209715200

USERS 26214400

XDB 39976960

10 rows selected

SQL> select * from sm$ts_used;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 9764864

DRSYS 10092544

EXAMPLE 155779072

ODM 9699328

SYSTEM 414908416

TOOLS 6291456

UNDOTBS1 9814016

XDB 39714816

8 rows selected

SQL> select * from sm$ts_free;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 11141120

DRSYS 10813440

EXAMPLE 131072

INDX 26148864

ODM 11206656

SYSTEM 4456448

TOOLS 4128768

UNDOTBS1 199753728

USERS 26148864

XDB 196608

10 rows selected

SQL> ho LSNRCTL

LSNRCTL> start

Starting tnslsnr: please wait…

Failed to open service <OracleoracleTNSListener>, error 1060.

TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 – Production

System parameter file is C:\oracle\ora92\network\admin\listener.ora

Log messages written to C:\oracle\ora92\network\log\listener.log

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220000

Uptime 0 days 0 hr 0 min 16 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora

Listener Log File         C:\oracle\ora92\network\log\listener.log

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summaryhellip

Service "TEST" has 1 instance(s).

Instance "TEST", status UNKNOWN, has 1 handler(s) for this service…

The command completed successfully

LSNRCTLgt stop

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

LSNRCTLgt start

Starting tnslsnr please waithellip

TNSLSNR for 32-bit Windows Version 92010 ndash Production

System parameter file is C:\oracle\ora92\network\admin\listener.ora

Log messages written to C:\oracle\ora92\network\log\listener.log

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220048

Uptime 0 days 0 hr 0 min 0 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora

Listener Log File         C:\oracle\ora92\network\log\listener.log

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summaryhellip

Service "TEST" has 1 instance(s).

Instance "TEST", status UNKNOWN, has 1 handler(s) for this service…

The command completed successfully

LSNRCTLgt exit

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt exit

Disconnected from Oracle9i Enterprise Edition Release 92010 ndash Production

With the Partitioning OLAP and Oracle Data Mining options

JServer Release 92010 ndash Production

C:\Documents and Settings\Administrator>lsnrctl stop

LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 – Production on 22-AUG-2009 22:03:14

Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

C:\Documents and Settings\Administrator>oradim -delete -sid test

Step 3:

Install the Oracle 10g software in a different home.

Start the DB with the 10g instance and begin the upgrade process:

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649'

File created

SQLgt shut immediate

ORA-01507 database not mounted

ORACLE instance shut down

SQLgt startup upgrade

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

ORA-01990: error opening password file (create password file)

SQL> conn / as sysdba

Connected.

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace creation script shown below.)

create tablespace SYSAUX datafile 'sysaux01.dbf'

size 70M reuse

extent management local

segment space management auto

online

Tablespace created

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database server version is not correct for this script

DOCgt Shutdown ABORT and use a different script or a different server

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database has not been opened for UPGRADE

DOCgt

DOCgt Perform a ldquoSHUTDOWN ABORTrdquo and

DOCgt restart using UPGRADE

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statements will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the SYSAUX tablespace does not exist or is not

DOCgt ONLINE for READ WRITE PERMANENT EXTENT MANAGEMENT LOCAL and

DOCgt SEGMENT SPACE MANAGEMENT AUTO

DOCgt

DOCgt The SYSAUX tablespace is used in 101 to consolidate data from

DOCgt a number of tablespaces that were separate in prior releases

DOCgt Consult the Oracle Database Upgrade Guide for sizing estimates

DOCgt

DOCgt Create the SYSAUX tablespace for example

DOCgt

DOCgt create tablespace SYSAUX datafile lsquosysaux01dbfrsquo

DOCgt size 70M reuse

DOCgt extent management local

DOCgt segment space management auto

DOCgt online

DOCgt

DOCgt Then rerun the u0902000sql script

DOCgt

DOCgt

DOCgt

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered

Session altered

The script will run according to the size of the databasehellip

All packagesscriptssynonyms will be upgraded

At last it will show the message as follows

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

1 row selected

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP DBUPG_END 2009-08-22 22:59:09

1 row selected

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT()

mdashmdashmdash-

776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PLSQL procedure successfully completed

Oracle Database 10.1 Upgrade Status Tool 22-AUG-2009 11:18:36

-> Oracle Database Catalog Views    Normal successful completion
-> Oracle Database Packages and Types    Normal successful completion
-> JServer JAVA Virtual Machine    Normal successful completion
-> Oracle XDK    Normal successful completion
-> Oracle Database Java Packages    Normal successful completion
-> Oracle XML Database    Normal successful completion
-> Oracle Workspace Manager    Normal successful completion
-> Oracle Data Mining    Normal successful completion
-> OLAP Analytic Workspace    Normal successful completion
-> OLAP Catalog    Normal successful completion
-> Oracle OLAP API    Normal successful completion
-> Oracle interMedia    Normal successful completion
-> Spatial    Normal successful completion
-> Oracle Text    Normal successful completion
-> Oracle Ultra Search    Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_BGN 2009-08-22 23:19:07

1 row selected

PLSQL procedure successfully completed

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_END 2009-08-22 23:20:13

1 row selected

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects where status='INVALID';

COUNT()

mdashmdashmdash-

0

1 row selected

SQL> select * from v$version;

BANNER

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdash-

Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 – Prod
PL/SQL Release 10.1.0.2.0 – Production
CORE 10.1.0.2.0 Production
TNS for 32-bit Windows: Version 10.1.0.2.0 – Production
NLSRTL Version 10.1.0.2.0 – Production

5 rows selected

Check the database to confirm that everything is working fine.


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host by Deepak, February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database – from Metalink ID 732624.1

hi

Just wanted to share this topic

How to duplicate a database without connecting to the target database, using backups taken from RMAN, on an alternate host.

Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as on production>.

Create an init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backuppiece of controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure that the backup pieces are in the same location where they were on the production DB. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece <piece name and path>

If there are more backup pieces, they can be cataloged using the command:

RMAN> catalog start with <path where backuppieces are stored>

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:

run {
set newname for datafile 1 to '<newLocation>/system.dbf';
set newname for datafile 2 to '<newLocation>/undotbs.dbf';
...
restore database;
switch datafile all;
}


Features introduced in the various Oracle server releases

Filed under: Features Of Various release of Oracle Database by Deepak, February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) – September 2005

• Transparent Data Encryption
• Async commits
• The CONNECT role can now only connect
• Passwords for DB links are encrypted
• New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (10.1.0)

• Grid computing – an extension of the clustering feature (Real Application Clusters)
• Manageability improvements (self-tuning features)
• Performance and scalability improvements
• Automated Storage Management (ASM)
• Automatic Workload Repository (AWR)
• Automatic Database Diagnostic Monitor (ADDM)
• Flashback operations available on row, transaction, table or database level
• Ability to UNDROP a table from a recycle bin
• Ability to rename tablespaces
• Ability to transport tablespaces across machine types (e.g. Windows to Unix)
• New 'drop database' statement
• New database scheduler – DBMS_SCHEDULER
• DBMS_FILE_TRANSFER package
• Support for bigfile tablespaces that are up to 8 Exabytes in size
• Data Pump – faster data movement with expdp and impdp

Oracle 9i Release 2 (9.2.0)

• Locally managed SYSTEM tablespaces
• Oracle Streams – new data sharing/replication feature (can potentially replace Oracle Advanced Replication and Standby Databases)
• XML DB (Oracle is now a standards-compliant XML database)
• Data segment compression (compress keys in tables – only when loading data)
• Cluster file system for Windows and Linux (raw devices are no longer required)
• Create logical standby databases with Data Guard
• Java JDK 1.3 used inside the database (JVM)
• Oracle Data Guard enhancements (SQL Apply mode – logical copy of primary database, automatic failover)
• Security improvements – default install accounts locked, VPD on synonyms, AES, migrate users to directory

Oracle 9i Release 1 (9.0.1) – June 2001

• Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "rollback segments" and size them automatically without any DBA involvement.

• Flashback query (dbms_flashback.enable) – one can query data as it looked at some point in the past. This feature allows users to correct wrongly committed transactions without contacting the DBA to do a database restore.

• Use Oracle Ultra Search for searching databases, file systems, etc. The Ultra Search crawler fetches data and hands it to Oracle Text to be indexed.

• Oracle Nameserver is still available but deprecated in favour of LDAP Naming (using the Oracle Internet Directory Server). A nameserver proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.

• Oracle Parallel Server's (OPS) scalability was improved – now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster. Applications don't need to be cluster aware anymore.

• The Oracle Standby DB feature was renamed to Oracle Data Guard. New Logical Standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.

• Scrolling cursor support. Oracle9i allows fetching backwards in a result set.

• Dynamic Memory Management – buffer pools and the shared pool can be resized on the fly. This eliminates the need to restart the database each time parameter changes were made.

• On-line table and index reorganization.

• VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net). VI provides fast communications between components in a cluster.

• Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.

• The Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.

• PL/SQL programs can be natively compiled to binaries.

• Deep data protection – fine-grained security and auditing. Putting security at the DB level means SQL access does not mean unrestricted access.

• Resumable backups and statements – suspend a statement instead of rolling back immediately.

• List Partitioning – partitioning on a list of values.

• ETL (eXtract, transformation, load) operations – with external tables and pipelining.

• OLAP – Express functionality included in the DB.

• Data Mining – Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)

Static HTTP server included (Apache). JVM Accelerator to improve performance of Java code. Java Server Pages (JSP) engine. MemStat – a new utility for analyzing Java memory footprints. OIS – Oracle Integration Server introduced. PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web. Enterprise Manager enhancements – including new HTML-based reporting and Advanced Replication functionality. New Database Character Set Migration utility included.

Oracle 8i (8.1.6)

PL/SQL Server Pages (PSPs). DBA Studio introduced. Statspack. New SQL functions (rank, moving average). ALTER FREELISTS command (previously done by DROP/CREATE TABLE). Checksums always on for the SYSTEM tablespace, allowing many possible corruptions to be fixed before writing to disk. XML Parser for Java. New PL/SQL encrypt/decrypt package introduced. Users and schemas separated. Numerous performance enhancements.

Oracle 8i (8.1.5)

Fast-start recovery – checkpoint rate auto-adjusted to meet roll-forward criteria. Reorganize indexes/index-only tables while users access data – online index rebuilds. LogMiner introduced – allows online or archived redo logs to be viewed via SQL. OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication. Advanced Queuing improvements (security, performance, OO4O support). User security improvements – more centralisation, single enterprise user, users/roles across multiple databases. Virtual Private Database. Java stored procedures (Oracle Java VM). Oracle iFS. Resource management using priorities – resource classes. Hash and composite partitioned table types. SQL*Loader direct load API. Copy optimizer statistics across databases to ensure the same access paths across different environments. Standby database – automatic shipping and application of redo logs; read-only queries on the standby database allowed. Enterprise Manager v2 delivered. NLS – Euro symbol supported. Analyze tables in parallel. Temporary tables supported. Net8 support for SSL, HTTP, HOP protocols. Transportable tablespaces between databases. Locally managed tablespaces – automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability. Drop column on table (finally!). DBMS_DEBUG PL/SQL package. DBMS_SQL replaced by the new EXECUTE IMMEDIATE statement. Progress monitor to track long-running DML and DDL. Functional indexes – NLS, case-insensitive, descending.

Oracle 8.0 – June 1997

Object-relational database. Object types (not just date, character, number as in v7; SQL3 standard). Call external procedures. More than one LOB per table. Partitioned tables and indexes; export/import individual partitions; partitions in multiple tablespaces; online/offline backup/recover individual partitions; merge/balance partitions. Advanced Queuing for message handling. Many performance improvements to SQL/PLSQL/OCI, making more efficient use of CPU and memory. V7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2). Parallel DML statements. Connection pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users. Improved 'STAR' query optimizer. Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM in v7). Performance improvements in OPS – global V$ views introduced across all instances, transparent failover to a new node. Data cartridges introduced in the database (e.g. image, video, context, time, spatial). Backup/recovery improvements – tablespace point-in-time recovery, incremental backups, parallel backup/recovery; Recovery Manager introduced. Security Server introduced for central user administration. User password expiry and password profiles allow a custom password scheme. Privileged database links (no need for the password to be stored). Fast refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel, Replication Manager introduced.

Index-organized tables. Deferred integrity constraint checking (deferred until end of transaction instead of end of statement). SQL*Net replaced by Net8. Reverse-key indexes. Any VIEW updateable. New ROWID format.

Oracle 7.3

Partitioned views. Bitmapped indexes. Asynchronous read-ahead for table scans. Standby database. Deferred transaction recovery on instance startup. Updatable join views (with restrictions). SQL*DBA no longer shipped. Index rebuilds. db_verify introduced. Context option. Spatial data option. Tablespace changes – coalesce, temporary, permanent. Trigger compilation and debug. Unlimited extents on the STORAGE clause. Some init.ora parameters modifiable – e.g. TIMED_STATISTICS. Hash joins, antijoins. Histograms. Dependencies. Oracle Trace. Advanced Replication object groups. PL/SQL – UTL_FILE.

Oracle 7.2

Resizable, autoextend data files. Shrink rollback segments manually. CREATE TABLE / INDEX UNRECOVERABLE. Subquery in the FROM clause. PL/SQL wrapper. PL/SQL cursor variables. Checksums – DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM. Parallel create table. Job queues – DBMS_JOB. DBMS_SPACE. DBMS Application Info. Sorting improvements – SORT_DIRECT_WRITES.

Oracle 7.1

ANSI/ISO SQL92 Entry Level. Advanced Replication – symmetric data replication. Snapshot refresh groups. Parallel recovery. Dynamic SQL – DBMS_SQL. Parallel query options – query, index creation, data loading. Server Manager introduced. Read-only tablespaces.

Oracle 7.0 – June 1992

Database integrity constraints (primary/foreign keys, check constraints, default values). Stored procedures and functions, procedure packages. Database triggers. View compilation. User-defined SQL functions. Role-based security. Multiple redo members – mirrored online redo log files. Resource limits – profiles.

Much enhanced auditing. Enhanced distributed database functionality – INSERTs, UPDATEs, DELETEs, 2PC. Incomplete database recovery (e.g. to an SCN). Cost-based optimiser. TRUNCATE tables. Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR). SQL*Net v2, MTS. Checkpoint process. Data replication – snapshots.

Oracle 6.2

Oracle Parallel Server

Oracle 6 – July 1988

Row-level locking. On-line database backups. PL/SQL in the database.

Oracle 5.1

Distributed queries

Oracle 5.0 – 1986

Support for the client-server model – PCs can access the DB on a remote host.

Oracle 4 – 1984

Read consistency

Oracle 3 – 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)

Non-blocking queries (no more read locks). Re-written in the C programming language.

Oracle 2 – 1979

First public release. Basic SQL functionality: queries and joins.

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh, by Deepak, December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh – before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a SQL file.

1. SELECT object_type, count(*) FROM dba_objects WHERE owner='SH' GROUP BY object_type;

2. Verify the total number of objects from the above query.
3. Write dynamic queries as below to generate the grant statements (see the sketch after this list):
4. SELECT 'grant ' || privilege || ' to SH;' FROM session_privs;
5. SELECT 'grant ' || role || ' to SH;' FROM session_roles;
6. Query the default tablespace and its size:
7. SELECT tablespace_name, SUM(bytes)/1024/1024 FROM dba_segments WHERE owner='SH'
GROUP BY tablespace_name;
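A minimal SQL*Plus sketch of how the grant scripts could be spooled from a DBA session (this variant uses the dictionary views dba_role_privs and dba_sys_privs instead of the session views; the spool file name is just an example):

SQL> set head off feedback off
SQL> spool sh_grants.sql
SQL> select 'grant ' || granted_role || ' to SH;' from dba_role_privs where grantee='SH';
SQL> select 'grant ' || privilege || ' to SH;' from dba_sys_privs where grantee='SH';
SQL> spool off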

Export the 'SH' schema:

exp username/password file='/location/sh_bkp.dmp' log='/location/sh_exp.log' owner=SH direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file='/location/sh_bkp.dmp' log='/location/sh_imp.log'
fromuser=SH touser=SH

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SH' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)
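After compiling, a quick way to confirm that nothing was left invalid (a simple check, not part of the original steps) is:

SQL> SELECT object_name, object_type FROM dba_objects WHERE owner='SH' AND status='INVALID';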

Now connect as the SH user and check the imported data.

Schema refresh by dropping or truncating objects

Export the 'SH' schema

Take a full schema export as shown above.

Drop all the objects in the 'SH' schema

To drop all the objects in the schema:

Connect as the schema owner.

Spool the output:

SQL> set head off

SQL> spool drop_tables.sql

SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;

SQL> spool off

SQL> set head off

SQL> spool drop_other_objects.sql

SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;

SQL> spool off

Now run the scripts; all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file='/location/sh_bkp.dmp' log='/location/sh_imp.log'
fromuser=SH touser=SH

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SH' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema

To truncate all the objects in the schema:

Connect as the schema owner.

Spool the output:

SQL> set head off

SQL> spool truncate_tables.sql

SQL> select 'truncate table '||table_name||';' from user_tables;

SQL> spool off

SQL> set head off

SQL> spool truncate_other_objects.sql

SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;

SQL> spool off

Now run the scripts; all the objects will be truncated.

Disabling the reference constraints

If there is a constraint violation while truncating, use the query below to find the referencing (foreign key) constraints and disable them. Spool the output of the query and run the script (a sketch of the disable script follows the query).

SELECT constraint_name, constraint_type, table_name FROM all_constraints
WHERE constraint_type='R'
AND r_constraint_name IN (SELECT constraint_name FROM all_constraints
WHERE table_name='TABLE_NAME');
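A minimal sketch of how the disable script might be generated (the spool file name is an example; replace TABLE_NAME with the table being truncated):

SQL> set head off
SQL> spool disable_ref_constraints.sql
SQL> select 'alter table '||table_name||' disable constraint '||constraint_name||';'
  from all_constraints
  where constraint_type='R'
  and r_constraint_name in (select constraint_name from all_constraints where table_name='TABLE_NAME');
SQL> spool off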

Importing the 'SH' schema:

imp username/password file='/location/sh_bkp.dmp' log='/location/sh_imp.log'
fromuser=SH touser=SH

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SH' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> DROP USER sh CASCADE;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the REMAP_SCHEMA option, for example:
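For instance, to load the SH dump into a hypothetical SH_TEST schema:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=sh:sh_test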

Check the imported objects and compile any invalid objects.


JOB SCHEDULING

Filed under: JOB SCHEDULING, by Deepak, December 15, 2009

CRON JOB SCHEDULING IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d /etc/cron.daily /etc/cron.hourly /etc/cron.monthly /etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time at which the scripts in those system-wide directories run is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file, you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l     # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military (24-hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sunday, or use the name)
6. The command to run

More graphically, they would look like this:

*     *     *     *     *     command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- day of week (0 - 7)
|     |     |     +------- month (1 - 12)
|     |     +--------- day of month (1 - 31)
|     +----------- hour (0 - 23)
+------------- min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system, unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected, as follows:

0 * * * * /bin/ls > /dev/null 2>&1

This causes all output to be redirected to /dev/null – meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers, you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1 AM, 2 AM, 3 AM and 4 AM:

# Use a range of hours, matching 1, 2, 3 and 4 AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4 AM
* 1,2,3,4 * * * /bin/some-hourly
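As a database-related example, a nightly schema export script (the path /home/oracle/scripts/exp_sh.sh is hypothetical) could be scheduled for 1 AM with its output captured in a log file:

# Run the schema export script every night at 1 AM
0 1 * * * /home/oracle/scripts/exp_sh.sh > /tmp/exp_sh.log 2>&1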

JOB SCHEDULING IN WINDOWS

Cold backup – scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp\coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on 'Add a scheduled task'.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run, so edit the scheduled task and enter the new password.


Steps to switch over a standby to primary

Filed under: Switchover primary to standby in 10g, by Deepak, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your test systems before working on production.

2. Verify that the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on both the primary and standby databases to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect sys/password@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect sys/password@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
- If you are using Oracle Database 10g release 2, you can open the new primary database STAN directly:
SQL> ALTER DATABASE OPEN;

STAN has now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
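To confirm the roles really have swapped, a quick check on each database (not part of the original steps, but standard in 10g) is:

SQL> select name, database_role, open_mode from v$database;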


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump, by Deepak, December 14, 2009

Encryption with Oracle Data Pump

– from an Oracle white paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature, which enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature therefore remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows:

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows:

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU-intensive operations. Furthermore, additional disk I/O is incurred due to the space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key; Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. If the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file; in such a case, a warning message (ORA-39173) is displayed, as shown in a later example.

Export: Release 10.2.0.4.0 – Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open
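To resume exporting after this error, the wallet can simply be reopened; a typical Oracle Database 10g release 2 command, assuming the same wallet password used earlier, is:

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";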

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table-mode exports, as used in the previous examples. If a schema, tablespace, or full-mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 – Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job of pinpointing the problem via the information in the ORA-39929 error:

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data; otherwise an 'ORA-28365: wallet is not open' error is returned. Note that the wallet on the target database does not need to contain the same master key as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 – Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 – Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP".SALARY encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

- A full metadata-only import
- A schema-mode import in which the referenced schemas do not include tables with encrypted columns
- A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
  empid,
  empname,
  salary ENCRYPT IDENTIFIED BY "column_pwd")
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dpump_dir
  LOCATION ('xemp.dmp')
)
REJECT LIMIT UNLIMITED
AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database; in that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

SQL> SELECT * FROM DP.XEMP;

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g, by Deepak, December 14, 2009

DATAPUMP IN ORACLE

For using Data Pump through DB Console, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import.

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but they are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export.

Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

- Make sure your system is well balanced across CPU, memory and I/O.

- Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

- Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

- For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

- REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

- REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

- See the status of the job. All of the information needed to monitor the job's execution is available.

- Add more dump files if there is insufficient disk space for an export file.
- Change the default size of the dump files.
- Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).
- Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

- Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

- Attach to a job from a remote site (such as from home) to monitor status.
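A short sketch of how attaching works (the job name hr matches the earlier PARALLEL example; STATUS, STOP_JOB and START_JOB are standard Data Pump interactive commands):

> expdp username/password ATTACH=hr
Export> STATUS
Export> STOP_JOB=IMMEDIATE

(later, from any client that can reach the database)

> expdp username/password ATTACH=hr
Export> START_JOB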

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, such as from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.

Generating SQLFILEs

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and includes the SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN, by Deepak, December 10, 2009

Clone database using RMAN

Target DB: test

Clone DB: clone

On the target database:

1. Take a full backup using RMAN:

SQL> archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 – Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=17 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=1 stamp=674240935
input archive log thread=1 sequence=15 recid=2 stamp=674240997
input archive log thread=1 sequence=16 recid=3 stamp=674242208
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=17 recid=4 stamp=674242270
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE
Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

      DBID
----------
1972233550

On the clone database:

1. Create the service and the password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for a database.

2. Edit the pfile and add the following parameters (a concrete example follows):

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
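For example, with the layout used in this post (the drive letter and directory names are illustrative; adjust them to your own environment):

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'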

3. Start the listener using the lsnrctl command and then start the clone DB in NOMOUNT using the pfile:

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN:

5. RMAN> connect target sys/sys@test   (target DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'   (clone DB name)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running...

SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8it will run for a while and exit from rman and open the database using reset logs

SQLgt alter database open resetlogs

Database altered

9. Check the DBID.

10. Create a temporary tablespace.

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


Step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g, by Deepak, December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips for the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use a Flash Recovery Area and OMF.

I. Before you get started:

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases, and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first before working on the Production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check whether a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$cd <ORACLE_HOME>\database
$orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$cd $ORACLE_HOME/dbs
$orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, so I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

5. Set the PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='...\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '...')
- On UNIX:
SQL> create pfile='.../dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '...')

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE; create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='...\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='...\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup
(Note: specify your Oracle home path to replace '...')
- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='.../dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='.../dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup
(Note: specify your Oracle home path to replace '...')

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump and udump directories and the archived log destination for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.
1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$lsnrctl stop
$lsnrctl start
2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$lsnrctl stop
$lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY
2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database:

Set up ORACLE_HOME and ORACLE_SID.

11. Start up (nomount) the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='...\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='...\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;
- On UNIX:
SQL> startup nomount pfile='.../dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='.../dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your Oracle home path to replace '...')

12. Start redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;
2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;
3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


always created at database creation and upon upgrade to 10g. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle features and products that previously required their own tablespaces, it reduces the number of tablespaces required by Oracle that you, as a DBA, must maintain. See Chapter 2, "Creating a Database", in the Oracle Database Administrator's Guide.

In 10g, by default, all new databases are created with 10g file format compatibility. This means you can immediately use all the 10g features. Once a database uses 10g-compatible file formats, it is not possible to downgrade this database to prior releases.

Minimum and default logfile sizes are larger: the minimum is now 4 MB and the default is 50 MB, unless you are using Oracle Managed Files (OMF), when it is 100 MB.

PLSQL procedure successfully completed

SQL> archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\oradata\test\archive

Oldest online log sequence 91

Next log sequence to archive 93

Current log sequence 93

SQL> shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQL> exit

Back up the complete database (cold backup).
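The copy commands themselves are not shown in this post; a minimal sketch of a Windows cold backup, assuming the datafile and software locations used above, the default Windows file names for the spfile and password file, and a backup destination of D:\coldbkp (an assumption), would be:

REM run only while the database and listener are stopped
xcopy /E /Y C:\oracle\oradata\test D:\coldbkp\test\
copy C:\oracle\ora92\database\spfiletest.ora D:\coldbkp\test\
copy C:\oracle\ora92\database\pwdtest.ora D:\coldbkp\test\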

Step 2

Check the space needed, stop the listener, and delete the SID.

C:\Documents and Settings\Administrator>set oracle_sid=test

C:\Documents and Settings\Administrator>sqlplus /nolog

SQL*Plus: Release 9.2.0.1.0 - Production on Sat Aug 22 21:36:52 2009

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

SQL> conn / as sysdba

Connected to an idle instance

SQLgt startup

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

Database mounted

Database opened

SQL> desc sm$ts_avail

Name                            Null?    Type
------------------------------- -------- ----------------
TABLESPACE_NAME                          VARCHAR2(30)
BYTES                                    NUMBER

SQL> select * from sm$ts_avail;

TABLESPACE_NAME BYTES

------------------------------ ----------

CWMLITE 20971520

DRSYS 20971520

EXAMPLE 155975680

INDX 26214400

ODM 20971520

SYSTEM 419430400

TOOLS 10485760

UNDOTBS1 209715200

USERS 26214400

XDB 39976960

10 rows selected

SQL> select * from sm$ts_used;

TABLESPACE_NAME BYTES

------------------------------ ----------

CWMLITE 9764864

DRSYS 10092544

EXAMPLE 155779072

ODM 9699328

SYSTEM 414908416

TOOLS 6291456

UNDOTBS1 9814016

XDB 39714816

8 rows selected

SQL> select * from sm$ts_free;

TABLESPACE_NAME BYTES

------------------------------ ----------

CWMLITE 11141120

DRSYS 10813440

EXAMPLE 131072

INDX 26148864

ODM 11206656

SYSTEM 4456448

TOOLS 4128768

UNDOTBS1 199753728

USERS 26148864

XDB 196608

10 rows selected

SQL> ho LSNRCTL

LSNRCTL> start

Starting tnslsnr: please wait…

Failed to open service <OracleoracleTNSListener>, error 1060.

TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production

System parameter file is C:\oracle\ora92\network\admin\listener.ora

Log messages written to C:\oracle\ora92\network\log\listener.log

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
Start Date                22-AUG-2009 22:00:00

Uptime 0 days 0 hr 0 min 16 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora

Listener Log File         C:\oracle\ora92\network\log\listener.log

Listening Endpoints Summary…

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summary…

Service "TEST" has 1 instance(s).

Instance "TEST", status UNKNOWN, has 1 handler(s) for this service…

The command completed successfully

LSNRCTL> stop

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

LSNRCTL> start

Starting tnslsnr: please wait…

TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production

System parameter file is C:\oracle\ora92\network\admin\listener.ora

Log messages written to C:\oracle\ora92\network\log\listener.log

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
Start Date                22-AUG-2009 22:00:48
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora
Listener Log File         C:\oracle\ora92\network\log\listener.log

Listening Endpoints Summary…

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summary…

Service "TEST" has 1 instance(s).

Instance "TEST", status UNKNOWN, has 1 handler(s) for this service…

The command completed successfully

LSNRCTL> exit

SQL> shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQL> exit

Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

C:\Documents and Settings\Administrator>lsnrctl stop

LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 22-AUG-2009 22:03:14

Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

C:\Documents and Settings\Administrator>oradim -delete -sid test

Step 3

Install the Oracle 10g software in a different Oracle Home.

Starting the database with the 10g instance and the upgrade process.
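On Windows the 10g instance also needs its own Windows service before it can be started from the new home; this step is not shown in the original post, so here is a minimal sketch (assuming the same SID, test, and a manual start mode), run from the 10g Oracle Home:

oradim -NEW -SID test -STARTMODE manual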

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649';

File created

SQL> shut immediate

ORA-01507 database not mounted

ORACLE instance shut down

SQL> startup upgrade

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

ORA-01990: error opening password file (create password file)

SQL> conn / as sysdba

Connected.

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace creation script, as shown below.)

create tablespace SYSAUX datafile 'sysaux01.dbf'
size 70M reuse
extent management local
segment space management auto
online;

Tablespace created

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database server version is not correct for this script

DOCgt Shutdown ABORT and use a different script or a different server

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database has not been opened for UPGRADE

DOCgt

DOCgt Perform a ldquoSHUTDOWN ABORTrdquo and

DOCgt restart using UPGRADE

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statements will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the SYSAUX tablespace does not exist or is not

DOCgt ONLINE for READ WRITE PERMANENT EXTENT MANAGEMENT LOCAL and

DOCgt SEGMENT SPACE MANAGEMENT AUTO

DOCgt

DOCgt The SYSAUX tablespace is used in 101 to consolidate data from

DOCgt a number of tablespaces that were separate in prior releases

DOCgt Consult the Oracle Database Upgrade Guide for sizing estimates

DOCgt

DOCgt Create the SYSAUX tablespace for example

DOCgt

DOCgt create tablespace SYSAUX datafile lsquosysaux01dbfrsquo

DOCgt size 70M reuse

DOCgt extent management local

DOCgt segment space management auto

DOCgt online

DOCgt

DOCgt Then rerun the u0902000sql script

DOCgt

DOCgt

DOCgt

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered

Session altered

The script will run for a time that depends on the size of the database…

All packages, scripts and synonyms will be upgraded.

At the end it will show messages such as the following:

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

1 row selected

PLSQL procedure successfully completed

COMP_ID    COMP_NAME                            STATUS   VERSION
---------- ------------------------------------ -------- ----------
CATALOG    Oracle Database Catalog Views        VALID    10.1.0.2.0
CATPROC    Oracle Database Packages and Types   VALID    10.1.0.2.0
JAVAVM     JServer JAVA Virtual Machine         VALID    10.1.0.2.0
XML        Oracle XDK                           VALID    10.1.0.2.0
CATJAVA    Oracle Database Java Packages        VALID    10.1.0.2.0
XDB        Oracle XML Database                  VALID    10.1.0.2.0
OWM        Oracle Workspace Manager             VALID    10.1.0.2.0
ODM        Oracle Data Mining                   VALID    10.1.0.2.0
APS        OLAP Analytic Workspace              VALID    10.1.0.2.0
AMD        OLAP Catalog                         VALID    10.1.0.2.0
XOQ        Oracle OLAP API                      VALID    10.1.0.2.0
ORDIM      Oracle interMedia                    VALID    10.1.0.2.0
SDO        Spatial                              VALID    10.1.0.2.0
CONTEXT    Oracle Text                          VALID    10.1.0.2.0
WK         Oracle Ultra Search                  VALID    10.1.0.2.0

15 rows selected.

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP DBUPG_END   2009-08-22 22:59:09

1 row selected

SQL> shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQL> startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------

776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PLSQL procedure successfully completed

Oracle Database 10.1 Upgrade Status Tool    22-AUG-2009 11:18:36

-> Oracle Database Catalog Views        Normal successful completion
-> Oracle Database Packages and Types   Normal successful completion
-> JServer JAVA Virtual Machine         Normal successful completion
-> Oracle XDK                           Normal successful completion
-> Oracle Database Java Packages        Normal successful completion
-> Oracle XML Database                  Normal successful completion
-> Oracle Workspace Manager             Normal successful completion
-> Oracle Data Mining                   Normal successful completion
-> OLAP Analytic Workspace              Normal successful completion
-> OLAP Catalog                         Normal successful completion
-> Oracle OLAP API                      Normal successful completion
-> Oracle interMedia                    Normal successful completion
-> Spatial                              Normal successful completion
-> Oracle Text                          Normal successful completion
-> Oracle Ultra Search                  Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_BGN   2009-08-22 23:19:07

1 row selected

PLSQL procedure successfully completed

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_END   2009-08-22 23:20:13

1 row selected

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------

0

1 row selected

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod
PL/SQL Release 10.1.0.2.0 - Production
CORE    10.1.0.2.0      Production
TNS for 32-bit Windows: Version 10.1.0.2.0 - Production
NLSRTL Version 10.1.0.2.0 - Production

5 rows selected

Check the database to confirm that everything is working fine.
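A quick post-upgrade sanity check can be run at this point; this is a sketch using standard dictionary views (assuming the upgrade completed cleanly, all components should be VALID and there should be no invalid objects):

SQL> select comp_id, comp_name, status, version from dba_registry;
SQL> select count(*) from dba_objects where status='INVALID';
SQL> select * from v$version;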


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database, using backups taken from RMAN, on an alternate host, by Deepak, February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink note 732624.1

Hi,

Just wanted to share this topic.

How to duplicate a database without connecting to the target database, using backups taken from RMAN, on an alternate host.

Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as on production>.

Create an init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backup piece of the controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure the backup pieces are in the same location as they were on the production DB. If you don't have the same location, make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged using the command:

RMAN> catalog start with '<path where the backup pieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:

run {
set newname for datafile 1 to '<newLocation>/system.dbf';
set newname for datafile 2 to '<newLocation>/undotbs.dbf';
...
restore database;
switch datafile all;
}
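Putting the steps together, a sketch of the whole session on the alternate host (the SID PROD, the pfile path, the backup piece location D:\rmanbkp and the new datafile location D:\clonedata are all hypothetical):

C:\> set ORACLE_SID=PROD
C:\> rman target /
RMAN> startup nomount pfile='D:\initPROD.ora';
RMAN> restore controlfile from 'D:\rmanbkp\<controlfile backup piece>';
RMAN> alter database mount;
RMAN> catalog start with 'D:\rmanbkp';
RMAN> run {
        set newname for datafile 1 to 'D:\clonedata\system01.dbf';
        set newname for datafile 2 to 'D:\clonedata\undotbs01.dbf';
        restore database;
        switch datafile all;
      }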


Features introduced in the various Oracle server releases

Filed under: Features Of Various Releases of Oracle Database, by Deepak, February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) - September 2005

Transparent Data Encryption. Asynchronous commits. The CONNECT role can now only connect. Passwords for DB links are encrypted. New asmcmd utility for managing ASM storage.

Oracle 10g Release 1 (10.1.0)

Grid computing - an extension of the clustering feature (Real Application Clusters). Manageability improvements (self-tuning features).

Performance and scalability improvements. Automatic Storage Management (ASM). Automatic Workload Repository (AWR). Automatic Database Diagnostic Monitor (ADDM). Flashback operations available at the row, transaction, table or database level. Ability to UNDROP a table from a recycle bin. Ability to rename tablespaces. Ability to transport tablespaces across machine types (e.g. Windows to Unix). New 'drop database' statement. New database scheduler - DBMS_SCHEDULER. DBMS_FILE_TRANSFER package. Support for bigfile tablespaces, that is, up to 8 exabytes in size. Data Pump - faster data movement with expdp and impdp.

Oracle 9i Release 2 (9.2.0)

Locally Managed SYSTEM tablespaces Oracle Streams ndash new data sharingreplication feature (can potentially replace Oracle

Advance Replication and Standby Databases) XML DB (Oracle is now a standards compliant XML database) Data segment compression (compress keys in tables ndash only when loading data) Cluster file system for Windows and Linux (raw devices are no longer required) Create logical standby databases with Data Guard Java JDK 13 used inside the database (JVM) Oracle Data Guard Enhancements (SQL Apply mode ndash logical copy of primary database

automatic failover Security Improvements ndash Default Install Accounts locked VPD on synonyms AES

Migrate Users to Directory

Oracle 9i Release 1 (9.0.1) - June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU) Using SMU Oracle will create itrsquos own ldquoRollback Segmentsrdquo and size them automatically without any DBA involvement

Flashback query (dbms_flashbackenable) ndash one can query data as it looked at some point in the past This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore

Use Oracle Ultra Search for searching databases file systems etc The UltraSearch crawler fetch data and hand it to Oracle Text to be indexed

Oracle Nameserver is still available but deprecate in favour of LDAP Naming (using the Oracle Internet Directory Server) A nameserver proxy is provided for backwards compatibility as pre-8i client cannot resolve names from an LDAP server

Oracle Parallel Serverrsquos (OPS) scalability was improved ndash now called Real Application Clusters (RAC) Full Cache Fusion implemented Any application can scale in a database cluster Applications doesnrsquot need to be cluster aware anymore

The Oracle Standby DB feature renamed to Oracle Data Guard New Logical Standby databases replay SQL on standby site allowing the database to be used for normal read write operations The Data Guard Broker allows single step fail-over when disaster strikes

Scrolling cursor support Oracle9i allows fetching backwards in a result set Dynamic Memory Management ndash Buffer Pools and shared pool can be resized on-the-fly

This eliminates the need to restart the database each time parameter changes were made On-line table and index reorganization VI (Virtual Interface) protocol support an alternative to TCPIP available for use with

Oracle Net (SQLNet) VI provides fast communications between components in a cluster

Build in XML Developers Kit (XDK) New data types for XML (XMLType) URIrsquos etc XML integrated with AQ

Cost Based Optimizer now also consider memory and CPU not only disk access cost as before

PLSQL programs can be natively compiled to binaries Deep data protection ndash fine grained security and auditing Put security on DB level SQL

access do not mean unrestricted access Resumable backups and statements ndash suspend statement instead of rolling back

immediately List Partitioning ndash partitioning on a list of values ETL (eXtract transformation load) Operations ndash with external tables and pipelining OLAP ndash Express functionality included in the DB Data Mining ndash Oracle Darwinrsquos features included in the DB

Oracle 8i (8.1.7)

Static HTTP server included (Apache) JVM Accelerator to improve performance of Java code Java Server Pages (JSP) engine MemStat ndash A new utility for analyzing Java Memory footprints OIS ndash Oracle Integration Server introduced PLSQL Gateway introduced for deploying PLSQL based solutions on the Web Enterprise Manager Enhancements ndash including new HTML based reporting and

Advanced Replication functionality included New Database Character Set Migration utility included

Oracle 8i (8.1.6)

PLSQL Server Pages (PSPrsquos) DBA Studio Introduced Statspack New SQL Functions (rank moving average) ALTER FREELISTS command (previously done by DROPCREATE TABLE) Checksums always on for SYSTEM tablespace allowing many possible corruptions to be

fixed before writing to disk

XML Parser for Java New PLSQL encryptdecrypt package introduced User and Schemas separated Numerous Performance Enhancements

Oracle 8i (8.1.5)

Fast Start recovery ndash Checkpoint rate auto-adjusted to meet roll forward criteria Reorganize indexesindex only tables which users accessing data ndash Online index rebuilds Log Miner introduced ndash Allows on-line or archived redo logs to be viewed via SQL OPS Cache Fusion introduced avoiding disk IO during cross-node communication Advanced Queueing improvements (security performance OO4O support User Security Improvements ndash more centralisation single enterprise user usersroles

across multiple databases Virtual private database JAVA stored procedures (Oracle Java VM) Oracle iFS Resource Management using priorities ndash resource classes Hash and Composite partitioned table types SQLLoader direct load API Copy optimizer statistics across databases to ensure same access paths across different

environments Standby Database ndash Auto shipping and application of redo logs Read Only queries on

standby database allowed Enterprise Manager v2 delivered NLS ndash Euro Symbol supported Analyze tables in parallel Temporary tables supported Net8 support for SSL HTTP HOP protocols Transportable tablespaces between databases Locally managed tablespaces ndash automatic sizing of extents elimination of tablespace

fragmentation tablespace information managed in tablespace (ie moved from data dictionary) improving tablespace reliability

Drop Column on table (Finally ) DBMS_DEBUG PLSQL package DBMS_SQL replaced by new EXECUTE

IMMEDIATE statement Progress Monitor to track long running DML DDL Functional Indexes ndash NLS case insensitive descending

Oracle 8.0 - June 1997

Object Relational database Object Types (not just date character number as in v7 SQL3 standard Call external procedures LOB gt1 per table

Partitioned Tables and Indexes exportimport individual partitions partitions in multiple tablespaces Onlineoffline backuprecover individual partitions mergebalance partitions Advanced Queuing for message handling Many performance improvements to SQLPLSQLOCI making more efficient use of

CPUMemory V7 limits extended (eg 1000 columnstable 4000 bytes VARCHAR2) Parallel DML statements Connection Pooling ( uses the physical connection for idle users and transparently re-

establishes the connection when needed) to support more concurrent users Improved ldquoSTARrdquo Query optimizer Integrated Distributed Lock Manager in Oracle PS (as opposed to Operating system DLM

in v7) Performance improvements in OPS ndash global V$ views introduced across all instances

transparent failover to a new node Data Cartridges introduced on database (eg image video context time spatial) BackupRecovery improvements ndash Tablespace point in time recovery incremental

backups parallel backuprecovery Recovery manager introduced Security Server introduced for central user administration User password expiry

password profiles allow custom password scheme Privileged database links (no need for password to be stored)

Fast Refresh for complex snapshots parallel replication PLSQL replication code moved in to Oracle kernel Replication manager introduced

Index Organized tables Deferred integrity constraint checking (deferred until end of transaction instead of end of

statement) SQLNet replaced by Net8 Reverse Key indexes Any VIEW updateable New ROWID format

Oracle 7.3

Partitioned Views Bitmapped Indexes Asynchronous read ahead for table scans Standby Database Deferred transaction recovery on instance startup Updatable Join Views (with restrictions) SQLDBA no longer shipped Index rebuilds db_verify introduced Context Option Spatial Data Option Tablespaces changes ndash Coalesce Temporary Permanent

Trigger compilation debug Unlimited extents on STORAGE clause Some initora parameters modifiable ndash TIMED_STATISTICS HASH Joins Antijoins Histograms Dependencies Oracle Trace Advanced Replication Object Groups PLSQL ndash UTL_FILE

Oracle 7.2

Resizable autoextend data files Shrink Rollback Segments manually Create table index UNRECOVERABLE Subquery in FROM clause PLSQL wrapper PLSQL Cursor variables Checksums ndash DB_BLOCK_CHECKSUM LOG_BLOCK_CHECKSUM Parallel create table Job Queues ndash DBMS_JOB DBMS_SPACE DBMS Application Info Sorting Improvements ndash SORT_DIRECT_WRITES

Oracle 7.1

ANSIISO SQL92 Entry Level Advanced Replication ndash Symmetric Data replication Snapshot Refresh Groups Parallel Recovery Dynamic SQL ndash DBMS_SQL Parallel Query Options ndash query index creation data loading Server Manager introduced Read Only tablespaces

Oracle 7.0 - June 1992

Database Integrity Constraints (primary foreign keys check constraints default values) Stored procedures and functions procedure packages Database Triggers View compilation User defined SQL functions Role based security Multiple Redo members ndash mirrored online redo log files Resource Limits ndash Profiles

Much enhanced Auditing Enhanced Distributed database functionality ndash INSERTS UPDATESDELETES 2PC Incomplete database recovery (eg SCN) Cost based optimiser TRUNCATE tables Datatype changes (ie VARCHAR2 CHAR VARCHAR) SQLNet v2 MTS Checkpoint process Data replication ndash Snapshots

Oracle 6.2

Oracle Parallel Server

Oracle 6 - July 1988

Row-level locking On-line database backups PLSQL in the database

Oracle 5.1

Distributed queries

Oracle 5.0 - 1986

Support for the client-server model - PCs can access the DB on a remote host.

Oracle 4 - 1984

Read consistency

Oracle 3 - 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)

Nonblocking queries (no more read locks) Re-written in the C Programming Language

Oracle 2 - 1979

First public release Basic SQL functionality queries and joins


Schema Referesh

Filed under: Schema refresh, by Deepak, December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file.

1. SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;

2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege || ' to sh' from session_privs;
5. select 'grant ' || role || ' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH'
   group by tablespace_name;
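A sketch of how the spooling could be done in SQL*Plus while connected as SH (the file name sh_grants.sql is an assumption):

SQL> set head off feedback off
SQL> spool sh_grants.sql
SQL> select 'grant ' || privilege || ' to sh;' from session_privs;
SQL> select 'grant ' || role || ' to sh;' from session_roles;
SQL> spool off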

Export the 'SH' schema:

exp username/password file='<location>/sh_bkp.dmp' log='<location>/sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file='<location>/sh_bkp.dmp' log='<location>/sh_imp.log'
fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping or truncating objects

Export the 'SH' schema:

Take a full schema export as shown above.

Drop all the objects in the 'SH' schema

To drop all the objects in the schema, connect to the schema and spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the spooled scripts; all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file='<location>/sh_bkp.dmp' log='<location>/sh_imp.log'
fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable the constraints again, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema

To truncate all the objects in the schema, connect to the schema and spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the spooled scripts; all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the referencing foreign key constraints and disable them. Spool the output of the query and run the generated script (a sketch of generating the DISABLE statements follows).

select constraint_name, constraint_type, table_name FROM all_constraints
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');
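A sketch of generating the DISABLE statements directly from that query (the spool file name disable_fks.sql is an assumption, and 'TABLE_NAME' stands for the table being truncated):

SQL> set head off feedback off
SQL> spool disable_fks.sql
SQL> select 'alter table '||owner||'.'||table_name||' disable constraint '||constraint_name||';'
     from all_constraints
     where constraint_type='R'
     and r_constraint_name in (select constraint_name from all_constraints where table_name='TABLE_NAME');
SQL> spool off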

Importing the 'SH' schema:

imp username/password file='<location>/sh_bkp.dmp' log='<location>/sh_imp.log'
fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user:

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option, as in the sketch below.
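A sketch of such an import (the target schema SH_TEST is hypothetical):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh remap_schema=sh:sh_test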

Check the imported objects and compile any invalid objects.


JOB SCHEDULING

Filed under: JOB SCHEDULING, by Deepak, December 15, 2009

CRON JOB SCHEDULING - IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time at which the scripts in those system-wide directories run is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l    # list skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sunday, or use the name)
6. The command to run

More graphically they would look like this

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0-7)
|     |     |     +----------- Month (1-12)
|     |     +----------------- Day of month (1-31)
|     +----------------------- Hour (0-23)
+----------------------------- Min (0-59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour, on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1 AM, 2 AM, 3 AM and 4 AM:

# Use a range of hours, matching 1, 2, 3 and 4 AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4 AM
* 1,2,3,4 * * * /bin/some-hourly
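In a DBA context the same mechanism can drive database housekeeping jobs; a sketch (the script path /home/oracle/scripts/nightly_exp.sh is an assumption):

# Run a nightly schema export at 01:30 every day
30 1 * * * /home/oracle/scripts/nightly_exp.sh >/dev/null 2>&1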

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file, cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp\coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run, so edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g, by Deepak, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM
Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your testing systems before working on Production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following command on the Primary database and the Standby database to find out:
SQL> select sequence#, applied from v$archived_log;
Perform SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use Real-time apply.

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby DB STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
- If you are using Oracle Database 10g release 2, you can open the new primary database STAN directly:
SQL> ALTER DATABASE OPEN;

STAN has now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
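A quick way to confirm the new roles after the switchover is to query v$database on each instance (a sketch; both columns exist in 10g):

SQL> select name, database_role, switchover_status from v$database;
SQL> select sequence#, applied from v$archived_log order by sequence#;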


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump, by Deepak, December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature, which enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns, for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature therefore remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU-intensive operations. Furthermore, additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2, the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet.

Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");
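(An optional check, not part of the original paper's steps: before exporting, the standard DBA_ENCRYPTED_COLUMNS dictionary view can be queried to confirm which columns are protected by TDE.)

SQL> SELECT owner, table_name, column_name, encryption_alg FROM dba_encrypted_columns;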

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although the ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in a later example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET ENCRYPTION WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 – Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 – Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP" 6.25 KB 3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
/ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error:

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
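(A minimal sketch of opening the wallet on the target before running such an import; the wallet password shown is an assumption carried over from the earlier ALTER SYSTEM example.)

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";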

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 – Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote
TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 – Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP".SALARY encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption. A sketch of such an import follows.
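(A minimal sketch, repeating the earlier network-mode command with the ENCRYPTION_PASSWORD parameter removed; the "remote" database link name is the same assumption used in that example.)

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND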

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed (an example follows the list):

• A full metadata-only import

• A schema-mode import in which the referenced schemas do not include tables with encrypted columns

• A table-mode import in which the referenced tables do not include encrypted columns
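(For instance, a full metadata-only import of the earlier dump file needs neither the password nor an open wallet; a minimal sketch, where the dump file name and directory are carried over from the earlier examples and CONTENT=METADATA_ONLY is the standard Data Pump keyword.)

$ impdp system/password DIRECTORY=dpump_dir DUMPFILE=emp.dmp FULL=y CONTENT=METADATA_ONLY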

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified. This is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
  empid,
  empname,
  salary ENCRYPT IDENTIFIED BY "column_pwd")
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dpump_dir
  LOCATION ('xemp.dmp')
)
REJECT LIMIT UNLIMITED
AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
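(Conversely, to encrypt the whole dump file set in 11g rather than just the TDE columns, the ENCRYPTION parameter accepts the keyword ALL; a minimal sketch, with the dump file name being an assumption.)

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp_all.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ALL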


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

httpwwworaclecomtechnologyobeobe10gdbstoragedatapumpdatapumphtm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called DATA_PUMP_DIR is provided. The default DATA_PUMP_DIR is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=()

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file, as in the sketch below.
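(A minimal sketch under assumed names: hr.dmp is a hypothetical dump file, and the filters simply illustrate one EXCLUDE at export time and a different INCLUDE at import time against the same file.)

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr EXCLUDE=STATISTICS
> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp INCLUDE=TABLE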

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with the older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can also take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from the one it was originally exported from. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.

A short sketch of attaching to a running job follows.
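(A minimal sketch, assuming an export job was started with JOB_NAME=hr as in the parallelism example; STATUS, PARALLEL and EXIT_CLIENT are standard Data Pump interactive-mode commands.)

> expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> EXIT_CLIENT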

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, such as from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database. A sketch of a network-mode export is shown below.
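(A minimal sketch under assumed names: standby_db is a hypothetical database link pointing at the remote, read-only instance, and hr_remote.dmp is written on the instance where the job runs.)

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_remote.dmp SCHEMAS=hr NETWORK_LINK=standby_db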

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak, December 10, 2009

Clone database using RMAN

Target db: test
Clone db: clone

In the target database:

1. Take a full backup using RMAN.

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQLgt ho rman

Recovery Manager Release 92010 ndash Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'
log_file_name_convert='<target db oradata path>','<clone db oradata path>'
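(A concrete sketch, assuming the target datafiles live under C:\oracle\oradata\test and the clone's under C:\oracle\oradata\clone; substitute your own paths.)

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'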

3. Start the listener using the lsnrctl command and then start up the clone DB in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be runninghellip

SQL> select name from v$database;
select name from v$database
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
ERROR at line 1:
ORA-01100: database already mounted

8. The duplicate will run for a while; then exit from RMAN and open the database using RESETLOGS.

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace (a sketch follows the query output below).

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550
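(A minimal sketch of step 10, assuming the clone's files live under C:\oracle\oradata\clone; the tablespace and file names are placeholders.)

SQL> CREATE TEMPORARY TABLESPACE temp01 TEMPFILE 'C:\oracle\oradata\clone\temp01.dbf' SIZE 100M AUTOEXTEND ON;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp01;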


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak, December 9, 2009

Oracle 10g ndash Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY database on Windows and UNIX servers, and maintenance tips for databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use the Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server, and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases, and that the Oracle home paths are identical.

3. Test the STANDBY database creation in a test environment first before working on the production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$cd %ORACLE_HOME%\database
$orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$cd $ORACLE_HOME/dbs
$orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '\'.)

- On UNIX:
SQL> create pfile='/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '/'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='\database\pfilePRIMARY.ora';
Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup;
(Note: specify your Oracle home path to replace '\'.)

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='/dbs/pfilePRIMARY.ora';
Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup;
(Note: specify your Oracle home path to replace '/'.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations. Create the adump, bdump, cdump and udump directories and the archived log destination for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$lsnrctl stop
$lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$lsnrctl stop
$lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='\database\pfileSTANDBY.ora';
Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='/dbs/pfileSTANDBY.ora';
Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your Oracle home path to replace '\' or '/'.)

12. Start redo apply.
1) On the STANDBY database, start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management.
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly, otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update or recreate the password file for the STANDBY database.


SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt exit

Back up the complete database (cold backup), for example as sketched below.
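(A minimal sketch of a cold backup on Windows, assuming the 9i datafiles live under C:\oracle\oradata\test and D:\backup\test is a scratch destination; adjust the paths to your layout.)

C:\> xcopy C:\oracle\oradata\test\*.* D:\backup\test\oradata\ /E /I
C:\> copy C:\oracle\ora92\database\spfiletest.ora D:\backup\test\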

Step 2

Check the space needed, stop the listener, and delete the SID.

C:\Documents and Settings\Administrator>set oracle_sid=test

C:\Documents and Settings\Administrator>sqlplus /nolog

SQL*Plus: Release 9.2.0.1.0 – Production on Sat Aug 22 21:36:52 2009

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

SQL> conn / as sysdba

Connected to an idle instance

SQLgt startup

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

Database mounted

Database opened

SQL> desc sm$ts_avail

Name                                Null?    Type
----------------------------------- -------- --------------
TABLESPACE_NAME                              VARCHAR2(30)
BYTES                                        NUMBER

SQL> select * from sm$ts_avail;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 20971520

DRSYS 20971520

EXAMPLE 155975680

INDX 26214400

ODM 20971520

SYSTEM 419430400

TOOLS 10485760

UNDOTBS1 209715200

USERS 26214400

XDB 39976960

10 rows selected

SQL> select * from sm$ts_used;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 9764864

DRSYS 10092544

EXAMPLE 155779072

ODM 9699328

SYSTEM 414908416

TOOLS 6291456

UNDOTBS1 9814016

XDB 39714816

8 rows selected

SQL> select * from sm$ts_free;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 11141120

DRSYS 10813440

EXAMPLE 131072

INDX 26148864

ODM 11206656

SYSTEM 4456448

TOOLS 4128768

UNDOTBS1 199753728

USERS 26148864

XDB 196608

10 rows selected

SQLgt ho LSNRCTL

LSNRCTLgt start

Starting tnslsnr please waithellip

Failed to open service ltOracleoracleTNSListenergt error 1060

TNSLSNR for 32-bit Windows Version 92010 ndash Production

System parameter file is Coracleora92networkadminlistenerora

Log messages written to Coracleora92networkloglistenerlog

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220000

Uptime 0 days 0 hr 0 min 16 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File Coracleora92networkadminlistenerora

Listener Log File Coracleora92networkloglistenerlog

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summaryhellip

Service ldquoTESTrdquo has 1 instance(s)

Instance ldquoTESTrdquo status UNKNOWN has 1 handler(s) for this servicehellip

The command completed successfully

LSNRCTLgt stop

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

LSNRCTLgt start

Starting tnslsnr please waithellip

TNSLSNR for 32-bit Windows Version 92010 ndash Production

System parameter file is Coracleora92networkadminlistenerora

Log messages written to Coracleora92networkloglistenerlog

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220048

Uptime 0 days 0 hr 0 min 0 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File Coracleora92networkadminlistenerora

Listener Log File Coracleora92networkloglistenerlog

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summaryhellip

Service ldquoTESTrdquo has 1 instance(s)

Instance ldquoTESTrdquo status UNKNOWN has 1 handler(s) for this servicehellip

The command completed successfully

LSNRCTLgt exit

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt exit

Disconnected from Oracle9i Enterprise Edition Release 92010 ndash Production

With the Partitioning OLAP and Oracle Data Mining options

JServer Release 92010 ndash Production

C:\Documents and Settings\Administrator>lsnrctl stop

LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 – Production on 22-AUG-2009 22:03:14

Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

C:\Documents and Settings\Administrator>oradim -delete -sid test

Step 3

Install the Oracle 10g software in a different Oracle Home.

Start the DB with the 10g instance and begin the upgrade process.

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649'

File created

SQLgt shut immediate

ORA-01507 database not mounted

ORACLE instance shut down

SQLgt startup upgrade

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

ORA-01990 error opening password file (create password file)

SQL> conn / as sysdba

Connected

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace creation script shown below)

create tablespace SYSAUX datafile 'sysaux01.dbf'
size 70M reuse
extent management local
segment space management auto
online;

Tablespace created

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database server version is not correct for this script

DOCgt Shutdown ABORT and use a different script or a different server

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database has not been opened for UPGRADE

DOCgt

DOCgt Perform a ldquoSHUTDOWN ABORTrdquo and

DOCgt restart using UPGRADE

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statements will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the SYSAUX tablespace does not exist or is not

DOCgt ONLINE for READ WRITE PERMANENT EXTENT MANAGEMENT LOCAL and

DOCgt SEGMENT SPACE MANAGEMENT AUTO

DOCgt

DOCgt The SYSAUX tablespace is used in 101 to consolidate data from

DOCgt a number of tablespaces that were separate in prior releases

DOCgt Consult the Oracle Database Upgrade Guide for sizing estimates

DOCgt

DOCgt Create the SYSAUX tablespace for example

DOCgt

DOCgt create tablespace SYSAUX datafile lsquosysaux01dbfrsquo

DOCgt size 70M reuse

DOCgt extent management local

DOCgt segment space management auto

DOCgt online

DOCgt

DOCgt Then rerun the u0902000sql script

DOCgt

DOCgt

DOCgt

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered

Session altered

The script will run according to the size of the databasehellip

All packagesscriptssynonyms will be upgraded

At last it will show the message as follows

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

1 row selected

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt


TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP DBUPG_END 2009-08-22 22:59:09

1 row selected.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  239075328 bytes
Fixed Size                   788308 bytes
Variable Size             212859052 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes
Database mounted.
Database opened.

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------
776

1 row selected.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PL/SQL procedure successfully completed.

Oracle Database 10.1 Upgrade Status Tool    22-AUG-2009 11:18:36

--> Oracle Database Catalog Views          Normal successful completion
--> Oracle Database Packages and Types     Normal successful completion
--> JServer JAVA Virtual Machine           Normal successful completion
--> Oracle XDK                             Normal successful completion
--> Oracle Database Java Packages          Normal successful completion
--> Oracle XML Database                    Normal successful completion
--> Oracle Workspace Manager               Normal successful completion
--> Oracle Data Mining                     Normal successful completion
--> OLAP Analytic Workspace                Normal successful completion
--> OLAP Catalog                           Normal successful completion
--> Oracle OLAP API                        Normal successful completion
--> Oracle interMedia                      Normal successful completion
--> Spatial                                Normal successful completion
--> Oracle Text                            Normal successful completion
--> Oracle Ultra Search                    Normal successful completion

No problems detected during upgrade

PL/SQL procedure successfully completed.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN 2009-08-22 23:19:07

1 row selected.

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END 2009-08-22 23:20:13

1 row selected.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------
0

1 row selected.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod
PL/SQL Release 10.1.0.2.0 - Production
CORE    10.1.0.2.0      Production
TNS for 32-bit Windows: Version 10.1.0.2.0 - Production
NLSRTL Version 10.1.0.2.0 - Production

5 rows selected.

Check the database to confirm that everything is working fine.
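A quick way to double-check the result (a minimal sketch; v$database and dba_registry are standard dictionary views in 10g):

SQL> select name, open_mode from v$database;
SQL> select comp_name, version, status from dba_registry;
SQL> select owner, object_type, count(*) from dba_objects
  2  where status='INVALID' group by owner, object_type;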


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host — by Deepak — 3 Comments — February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database – from Metalink note 732624.1

Hi,

Just wanted to share this topic.

How do you duplicate a database without connecting to the target database, using backups taken with RMAN, on an alternate host?

Solution – follow the steps below (a consolidated sketch appears after the steps):

1) Export ORACLE_SID=<SID name as of production>.
   Create an init.ora file and set db_name=<db name of production> and control_files=<location where you want the controlfile to be restored>.

2) Startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:
   RMAN> restore controlfile from '<backup piece of the controlfile which you took on production>';
   The controlfile should be restored.

4) Issue "alter database mount". Make sure the backup pieces are in the same location where they were on the production db. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:
   RMAN> catalog backuppiece '<piece name and path>';
   If there are more backup pieces, they can be cataloged using:
   RMAN> catalog start with '<path where backup pieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:

   run {
     set newname for datafile 1 to '<newLocation>/system.dbf';
     set newname for datafile 2 to '<newLocation>/undotbs.dbf';
     ...
     restore database;
     switch datafile all;
   }
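Putting the steps together, a minimal end-to-end sketch might look like this (the SID, pfile, backup and datafile paths are placeholders, not taken from the original note):

$ export ORACLE_SID=PROD
$ sqlplus / as sysdba
SQL> startup nomount pfile='/u01/app/oracle/admin/PROD/initPROD.ora'
SQL> exit
$ rman target / nocatalog
RMAN> restore controlfile from '/backup/PROD_ctl.bkp';
RMAN> alter database mount;
RMAN> catalog start with '/backup/';
RMAN> run {
  set newname for datafile 1 to '/u02/oradata/PROD/system.dbf';
  set newname for datafile 2 to '/u02/oradata/PROD/undotbs.dbf';
  restore database;
  switch datafile all;
}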


Features introduced in the various Oracle server releases

Filed under: Features Of Various release of Oracle Database — by Deepak — Leave a comment — February 2, 2010

Features introduced in the various server releases (submitted by admin on Sun, 2005-10-30 14:02)

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) – September 2005

Transparent Data Encryption. Asynchronous commits. The CONNECT role can now only connect. Passwords for DB links are encrypted. New asmcmd utility for managing ASM storage.

Oracle 10g Release 1 (10.1.0)

Grid computing – an extension of the clustering feature (Real Application Clusters). Manageability improvements (self-tuning features). Performance and scalability improvements. Automated Storage Management (ASM). Automatic Workload Repository (AWR). Automatic Database Diagnostic Monitor (ADDM). Flashback operations available at row, transaction, table or database level. Ability to UNDROP a table from a recycle bin. Ability to rename tablespaces. Ability to transport tablespaces across machine types (e.g. Windows to Unix). New 'drop database' statement. New database scheduler – DBMS_SCHEDULER. DBMS_FILE_TRANSFER package. Support for bigfile tablespaces, up to 8 exabytes in size. Data Pump – faster data movement with expdp and impdp.

Oracle 9i Release 2 (9.2.0)

Locally managed SYSTEM tablespaces. Oracle Streams – new data sharing/replication feature (can potentially replace Oracle Advanced Replication and standby databases). XML DB (Oracle is now a standards-compliant XML database). Data segment compression (compress keys in tables – only when loading data). Cluster file system for Windows and Linux (raw devices are no longer required). Create logical standby databases with Data Guard. Java JDK 1.3 used inside the database (JVM). Oracle Data Guard enhancements (SQL Apply mode – logical copy of primary database, automatic failover). Security improvements – default install accounts locked, VPD on synonyms, AES, migrate users to directory.

Oracle 9i Release 1 (9.0.1) – June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "rollback segments" and size them automatically without any DBA involvement.

Flashback query (dbms_flashback.enable) – one can query data as it looked at some point in the past. This feature allows users to correct wrongly committed transactions without contacting the DBA to do a database restore.

Use Oracle Ultra Search for searching databases, file systems, etc. The UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.

The Oracle Names server is still available but deprecated in favour of LDAP Naming (using the Oracle Internet Directory Server). A Names server proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.

Oracle Parallel Server's (OPS) scalability was improved – now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster; applications don't need to be cluster-aware anymore.

The Oracle Standby DB feature was renamed to Oracle Data Guard. New logical standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.

Scrolling cursor support: Oracle9i allows fetching backwards in a result set. Dynamic memory management – buffer pools and the shared pool can be resized on-the-fly. This eliminates the need to restart the database each time parameter changes are made. Online table and index reorganization. VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net); VI provides fast communications between components in a cluster.

Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.

The Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.

PL/SQL programs can be natively compiled to binaries. Deep data protection – fine-grained security and auditing; security is put on the DB level, so SQL access does not mean unrestricted access. Resumable backups and statements – suspend a statement instead of rolling back immediately. List partitioning – partitioning on a list of values. ETL (extract, transform, load) operations – with external tables and pipelining. OLAP – Express functionality included in the DB. Data Mining – Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)

Static HTTP server included (Apache). JVM Accelerator to improve performance of Java code. Java Server Pages (JSP) engine. MemStat – a new utility for analyzing Java memory footprints. OIS – Oracle Integration Server introduced. PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web. Enterprise Manager enhancements – including new HTML-based reporting and Advanced Replication functionality included. New Database Character Set Migration utility included.

Oracle 8i (8.1.6)

PL/SQL Server Pages (PSPs). DBA Studio introduced. Statspack. New SQL functions (rank, moving average). ALTER FREELISTS command (previously done by DROP/CREATE TABLE). Checksums always on for the SYSTEM tablespace, allowing many possible corruptions to be fixed before writing to disk. XML Parser for Java. New PL/SQL encrypt/decrypt package introduced. Users and schemas separated. Numerous performance enhancements.

Oracle 8i (8.1.5)

Fast Start recovery – checkpoint rate auto-adjusted to meet roll-forward criteria. Reorganize indexes/index-only tables while users access data – online index rebuilds. Log Miner introduced – allows online or archived redo logs to be viewed via SQL. OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication. Advanced Queueing improvements (security, performance, OO4O support). User security improvements – more centralisation, single enterprise user, users/roles across multiple databases. Virtual Private Database. Java stored procedures (Oracle Java VM). Oracle iFS. Resource management using priorities – resource classes. Hash and composite partitioned table types. SQL*Loader direct load API. Copy optimizer statistics across databases to ensure the same access paths across different environments. Standby database – auto shipping and application of redo logs; read-only queries on the standby database allowed. Enterprise Manager v2 delivered. NLS – Euro symbol supported. Analyze tables in parallel. Temporary tables supported. Net8 support for SSL, HTTP, HOP protocols. Transportable tablespaces between databases. Locally managed tablespaces – automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability. Drop column on a table (finally!). DBMS_DEBUG PL/SQL package. DBMS_SQL replaced by the new EXECUTE IMMEDIATE statement. Progress monitor to track long-running DML and DDL. Functional indexes – NLS, case insensitive, descending.

Oracle 8.0 – June 1997

Object-relational database. Object types (not just date, character, number as in v7); SQL3 standard. Call external procedures. LOBs – more than one per table. Partitioned tables and indexes; export/import individual partitions; partitions in multiple tablespaces; online/offline backup/recover individual partitions; merge/balance partitions. Advanced Queuing for message handling. Many performance improvements to SQL/PLSQL/OCI, making more efficient use of CPU/memory. V7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2). Parallel DML statements. Connection pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users. Improved "STAR" query optimizer. Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM in v7). Performance improvements in OPS – global V$ views introduced across all instances, transparent failover to a new node. Data cartridges introduced in the database (e.g. image, video, context, time, spatial). Backup/recovery improvements – tablespace point-in-time recovery, incremental backups, parallel backup/recovery. Recovery Manager introduced. Security Server introduced for central user administration. User password expiry, password profiles, allow custom password scheme. Privileged database links (no need for password to be stored). Fast refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel, Replication Manager introduced. Index-organized tables. Deferred integrity constraint checking (deferred until end of transaction instead of end of statement). SQL*Net replaced by Net8. Reverse key indexes. Any VIEW updateable. New ROWID format.

Oracle 7.3

Partitioned views. Bitmapped indexes. Asynchronous read-ahead for table scans. Standby database. Deferred transaction recovery on instance startup. Updatable join views (with restrictions). SQL*DBA no longer shipped. Index rebuilds. db_verify introduced. Context Option. Spatial Data Option. Tablespace changes – coalesce, temporary, permanent. Trigger compilation, debug. Unlimited extents on the STORAGE clause. Some init.ora parameters modifiable – TIMED_STATISTICS. Hash joins, antijoins. Histograms. Dependencies. Oracle Trace. Advanced Replication object groups. PL/SQL – UTL_FILE.

Oracle 7.2

Resizable, autoextend data files. Shrink rollback segments manually. Create table/index UNRECOVERABLE. Subquery in FROM clause. PL/SQL wrapper. PL/SQL cursor variables. Checksums – DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM. Parallel create table. Job queues – DBMS_JOB. DBMS_SPACE. DBMS Application Info. Sorting improvements – SORT_DIRECT_WRITES.

Oracle 7.1

ANSI/ISO SQL92 Entry Level. Advanced Replication – symmetric data replication. Snapshot refresh groups. Parallel recovery. Dynamic SQL – DBMS_SQL. Parallel query options – query, index creation, data loading. Server Manager introduced. Read-only tablespaces.

Oracle 7.0 – June 1992

Database integrity constraints (primary/foreign keys, check constraints, default values). Stored procedures and functions, procedure packages. Database triggers. View compilation. User-defined SQL functions. Role-based security. Multiple redo members – mirrored online redo log files. Resource limits – profiles. Much enhanced auditing. Enhanced distributed database functionality – INSERTs, UPDATEs, DELETEs; two-phase commit. Incomplete database recovery (e.g. to an SCN). Cost based optimiser. TRUNCATE tables. Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR). SQL*Net v2, MTS. Checkpoint process. Data replication – snapshots.

Oracle 6.2

Oracle Parallel Server.

Oracle 6 – July 1988

Row-level locking. Online database backups. PL/SQL in the database.

Oracle 5.1

Distributed queries.

Oracle 5.0 – 1986

Support for the client-server model – PCs can access the DB on a remote host.

Oracle 4 – 1984

Read consistency.

Oracle 3 – 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions). Non-blocking queries (no more read locks). Re-written in the C programming language.

Oracle 2 – 1979

First public release. Basic SQL functionality: queries and joins.

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh — by Deepak — 1 Comment — December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh – before exporting:

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' GROUP BY object_type;
2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege || ' to sh;' from session_privs;
5. select 'grant ' || role || ' to sh;' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH' group by tablespace_name;
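A minimal sketch of how the spooling step might look in SQL*Plus (the file name is illustrative, not from the original post; run it while connected as SH so session_privs/session_roles reflect that user):

SQL> set pages 0 feedback off
SQL> spool /location/sh_grants.sql
SQL> select 'grant ' || privilege || ' to sh;' from session_privs;
SQL> select 'grant ' || role || ' to sh;' from session_roles;
SQL> spool off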

Export the 'SH' schema:

exp username/password file=/location/sh_bkp.dmp log=/location/sh_exp.log owner=SH direct=y

Steps to drop and recreate the schema:

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema:

Take a full schema export as shown above.

Drop all the objects in the 'SH' schema:

To drop all the objects in the schema, connect as the schema owner and spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts, and all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema:

To truncate all the objects in the schema, connect as the schema owner and spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts, and all the objects will be truncated.

Disabling the reference constraints:

If there is any constraint violation while truncating, use the query below to find the reference (foreign key) constraints and disable them. Spool the output of the query and run the generated script (a generator sketch follows).

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');
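A minimal sketch of generating the disable (and later re-enable) statements for those foreign keys, using the schema owner's own dictionary views (file names are placeholders):

SQL> set head off feedback off
SQL> spool disable_fks.sql
SQL> select 'alter table '||table_name||' disable constraint '||constraint_name||';'
  2  from user_constraints where constraint_type='R';
SQL> spool off
-- after the truncate/import, generate the matching enable statements:
SQL> select 'alter table '||table_name||' enable constraint '||constraint_name||';'
  2  from user_constraints where constraint_type='R';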

Importing the 'SH' schema:

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user:

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> Drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option, as in the sketch below.
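For example (the target schema name sh_test is only a placeholder):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=sh:sh_test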

Check the imported objects and compile any invalid objects.


JOB SCHEDULING

Filed under: JOB SCHEDULING — by Deepak — Leave a comment — December 15, 2009

CRON JOB SCHEDULING IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file, you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically, they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0-7)
|     |     |     +----------- Month (1-12)
|     |     +----------------- Day of month (1-31)
|     +----------------------- Hour (0-23)
+----------------------------- Minute (0-59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected, as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null – meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup – scheduling in a Windows environment.

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp\coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on "Add a scheduled task".
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule, the job won't run, so edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g — by Deepak — 1 Comment — December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM
Standby = STAN

I. Before Switchover:

1. As I always recommend, test the switchover first on your test systems before working on production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary database and the standby database to find out:
SQL> select sequence#, applied from v$archived_log;
Perform SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use Real-time apply.

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby db STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
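After the switchover it is worth confirming the new roles on both sides; a minimal check (not part of the original steps):

SQL> select name, database_role, open_mode from v$database;   -- on STAN: should now show PRIMARY
SQL> select name, database_role from v$database;              -- on PRIM: should now show PHYSICAL STANDBY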


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump — by Deepak — Leave a comment — December 14, 2009

Encryption with Oracle Data Pump

– from an Oracle white paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and which provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns, for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format. Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature thus remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP
       (empid   NUMBER(6),
        empname VARCHAR2(100),
        salary  NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");
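Note that after an instance restart the wallet has to be opened again before the encrypted column (and any Data Pump job touching it) can be used. A minimal sketch, assuming the same wallet password as above:

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";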

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and the export command is then re-issued.

Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in a following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error. (The first example below is the table-mode export without a password, showing the ORA-39173 warning mentioned earlier; the second is the transportable tablespace attempt.)

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"                        6.25 KB       3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as shown further below.

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48

By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

- A full metadata-only import.
- A schema-mode import in which the referenced schemas do not include tables with encrypted columns.
- A table-mode import in which the referenced tables do not include encrypted columns.

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g — by Deepak — Leave a comment — December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import.

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump: Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file, with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
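For instance, a minimal illustration of using EXCLUDE on an import of the same dump file (the object types chosen here are only examples):

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp EXCLUDE=GRANT EXCLUDE=INDEX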

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export.

Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa); a sketch of doing this interactively follows the example below.

For best performance, you should do the following:

- Make sure your system is well balanced across CPU, memory and I/O.
- Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.
- Put files that are members of a dump file set on separate disks, so that they will be written and read in parallel.
- For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4
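A minimal sketch of changing the degree of parallelism from the interactive prompt (the job name hr matches the example above; the new value is only illustrative):

> expdp username/password ATTACH=hr
Export> PARALLEL=8
Export> STATUS
Export> CONTINUE_CLIENT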

REMAP

bull REMAP_TABLESPACE ndash This allows you to easily import a table into a different

tablespace from which it was originally exported The databases have to be 101 or later

Example

gt impdp usernamepassword REMAP_TABLESPACE=tbs_1tbs_6

DIRECTORY=dpumpdir1 DUMPFILE=employeesdmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (a short sketch follows the list):

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
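A minimal sketch of detaching from and re-attaching to a job; the job name and credentials below are illustrative, not from the article:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=full%U.dmp FULL=Y JOB_NAME=full_exp

Press Ctrl+C to reach the interactive prompt, then:

Export> STOP_JOB=IMMEDIATE

Later, re-attach to the same job and resume it:

> expdp username/password ATTACH=full_exp

Export> START_JOB
Export> STATUS
Export> PARALLEL=4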

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful when you are moving data between databases, such as from data marts to a data warehouse, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export also gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
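As a rough sketch, assuming a database link named source_db already exists and points at the remote instance (the link, directory, and file names are illustrative):

> expdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=source_db DUMPFILE=remote_full.dmp FULL=Y

> impdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=source_db TABLES=hr.employees

With impdp and NETWORK_LINK no dump file is written at all; the rows are pulled straight across the link into the local database.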

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp

SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and contains the SQL DDL that could be executed in another database to create the tables and indexes as desired.

Comment

Clone Database using RMAN

Filed under Clone database using RMAN by Deepak mdash Leave a comment

December 10 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1. Take a full backup using RMAN:

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all;

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1. Create the service and password file, and add entries to the tnsnames.ora and listener.ora files. Create all the folders needed for the database.
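On Windows this is typically done with oradim and orapwd; a minimal sketch (the password and folder layout are illustrative, not taken from the post):

C:\> oradim -NEW -SID clone -STARTMODE manual
C:\> orapwd file=C:\oracle\ora92\database\PWDclone.ora password=sys_password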

2. Edit the pfile and add the following parameters:

db_file_name_convert='target db oradata path','clone db oradata path'

log_file_name_convert='target db oradata path','clone db oradata path'
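For example, a minimal initclone.ora might contain the lines below; the directory layout is assumed for illustration, not taken from the post:

db_name=clone
control_files='C:\oracle\oradata\clone\control01.ctl'
db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'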

3. Start the listener using the lsnrctl command, and then start the clone db in nomount using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'; (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running…

SQL> select name from v$database;
select name from v$database
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
ERROR at line 1:
ORA-01100: database already mounted

8. The duplicate will run for a while. Then exit from RMAN and open the database using resetlogs:

SQL> alter database open resetlogs;

Database altered

9. Check the DBID.

10. Create a temporary tablespace (a short sketch follows).
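A minimal sketch for step 10; the file name and size are illustrative. If a temporary tablespace already exists in the cloned dictionary, add a tempfile to it instead:

SQL> create temporary tablespace temp01 tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100M autoextend on;
-- or, if TEMP already exists:
SQL> alter tablespace temp add tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100M reuse;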

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

Comment

step by step standby database configuration in 10g

Filed under Dataguard - creation of standby database in 10g by Deepak mdash Leave a comment December 9 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a physical STANDBY database on Windows and UNIX servers, and maintenance tips for the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection, and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a physical STANDBY database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use a Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$cd %ORACLE_HOME%\database
$orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$cd $ORACLE_HOME/dbs
$orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='...\database\pfilePRIMARY.ora' from spfile;
(Note: replace '...' with your Oracle home path.)

- On UNIX:
SQL> create pfile='.../dbs/pfilePRIMARY.ora' from spfile;
(Note: replace '...' with your Oracle home path.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='...\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='...\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='.../dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='.../dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;
(Note: replace '...' with your Oracle home path.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump, and udump directories and the archived log destination for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory, then rename the password file.

7. For Windows, create a Windows-based service (optional):
$oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$lsnrctl stop
$lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$lsnrctl stop
$lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY (a sample entry follows this step). Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY
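For reference, a tnsnames.ora entry for the STANDBY service might look like this sketch; the host name and port are placeholders, not values from the original setup:

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = STANDBY)
    )
  )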

10. On the STANDBY server, set up the environment variables to point to the STANDBY database (a short sketch follows).

Set up ORACLE_HOME and ORACLE_SID.
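For example (the Oracle home path shown is a placeholder, not from the original article):

On Windows:
C:\> set ORACLE_SID=STANDBY

On UNIX (bash):
$ export ORACLE_SID=STANDBY
$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1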

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='...\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='...\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='.../dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='.../dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: replace '...' with your Oracle home path.)

12. Start Redo apply.
1) On the STANDBY database, start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


SQL> desc sm$ts_avail
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 TABLESPACE_NAME                                    VARCHAR2(30)
 BYTES                                              NUMBER

SQL> select * from sm$ts_avail;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 20971520

DRSYS 20971520

EXAMPLE 155975680

INDX 26214400

ODM 20971520

SYSTEM 419430400

TOOLS 10485760

UNDOTBS1 209715200

USERS 26214400

XDB 39976960

10 rows selected

SQL> select * from sm$ts_used;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 9764864

DRSYS 10092544

EXAMPLE 155779072

ODM 9699328

SYSTEM 414908416

TOOLS 6291456

UNDOTBS1 9814016

XDB 39714816

8 rows selected

SQL> select * from sm$ts_free;

TABLESPACE_NAME BYTES

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdash mdashmdashmdash-

CWMLITE 11141120

DRSYS 10813440

EXAMPLE 131072

INDX 26148864

ODM 11206656

SYSTEM 4456448

TOOLS 4128768

UNDOTBS1 199753728

USERS 26148864

XDB 196608

10 rows selected

SQLgt ho LSNRCTL

LSNRCTLgt start

Starting tnslsnr please waithellip

Failed to open service ltOracleoracleTNSListenergt error 1060

TNSLSNR for 32-bit Windows Version 92010 ndash Production

System parameter file is Coracleora92networkadminlistenerora

Log messages written to Coracleora92networkloglistenerlog

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220000

Uptime 0 days 0 hr 0 min 16 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File Coracleora92networkadminlistenerora

Listener Log File Coracleora92networkloglistenerlog

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summaryhellip

Service ldquoTESTrdquo has 1 instance(s)

Instance ldquoTESTrdquo status UNKNOWN has 1 handler(s) for this servicehellip

The command completed successfully

LSNRCTLgt stop

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

LSNRCTLgt start

Starting tnslsnr please waithellip

TNSLSNR for 32-bit Windows Version 92010 ndash Production

System parameter file is Coracleora92networkadminlistenerora

Log messages written to Coracleora92networkloglistenerlog

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220048

Uptime 0 days 0 hr 0 min 0 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File Coracleora92networkadminlistenerora

Listener Log File Coracleora92networkloglistenerlog

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summaryhellip

Service ldquoTESTrdquo has 1 instance(s)

Instance ldquoTESTrdquo status UNKNOWN has 1 handler(s) for this servicehellip

The command completed successfully

LSNRCTLgt exit

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt exit

Disconnected from Oracle9i Enterprise Edition Release 92010 ndash Production

With the Partitioning OLAP and Oracle Data Mining options

JServer Release 92010 ndash Production

C:\Documents and Settings\Administrator>lsnrctl stop

LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 22-AUG-2009 22:03:14

Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

C:\Documents and Settings\Administrator>oradim -delete -sid test

Step 3:

Install the Oracle 10g software in a different Oracle home.

Start the DB with the 10g instance and begin the upgrade process:

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649';

File created

SQLgt shut immediate

ORA-01507 database not mounted

ORACLE instance shut down

SQLgt startup upgrade

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

ORA-01990 error opening password file (create password file)

SQL> conn / as sysdba

Connected.

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace creation script, as shown below)

create tablespace SYSAUX datafile 'sysaux01.dbf'
size 70M reuse
extent management local
segment space management auto
online;

Tablespace created.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database server version is not correct for this script

DOCgt Shutdown ABORT and use a different script or a different server

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database has not been opened for UPGRADE

DOCgt

DOCgt Perform a ldquoSHUTDOWN ABORTrdquo and

DOCgt restart using UPGRADE

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statements will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the SYSAUX tablespace does not exist or is not

DOCgt ONLINE for READ WRITE PERMANENT EXTENT MANAGEMENT LOCAL and

DOCgt SEGMENT SPACE MANAGEMENT AUTO

DOCgt

DOCgt The SYSAUX tablespace is used in 101 to consolidate data from

DOCgt a number of tablespaces that were separate in prior releases

DOCgt Consult the Oracle Database Upgrade Guide for sizing estimates

DOCgt

DOCgt Create the SYSAUX tablespace for example

DOCgt

DOCgt create tablespace SYSAUX datafile lsquosysaux01dbfrsquo

DOCgt size 70M reuse

DOCgt extent management local

DOCgt segment space management auto

DOCgt online

DOCgt

DOCgt Then rerun the u0902000sql script

DOCgt

DOCgt

DOCgt

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered

Session altered

The script will run according to the size of the databasehellip

All packagesscriptssynonyms will be upgraded

At last it will show the message as follows

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

1 row selected

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP DBUPG_END 2009-08-22 225909

1 row selected

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT()

mdashmdashmdash-

776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PLSQL procedure successfully completed

Oracle Database 101 Upgrade Status Tool 22-AUG-2009 111836

ndashgt Oracle Database Catalog Views Normal successful completion

ndashgt Oracle Database Packages and Types Normal successful completion

ndashgt JServer JAVA Virtual Machine Normal successful completion

ndashgt Oracle XDK Normal successful completion

ndashgt Oracle Database Java Packages Normal successful completion

ndashgt Oracle XML Database Normal successful completion

ndashgt Oracle Workspace Manager Normal successful completion

ndashgt Oracle Data Mining Normal successful completion

ndashgt OLAP Analytic Workspace Normal successful completion

ndashgt OLAP Catalog Normal successful completion

ndashgt Oracle OLAP API Normal successful completion

ndashgt Oracle interMedia Normal successful completion

ndashgt Spatial Normal successful completion

ndashgt Oracle Text Normal successful completion

ndashgt Oracle Ultra Search Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_BGN 2009-08-22 231907

1 row selected

PLSQL procedure successfully completed

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_END 2009-08-22 232013

1 row selected

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects where status='INVALID';

COUNT()

mdashmdashmdash-

0

1 row selected

SQL> select * from v$version;

BANNER

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdash-

Oracle Database 10g Enterprise Edition Release 101020 ndash Prod

PLSQL Release 101020 ndash Production

CORE 101020 Production

TNS for 32-bit Windows Version 101020 ndash Production

NLSRTL Version 101020 ndash Production

5 rows selected

Check the database to confirm that everything is working fine.

Comment

Duplicate Database With RMAN Without Connecting To Target Database

Filed under Duplicate database without connecting to target database using backups taken from RMAN on alternate host by Deepak mdash 3 Comments February 24 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink note 732624.1

Hi,

Just wanted to share this topic.

How to duplicate a database, without connecting to the target database, using backups taken with RMAN on an alternate host.

Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as on production>.
Create an init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:
RMAN> restore controlfile from '<backuppiece of controlfile which you took on production>';
The controlfile should be restored.

4) Issue "alter database mount". Make sure the backup pieces are in the same location they were in on the production db. If you don't have the same location, make RMAN aware of the changed location using the "catalog" command:
RMAN> catalog backuppiece '<piece name and path>';
If there are more backup pieces, they can be cataloged using the command:
RMAN> catalog start with '<path where backuppieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below (see the consolidated sketch after this list):
run {
set newname for datafile 1 to 'newLocation/system.dbf';
set newname for datafile 2 to 'newLocation/undotbs.dbf';
...
restore database;
switch datafile all;
}
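Putting the steps together, an end-to-end sketch might look like the following; the SID, backup piece names, and paths are placeholders, not values from the note:

$ export ORACLE_SID=PROD
$ rman target /

RMAN> startup nomount pfile='/u01/app/oracle/initPROD.ora';
RMAN> restore controlfile from '/backups/c-1234567890-20100224-00';
RMAN> alter database mount;
RMAN> catalog start with '/backups/';
RMAN> run {
set newname for datafile 1 to '/u02/oradata/PROD/system01.dbf';
set newname for datafile 2 to '/u02/oradata/PROD/undotbs01.dbf';
restore database;
switch datafile all;
}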

Comment

Features introduced in the various Oracle server releases

Filed under Features Of Various release of Oracle Database by Deepak mdash Leave a comment February 2 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (1020) ndash September 2005

Transparent Data Encryption. Async commits. The CONNECT role can now only connect. Passwords for DB links are encrypted. New asmcmd utility for managing ASM storage.

Oracle 10g Release 1 (1010)

Grid computing ndash an extension of the clustering feature (Real Application Clusters) Manageability improvements (self-tuning features)

Performance and scalability improvements Automated Storage Management (ASM) Automatic Workload Repository (AWR) Automatic Database Diagnostic Monitor (ADDM) Flashback operations available on row transaction table or database level Ability to UNDROP a table from a recycle bin Ability to rename tablespaces Ability to transport tablespaces across machine types (Eg Windows to Unix) New lsquodrop databasersquo statement New database scheduler ndash DBMS_SCHEDULER DBMS_FILE_TRANSFER Package Support for bigfile tablespaces that is up to 8 Exabytes in size Data Pump ndash faster data movement with expdp and impdp

Oracle 9i Release 2 (920)

Locally Managed SYSTEM tablespaces Oracle Streams ndash new data sharingreplication feature (can potentially replace Oracle

Advance Replication and Standby Databases) XML DB (Oracle is now a standards compliant XML database) Data segment compression (compress keys in tables ndash only when loading data) Cluster file system for Windows and Linux (raw devices are no longer required) Create logical standby databases with Data Guard Java JDK 13 used inside the database (JVM) Oracle Data Guard Enhancements (SQL Apply mode ndash logical copy of primary database

automatic failover Security Improvements ndash Default Install Accounts locked VPD on synonyms AES

Migrate Users to Directory

Oracle 9i Release 1 (901) ndash June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "rollback segments" and size them automatically without any DBA involvement.

Flashback query (dbms_flashback.enable) - one can query data as it looked at some point in the past. This feature allows users to correct wrongly committed transactions without contacting the DBA to do a database restore.

Use Oracle Ultra Search for searching databases file systems etc The UltraSearch crawler fetch data and hand it to Oracle Text to be indexed

Oracle Nameserver is still available but deprecate in favour of LDAP Naming (using the Oracle Internet Directory Server) A nameserver proxy is provided for backwards compatibility as pre-8i client cannot resolve names from an LDAP server

Oracle Parallel Server's (OPS) scalability was improved - now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster. Applications don't need to be cluster aware anymore.

The Oracle Standby DB feature renamed to Oracle Data Guard New Logical Standby databases replay SQL on standby site allowing the database to be used for normal read write operations The Data Guard Broker allows single step fail-over when disaster strikes

Scrolling cursor support Oracle9i allows fetching backwards in a result set Dynamic Memory Management ndash Buffer Pools and shared pool can be resized on-the-fly

This eliminates the need to restart the database each time parameter changes were made On-line table and index reorganization VI (Virtual Interface) protocol support an alternative to TCPIP available for use with

Oracle Net (SQLNet) VI provides fast communications between components in a cluster

Build in XML Developers Kit (XDK) New data types for XML (XMLType) URIrsquos etc XML integrated with AQ

Cost Based Optimizer now also consider memory and CPU not only disk access cost as before

PLSQL programs can be natively compiled to binaries Deep data protection ndash fine grained security and auditing Put security on DB level SQL

access do not mean unrestricted access Resumable backups and statements ndash suspend statement instead of rolling back

immediately List Partitioning ndash partitioning on a list of values ETL (eXtract transformation load) Operations ndash with external tables and pipelining OLAP ndash Express functionality included in the DB Data Mining ndash Oracle Darwinrsquos features included in the DB

Oracle 8i (817)

Static HTTP server included (Apache) JVM Accelerator to improve performance of Java code Java Server Pages (JSP) engine MemStat ndash A new utility for analyzing Java Memory footprints OIS ndash Oracle Integration Server introduced PLSQL Gateway introduced for deploying PLSQL based solutions on the Web Enterprise Manager Enhancements ndash including new HTML based reporting and

Advanced Replication functionality included New Database Character Set Migration utility included

Oracle 8i (816)

PLSQL Server Pages (PSPrsquos) DBA Studio Introduced Statspack New SQL Functions (rank moving average) ALTER FREELISTS command (previously done by DROPCREATE TABLE) Checksums always on for SYSTEM tablespace allowing many possible corruptions to be

fixed before writing to disk

XML Parser for Java New PLSQL encryptdecrypt package introduced User and Schemas separated Numerous Performance Enhancements

Oracle 8i (815)

Fast Start recovery ndash Checkpoint rate auto-adjusted to meet roll forward criteria Reorganize indexesindex only tables which users accessing data ndash Online index rebuilds Log Miner introduced ndash Allows on-line or archived redo logs to be viewed via SQL OPS Cache Fusion introduced avoiding disk IO during cross-node communication Advanced Queueing improvements (security performance OO4O support User Security Improvements ndash more centralisation single enterprise user usersroles

across multiple databases Virtual private database JAVA stored procedures (Oracle Java VM) Oracle iFS Resource Management using priorities ndash resource classes Hash and Composite partitioned table types SQLLoader direct load API Copy optimizer statistics across databases to ensure same access paths across different

environments Standby Database ndash Auto shipping and application of redo logs Read Only queries on

standby database allowed Enterprise Manager v2 delivered NLS ndash Euro Symbol supported Analyze tables in parallel Temporary tables supported Net8 support for SSL HTTP HOP protocols Transportable tablespaces between databases Locally managed tablespaces ndash automatic sizing of extents elimination of tablespace

fragmentation tablespace information managed in tablespace (ie moved from data dictionary) improving tablespace reliability

Drop Column on table (Finally ) DBMS_DEBUG PLSQL package DBMS_SQL replaced by new EXECUTE

IMMEDIATE statement Progress Monitor to track long running DML DDL Functional Indexes ndash NLS case insensitive descending

Oracle 80 ndash June 1997

Object Relational database Object Types (not just date character number as in v7 SQL3 standard Call external procedures LOB gt1 per table

Partitioned Tables and Indexes exportimport individual partitions partitions in multiple tablespaces Onlineoffline backuprecover individual partitions mergebalance partitions Advanced Queuing for message handling Many performance improvements to SQLPLSQLOCI making more efficient use of

CPUMemory V7 limits extended (eg 1000 columnstable 4000 bytes VARCHAR2) Parallel DML statements Connection Pooling ( uses the physical connection for idle users and transparently re-

establishes the connection when needed) to support more concurrent users Improved ldquoSTARrdquo Query optimizer Integrated Distributed Lock Manager in Oracle PS (as opposed to Operating system DLM

in v7) Performance improvements in OPS ndash global V$ views introduced across all instances

transparent failover to a new node Data Cartridges introduced on database (eg image video context time spatial) BackupRecovery improvements ndash Tablespace point in time recovery incremental

backups parallel backuprecovery Recovery manager introduced Security Server introduced for central user administration User password expiry

password profiles allow custom password scheme Privileged database links (no need for password to be stored)

Fast Refresh for complex snapshots parallel replication PLSQL replication code moved in to Oracle kernel Replication manager introduced

Index Organized tables Deferred integrity constraint checking (deferred until end of transaction instead of end of

statement) SQLNet replaced by Net8 Reverse Key indexes Any VIEW updateable New ROWID format

Oracle 73

Partitioned Views Bitmapped Indexes Asynchronous read ahead for table scans Standby Database Deferred transaction recovery on instance startup Updatable Join Views (with restrictions) SQLDBA no longer shipped Index rebuilds db_verify introduced Context Option Spatial Data Option Tablespaces changes ndash Coalesce Temporary Permanent

Trigger compilation debug Unlimited extents on STORAGE clause Some initora parameters modifiable ndash TIMED_STATISTICS HASH Joins Antijoins Histograms Dependencies Oracle Trace Advanced Replication Object Groups PLSQL ndash UTL_FILE

Oracle 72

Resizable autoextend data files Shrink Rollback Segments manually Create table index UNRECOVERABLE Subquery in FROM clause PLSQL wrapper PLSQL Cursor variables Checksums ndash DB_BLOCK_CHECKSUM LOG_BLOCK_CHECKSUM Parallel create table Job Queues ndash DBMS_JOB DBMS_SPACE DBMS Application Info Sorting Improvements ndash SORT_DIRECT_WRITES

Oracle 71

ANSIISO SQL92 Entry Level Advanced Replication ndash Symmetric Data replication Snapshot Refresh Groups Parallel Recovery Dynamic SQL ndash DBMS_SQL Parallel Query Options ndash query index creation data loading Server Manager introduced Read Only tablespaces

Oracle 70 ndash June 1992

Database Integrity Constraints (primary foreign keys check constraints default values) Stored procedures and functions procedure packages Database Triggers View compilation User defined SQL functions Role based security Multiple Redo members ndash mirrored online redo log files Resource Limits ndash Profiles

Much enhanced Auditing Enhanced Distributed database functionality ndash INSERTS UPDATESDELETES 2PC Incomplete database recovery (eg SCN) Cost based optimiser TRUNCATE tables Datatype changes (ie VARCHAR2 CHAR VARCHAR) SQLNet v2 MTS Checkpoint process Data replication ndash Snapshots

Oracle 62

Oracle Parallel Server

Oracle 6 ndash July 1988

Row-level locking On-line database backups PLSQL in the database

Oracle 51

Distributed queries

Oracle 50 ndash 1986

Supporting for the Client-Server model ndash PCrsquos can access the DB on remote host

Oracle 4 ndash 1984

Read consistency

Oracle 3 ndash 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)

Nonblocking queries (no more read locks) Re-written in the C Programming Language

Oracle 2 ndash 1979

First public release Basic SQL functionality queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Comment

Schema Refresh

Filed under Schema refresh by Deepak mdash 1 Comment December 15 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a SQL file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;
2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege ||' to sh' from session_privs;
5. select 'grant ' || role ||' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH' group by tablespace_name;

Export the 'SH' schema:

exp username/password file='location/sh_bkp.dmp' log='location/sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles, and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects or truncating objects

Export the 'SH' schema

Take the schema full export as shown above.

Drop all the objects in the 'SH' schema

To drop all the objects in the schema:

Connect to the schema and spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS

WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema

To truncate all the objects in the schema:

Connect to the schema and spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the reference (foreign key) constraints and disable them. Spool the output of the query and run the resulting script.

select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS

where constraint_type='R'

and r_constraint_name in (select constraint_name from all_constraints

where table_name='TABLE_NAME');

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in oracle 10g

Here we can use Datapump

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option (see the sketch below).
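A minimal sketch, assuming the dump was taken from SH and the data should land in a schema named SH_COPY (the target schema name is illustrative, not from the original post):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=sh:sh_copy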

Check for the imported objects and compile the invalid objects

Comment

JOB SCHEDULING

Filed under JOB SCHEDULING by Deepak mdash Leave a comment December 15 2009

CRON JOB SCHEDULING - IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency its name suggests.

For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day (see the sketch below).

The time that the scripts in those system-wide directories run is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.
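For instance, a minimal sketch of dropping a nightly cleanup script into one of these directories; the script name, path, and retention period are illustrative:

$ cat > /etc/cron.daily/purge-old-traces << 'EOF'
#!/bin/sh
# Remove Oracle trace files older than 7 days (example path)
find /u01/app/oracle/admin/*/udump -name '*.trc' -mtime +7 -delete
EOF
$ chmod +x /etc/cron.daily/purge-old-traces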

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this:

*     *     *     *     *     Command to be executed
|     |     |     |     |
|     |     |     |     +----- Day of week (0-7)
|     |     |     +----------- Month (1-12)
|     |     +----------------- Day of month (1-31)
|     +----------------------- Hour (0-23)
+----------------------------- Min (0-59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When yoursquove saved the file and quit your editor you will see a message such as

crontab installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here wersquove told the cron system to execute the command ldquobinlsrdquo every time the minute equals 0 ie Wersquore running the command on the hour every hour

Any output of the command you run will be sent to you by email; if you wish to stop this, you should cause it to be redirected, as follows:

0 * * * * /bin/ls > /dev/null 2>&1

This causes all output to be redirected to /dev/null – meaning you won't see it.

Now we'll finish with some more examples.

# Run the `something` command every hour, on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup – scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp\coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run, so edit the scheduled task and enter the new password.
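If you prefer the command line to the wizard, roughly the same schedule can be created with schtasks (a sketch only; the task name, script path, run time and credentials are placeholders):

schtasks /create /tn "cold_bkp" /tr "D:\scripts\cold_bkp.bat" /sc daily /st 22:00 /ru Administrator /rp password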

Comment

Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak – 1 Comment, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I Before Switchover

1. As I always recommend, test the switchover first on your test systems before working on production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on both the primary and the standby database to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.
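On the standby, that is done with the following command (10g syntax, assuming standby redo logs are configured):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;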

II Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby DB STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:

- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can simply open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;

Comment

Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak – Leave a comment, December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature that was released with Oracle Database 11g release 1, and that provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature therefore remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU-intensive operations. Furthermore, additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET ENCRYPTION WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 – Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39001 invalid argument value

ORA-39180 unable to encrypt ENCRYPTION_PASSWORD

ORA-28365 wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 – Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir
dumpfile=emp tables=emp

Estimate in progress using BLOCKS method...

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 16 KB

Processing object type TABLE_EXPORT/TABLE/TABLE

. . exported "DP"."EMP"    6.25 KB    3 rows

ORA-39173: Encrypted data has been stored unencrypted in dump file set

Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded

Dump file set for DP.SYS_EXPORT_TABLE_01 is:
adejkaloger_lx9oracleworkempdmp

Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********
directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained

Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error
at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
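A rough sketch of that sequence on the target database (passwords are placeholders; on some releases the wallet is opened with AUTHENTICATED BY rather than IDENTIFIED BY):

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";
$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd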

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows.

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 – Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 11g Enterprise Edition Release

111070 ndash Production

With the Partitioning Data Mining and Real Application Testing

options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********
directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported

Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error
at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example.

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote
TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 – Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39005 inconsistent arguments

ORA-39115 ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release 102040 -

Production

With the Partitioning Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded

Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir
dumpfile=emp.dmp tables=emp encryption_password=********
table_exists_action=append

Processing object type TABLE_EXPORTTABLETABLE

ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:

ORA-02354: error in exporting/importing data

ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table

Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.
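In other words, dropping the parameter lets the same network import run (a sketch; the database link name remote follows the earlier example):

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND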

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import

• A schema-mode import in which the referenced schemas do not include tables with encrypted columns

• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
ENCRYPTION=ENCRYPTED_COLUMNS_ONLY

Comment

DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak – Leave a comment, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
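As an illustration, here is a sketch of an export that keeps everything in the SCOTT schema except indexes and statistics (the schema, directory and file names are assumptions, not from the original):

> expdp username/password SCHEMAS=scott DIRECTORY=dpump_dir1 DUMPFILE=scott_noidx.dmp EXCLUDE=INDEX EXCLUDE=STATISTICS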

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export. Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can now take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
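A sketch of what that looks like in interactive mode (the job name HR matches the parallel export example shown further below):

> expdp username/password ATTACH=hr
Export> PARALLEL=8
Export> STATUS
Export> CONTINUE_CLIENT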

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (a short sketch follows the list):

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
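As a sketch, stopping a job, adding a dump file and restarting it might look like this (the job and file names are illustrative):

> expdp username/password ATTACH=hr
Export> STOP_JOB=IMMEDIATE

(later, when resources are available again)
> expdp username/password ATTACH=hr
Export> ADD_FILE=hr_extra.dmp
Export> START_JOB
Export> STATUS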

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases (like data marts to data warehouses) and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
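A sketch of a network-mode export (the database link source_db is a placeholder that must already exist and point at the remote instance):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=remote_hr.dmp SCHEMAS=hr NETWORK_LINK=source_db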

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.

Comment

Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak – Leave a comment, December 10, 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1. Take a full backup using RMAN.

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination            C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQLgt ho rman

Recovery Manager Release 92010 ndash Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1. Create the service and the password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for a database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
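For the test/clone pair used in this walkthrough, the two parameters might read as follows (the drive and folder layout is an assumption; adjust to your own paths):

db_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')
log_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')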

3. Start up the listener using the lsnrctl command, and then start up the clone DB in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test   (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone';   (CLONE DB NAME)

SQLgt ho rman

RMAN> connect target sys/sys@test

connected to target database TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be runninghellip

SQLgt select name from v$database

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQLgt ho rman

SQLgt alter database mount

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8. It will run for a while; then exit from RMAN and open the database using RESETLOGS.

SQL> alter database open resetlogs;

Database altered

9. Check the DBID.

10. Create a temporary tablespace.

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550
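A sketch of step 10, creating the temporary tablespace on the clone (the file path and size are assumptions):

SQL> CREATE TEMPORARY TABLESPACE temp1 TEMPFILE 'C:\oracle\oradata\clone\temp01.dbf' SIZE 200M;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp1;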

Comment

step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak – Leave a comment, December 9, 2009

Oracle 10g – Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use the Flash Recovery Area and OMF.

I Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server, and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first before working on the production database.

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following commands to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database.
- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;

- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;

(Note: replace <ORACLE_HOME> with your Oracle home path.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create the spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
– Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup;

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
– Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup;

(Note: replace <ORACLE_HOME> with your Oracle home path.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4. On the STANDBY server, create all required directories for dump and archived log destinations: create the adump, bdump, cdump and udump directories and the archived log destination for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database: set ORACLE_HOME and ORACLE_SID.

11. Start up the STANDBY database in NOMOUNT and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
– Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
– Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;

(Note: replace <ORACLE_HOME> with your Oracle home path.)

12. Start redo apply.
1) On the STANDBY database, start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


DRSYS 10092544

EXAMPLE 155779072

ODM 9699328

SYSTEM 414908416

TOOLS 6291456

UNDOTBS1 9814016

XDB 39714816

8 rows selected

SQL> select * from sm$ts_free;

TABLESPACE_NAME                     BYTES
------------------------------ ----------

CWMLITE 11141120

DRSYS 10813440

EXAMPLE 131072

INDX 26148864

ODM 11206656

SYSTEM 4456448

TOOLS 4128768

UNDOTBS1 199753728

USERS 26148864

XDB 196608

10 rows selected

SQLgt ho LSNRCTL

LSNRCTLgt start

Starting tnslsnr please waithellip

Failed to open service ltOracleoracleTNSListenergt error 1060

TNSLSNR for 32-bit Windows Version 92010 ndash Production

System parameter file is Coracleora92networkadminlistenerora

Log messages written to Coracleora92networkloglistenerlog

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220000

Uptime 0 days 0 hr 0 min 16 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File Coracleora92networkadminlistenerora

Listener Log File Coracleora92networkloglistenerlog

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summaryhellip

Service ldquoTESTrdquo has 1 instance(s)

Instance ldquoTESTrdquo status UNKNOWN has 1 handler(s) for this servicehellip

The command completed successfully

LSNRCTL> stop

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

LSNRCTL> start

Starting tnslsnr: please wait...

TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production

System parameter file is C:\oracle\ora92\network\admin\listener.ora

Log messages written to C:\oracle\ora92\network\log\listener.log

Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

------------------------

Alias                     LISTENER

Version                   TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production

Start Date                22-AUG-2009 22:00:48

Uptime                    0 days 0 hr. 0 min. 0 sec

Trace Level               off

Security                  OFF

SNMP                      OFF

Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora

Listener Log File         C:\oracle\ora92\network\log\listener.log

Listening Endpoints Summary...

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summary...

Service "TEST" has 1 instance(s).

Instance "TEST", status UNKNOWN, has 1 handler(s) for this service...

The command completed successfully

LSNRCTL> exit

SQL> shut immediate

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL> exit

Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production

With the Partitioning, OLAP and Oracle Data Mining options

JServer Release 9.2.0.1.0 - Production

C:\Documents and Settings\Administrator>lsnrctl stop

LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 22-AUG-2009 22:03:14

Copyright (c) 1991, 2002, Oracle Corporation.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

C:\Documents and Settings\Administrator>oradim -delete -sid test

Step 3

Install ORACLE 10g Software in different Home

Starting the DB with 10g instance and upgradation Process
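On Windows, a service for the SID must exist under the new 10.1.0 home before the instance can be started from it. A minimal sketch, assuming the SID remains TEST (the internal password is a placeholder); run it with the environment pointing at the 10g home so the service is registered against the new binaries:

C:\Documents and Settings\Administrator>oradim -new -sid TEST -intpwd <password> -startmode manual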

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649';

File created

SQL> shut immediate

ORA-01507 database not mounted

ORACLE instance shut down

SQL> startup upgrade

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

ORA-01990 error opening password file (create password file)

SQL> conn / as sysdba

Connected

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace creation script, as shown below)

create tablespace SYSAUX datafile 'sysaux01.dbf'

size 70M reuse

extent management local

segment space management auto

online

Tablespace created

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database server version is not correct for this script

DOCgt Shutdown ABORT and use a different script or a different server

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database has not been opened for UPGRADE

DOCgt

DOCgt Perform a ldquoSHUTDOWN ABORTrdquo and

DOCgt restart using UPGRADE

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statements will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the SYSAUX tablespace does not exist or is not

DOCgt ONLINE for READ WRITE PERMANENT EXTENT MANAGEMENT LOCAL and

DOCgt SEGMENT SPACE MANAGEMENT AUTO

DOCgt

DOCgt The SYSAUX tablespace is used in 101 to consolidate data from

DOCgt a number of tablespaces that were separate in prior releases

DOCgt Consult the Oracle Database Upgrade Guide for sizing estimates

DOCgt

DOCgt Create the SYSAUX tablespace for example

DOCgt

DOCgt create tablespace SYSAUX datafile lsquosysaux01dbfrsquo

DOCgt size 70M reuse

DOCgt extent management local

DOCgt segment space management auto

DOCgt online

DOCgt

DOCgt Then rerun the u0902000sql script

DOCgt

DOCgt

DOCgt

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered

Session altered

The script's run time depends on the size of the database...

All packages/scripts/synonyms will be upgraded.

At the end it will show messages such as the following:

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

1 row selected

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP DBUPG_END 2009-08-22 225909

1 row selected

SQL> shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQL> startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)

----------

776

1 row selected
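Before recompiling, it can be useful to see where the invalid objects are concentrated. A simple query against the standard dictionary view, for example:

SQL> select owner, object_type, count(*)
  2  from dba_objects
  3  where status = 'INVALID'
  4  group by owner, object_type
  5  order by owner, object_type;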

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PLSQL procedure successfully completed

Oracle Database 10.1 Upgrade Status Tool   22-AUG-2009 11:18:36

--> Oracle Database Catalog Views   Normal successful completion

--> Oracle Database Packages and Types   Normal successful completion

--> JServer JAVA Virtual Machine   Normal successful completion

--> Oracle XDK   Normal successful completion

--> Oracle Database Java Packages   Normal successful completion

--> Oracle XML Database   Normal successful completion

--> Oracle Workspace Manager   Normal successful completion

--> Oracle Data Mining   Normal successful completion

--> OLAP Analytic Workspace   Normal successful completion

--> OLAP Catalog   Normal successful completion

--> Oracle OLAP API   Normal successful completion

--> Oracle interMedia   Normal successful completion

--> Spatial   Normal successful completion

--> Oracle Text   Normal successful completion

--> Oracle Ultra Search   Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_BGN 2009-08-22 231907

1 row selected

PLSQL procedure successfully completed

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_END 2009-08-22 232013

1 row selected

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)

----------

0

1 row selected

SQL> select * from v$version;

BANNER

----------------------------------------------------------------

Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod

PL/SQL Release 10.1.0.2.0 - Production

CORE 10.1.0.2.0 Production

TNS for 32-bit Windows: Version 10.1.0.2.0 - Production

NLSRTL Version 10.1.0.2.0 - Production

5 rows selected

Check the database to confirm that everything is working fine.
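Once you are satisfied that the upgraded database is healthy, the COMPATIBLE parameter is normally raised so that 10g features become available. A sketch of that final step (do this only when you are certain, because after COMPATIBLE is raised the database can no longer be downgraded to 9.2):

SQL> alter system set compatible='10.1.0' scope=spfile;
SQL> shutdown immediate
SQL> startup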


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host, by Deepak, February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink ID 732624.1

Hi,

Just wanted to share this topic.

How do you duplicate a database without connecting to the target database, using backups taken by RMAN, on an alternate host?

Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as on production>

Create an init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) Startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backup piece of the controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure the backup pieces are in the same location as they were on the production db. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged using the command RMAN> catalog start with '<path where backup pieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below (see the consolidated sketch after these steps):

run {
set newname for datafile 1 to '<newLocation>/system.dbf';
set newname for datafile 2 to '<newLocation>/undotbs.dbf';
...
restore database;
switch datafile all;
}
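Putting the steps together, a minimal sketch of the whole restore on the alternate host might look like this (the SID, paths and backup piece names are placeholders, not values from the original note):

export ORACLE_SID=PROD
rman target /
RMAN> startup nomount pfile='/u01/app/oracle/initPROD.ora';
RMAN> restore controlfile from '/backup/ctl_prod_piece.bkp';
RMAN> alter database mount;
RMAN> catalog start with '/backup/';
RMAN> run {
# one SET NEWNAME per datafile that has to move
set newname for datafile 1 to '/u02/oradata/PROD/system.dbf';
set newname for datafile 2 to '/u02/oradata/PROD/undotbs.dbf';
restore database;
switch datafile all;
recover database;
}
RMAN> alter database open resetlogs;

The RECOVER step will stop with an error once it runs out of cataloged archived logs, which is expected in this scenario; alternatively use SET UNTIL SEQUENCE/TIME inside the RUN block to stop at a known point before opening with RESETLOGS.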


Features introduced in the various Oracle server releases

Filed under: Features Of Various release of Oracle Database, by Deepak, February 2, 2010

Features introduced in the various server releases (originally submitted by admin on Sun, 2005-10-30 14:02)

This document summarizes the differences between Oracle Server releases

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) - September 2005

Transparent Data Encryption Async commits CONNECT ROLE can not only connect Passwords for DB Links are encrypted New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (10.1.0)

Grid computing ndash an extension of the clustering feature (Real Application Clusters) Manageability improvements (self-tuning features)

Performance and scalability improvements Automated Storage Management (ASM) Automatic Workload Repository (AWR) Automatic Database Diagnostic Monitor (ADDM) Flashback operations available on row transaction table or database level Ability to UNDROP a table from a recycle bin Ability to rename tablespaces Ability to transport tablespaces across machine types (Eg Windows to Unix) New lsquodrop databasersquo statement New database scheduler ndash DBMS_SCHEDULER DBMS_FILE_TRANSFER Package Support for bigfile tablespaces that is up to 8 Exabytes in size Data Pump ndash faster data movement with expdp and impdp

Oracle 9i Release 2 (9.2.0)

Locally Managed SYSTEM tablespaces Oracle Streams ndash new data sharingreplication feature (can potentially replace Oracle

Advance Replication and Standby Databases) XML DB (Oracle is now a standards compliant XML database) Data segment compression (compress keys in tables ndash only when loading data) Cluster file system for Windows and Linux (raw devices are no longer required) Create logical standby databases with Data Guard Java JDK 13 used inside the database (JVM) Oracle Data Guard Enhancements (SQL Apply mode ndash logical copy of primary database

automatic failover Security Improvements ndash Default Install Accounts locked VPD on synonyms AES

Migrate Users to Directory

Oracle 9i Release 1 (9.0.1) - June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU) Using SMU Oracle will create itrsquos own ldquoRollback Segmentsrdquo and size them automatically without any DBA involvement

Flashback query (dbms_flashbackenable) ndash one can query data as it looked at some point in the past This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore

Use Oracle Ultra Search for searching databases file systems etc The UltraSearch crawler fetch data and hand it to Oracle Text to be indexed

Oracle Nameserver is still available but deprecate in favour of LDAP Naming (using the Oracle Internet Directory Server) A nameserver proxy is provided for backwards compatibility as pre-8i client cannot resolve names from an LDAP server

Oracle Parallel Serverrsquos (OPS) scalability was improved ndash now called Real Application Clusters (RAC) Full Cache Fusion implemented Any application can scale in a database cluster Applications doesnrsquot need to be cluster aware anymore

The Oracle Standby DB feature renamed to Oracle Data Guard New Logical Standby databases replay SQL on standby site allowing the database to be used for normal read write operations The Data Guard Broker allows single step fail-over when disaster strikes

Scrolling cursor support Oracle9i allows fetching backwards in a result set Dynamic Memory Management ndash Buffer Pools and shared pool can be resized on-the-fly

This eliminates the need to restart the database each time parameter changes were made On-line table and index reorganization VI (Virtual Interface) protocol support an alternative to TCPIP available for use with

Oracle Net (SQLNet) VI provides fast communications between components in a cluster

Build in XML Developers Kit (XDK) New data types for XML (XMLType) URIrsquos etc XML integrated with AQ

Cost Based Optimizer now also consider memory and CPU not only disk access cost as before

PLSQL programs can be natively compiled to binaries Deep data protection ndash fine grained security and auditing Put security on DB level SQL

access do not mean unrestricted access Resumable backups and statements ndash suspend statement instead of rolling back

immediately List Partitioning ndash partitioning on a list of values ETL (eXtract transformation load) Operations ndash with external tables and pipelining OLAP ndash Express functionality included in the DB Data Mining ndash Oracle Darwinrsquos features included in the DB

Oracle 8i (8.1.7)

Static HTTP server included (Apache) JVM Accelerator to improve performance of Java code Java Server Pages (JSP) engine MemStat ndash A new utility for analyzing Java Memory footprints OIS ndash Oracle Integration Server introduced PLSQL Gateway introduced for deploying PLSQL based solutions on the Web Enterprise Manager Enhancements ndash including new HTML based reporting and

Advanced Replication functionality included New Database Character Set Migration utility included

Oracle 8i (8.1.6)

PLSQL Server Pages (PSPrsquos) DBA Studio Introduced Statspack New SQL Functions (rank moving average) ALTER FREELISTS command (previously done by DROPCREATE TABLE) Checksums always on for SYSTEM tablespace allowing many possible corruptions to be

fixed before writing to disk

XML Parser for Java New PLSQL encryptdecrypt package introduced User and Schemas separated Numerous Performance Enhancements

Oracle 8i (8.1.5)

Fast Start recovery ndash Checkpoint rate auto-adjusted to meet roll forward criteria Reorganize indexesindex only tables which users accessing data ndash Online index rebuilds Log Miner introduced ndash Allows on-line or archived redo logs to be viewed via SQL OPS Cache Fusion introduced avoiding disk IO during cross-node communication Advanced Queueing improvements (security performance OO4O support User Security Improvements ndash more centralisation single enterprise user usersroles

across multiple databases Virtual private database JAVA stored procedures (Oracle Java VM) Oracle iFS Resource Management using priorities ndash resource classes Hash and Composite partitioned table types SQLLoader direct load API Copy optimizer statistics across databases to ensure same access paths across different

environments Standby Database ndash Auto shipping and application of redo logs Read Only queries on

standby database allowed Enterprise Manager v2 delivered NLS ndash Euro Symbol supported Analyze tables in parallel Temporary tables supported Net8 support for SSL HTTP HOP protocols Transportable tablespaces between databases Locally managed tablespaces ndash automatic sizing of extents elimination of tablespace

fragmentation tablespace information managed in tablespace (ie moved from data dictionary) improving tablespace reliability

Drop Column on table (Finally ) DBMS_DEBUG PLSQL package DBMS_SQL replaced by new EXECUTE

IMMEDIATE statement Progress Monitor to track long running DML DDL Functional Indexes ndash NLS case insensitive descending

Oracle 8.0 - June 1997

Object Relational database Object Types (not just date character number as in v7 SQL3 standard Call external procedures LOB gt1 per table

Partitioned Tables and Indexes exportimport individual partitions partitions in multiple tablespaces Onlineoffline backuprecover individual partitions mergebalance partitions Advanced Queuing for message handling Many performance improvements to SQLPLSQLOCI making more efficient use of

CPUMemory V7 limits extended (eg 1000 columnstable 4000 bytes VARCHAR2) Parallel DML statements Connection Pooling ( uses the physical connection for idle users and transparently re-

establishes the connection when needed) to support more concurrent users Improved ldquoSTARrdquo Query optimizer Integrated Distributed Lock Manager in Oracle PS (as opposed to Operating system DLM

in v7) Performance improvements in OPS ndash global V$ views introduced across all instances

transparent failover to a new node Data Cartridges introduced on database (eg image video context time spatial) BackupRecovery improvements ndash Tablespace point in time recovery incremental

backups parallel backuprecovery Recovery manager introduced Security Server introduced for central user administration User password expiry

password profiles allow custom password scheme Privileged database links (no need for password to be stored)

Fast Refresh for complex snapshots parallel replication PLSQL replication code moved in to Oracle kernel Replication manager introduced

Index Organized tables Deferred integrity constraint checking (deferred until end of transaction instead of end of

statement) SQLNet replaced by Net8 Reverse Key indexes Any VIEW updateable New ROWID format

Oracle 7.3

Partitioned Views Bitmapped Indexes Asynchronous read ahead for table scans Standby Database Deferred transaction recovery on instance startup Updatable Join Views (with restrictions) SQLDBA no longer shipped Index rebuilds db_verify introduced Context Option Spatial Data Option Tablespaces changes ndash Coalesce Temporary Permanent

Trigger compilation debug Unlimited extents on STORAGE clause Some initora parameters modifiable ndash TIMED_STATISTICS HASH Joins Antijoins Histograms Dependencies Oracle Trace Advanced Replication Object Groups PLSQL ndash UTL_FILE

Oracle 7.2

Resizable autoextend data files Shrink Rollback Segments manually Create table index UNRECOVERABLE Subquery in FROM clause PLSQL wrapper PLSQL Cursor variables Checksums ndash DB_BLOCK_CHECKSUM LOG_BLOCK_CHECKSUM Parallel create table Job Queues ndash DBMS_JOB DBMS_SPACE DBMS Application Info Sorting Improvements ndash SORT_DIRECT_WRITES

Oracle 7.1

ANSIISO SQL92 Entry Level Advanced Replication ndash Symmetric Data replication Snapshot Refresh Groups Parallel Recovery Dynamic SQL ndash DBMS_SQL Parallel Query Options ndash query index creation data loading Server Manager introduced Read Only tablespaces

Oracle 7.0 - June 1992

Database Integrity Constraints (primary foreign keys check constraints default values) Stored procedures and functions procedure packages Database Triggers View compilation User defined SQL functions Role based security Multiple Redo members ndash mirrored online redo log files Resource Limits ndash Profiles

Much enhanced Auditing Enhanced Distributed database functionality ndash INSERTS UPDATESDELETES 2PC Incomplete database recovery (eg SCN) Cost based optimiser TRUNCATE tables Datatype changes (ie VARCHAR2 CHAR VARCHAR) SQLNet v2 MTS Checkpoint process Data replication ndash Snapshots

Oracle 6.2

Oracle Parallel Server

Oracle 6 - July 1988

Row-level locking On-line database backups PLSQL in the database

Oracle 5.1

Distributed queries

Oracle 5.0 - 1986

Supporting for the Client-Server model ndash PCrsquos can access the DB on remote host

Oracle 4 - 1984

Read consistency

Oracle 3 - 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)

Nonblocking queries (no more read locks) Re-written in the C Programming Language

Oracle 2 - 1979

First public release Basic SQL functionality queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh, by Deepak, December 15, 2009

Steps for schema refresh

Schema refresh in oracle 9i

Now we are going to refresh SH schema

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file (a sketch of one way to generate such a spool follows the list below).

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total number of objects from the above query.

3. Write dynamic queries as below:

4. select 'grant ' || privilege || ' to sh' from session_privs;

5. select 'grant ' || role || ' to sh' from session_roles;

6. Query the default tablespace and its size:

7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH'

group by tablespace_name;
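As an illustration, the spool of grants could be generated along these lines, using the DBA views as an alternative to the session_privs/session_roles queries above (run as a DBA user; the spool file name is arbitrary):

SQL> set head off pages 0 feedback off
SQL> spool sh_grants.sql
SQL> select 'grant ' || privilege || ' to SH;' from dba_sys_privs where grantee = 'SH';
SQL> select 'grant ' || granted_role || ' to SH;' from dba_role_privs where grantee = 'SH';
SQL> spool off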

Export the 'SH' schema:

exp 'username/password' file='location/sh_bkp.dmp' log='location/sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp 'username/password' file='location/sh_bkp.dmp' log='location/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect the SH user and check for the import data

Schema refresh by dropping objects and truncating objects

Export the lsquoshrsquo schema

Take the schema full export as show above

Drop all the objects in the 'SH' schema.

To drop all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off

SQL> spool drop_tables.sql

SQL> select 'drop table '||table_name||' cascade constraints purge' from user_tables;

SQL> spool off

SQL> set head off

SQL> spool drop_other_objects.sql

SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;

SQL> spool off

Now run the script all the objects will be dropped

Importing the 'SH' schema:

imp 'username/password' file='location/sh_bkp.dmp' log='location/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect the SH user and check for the import data

To enable constraints use the query below

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS

WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema.

To truncate all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off

SQL> spool truncate_tables.sql

SQL> select 'truncate table '||table_name from user_tables;

SQL> spool off

SQL> set head off

SQL> spool truncate_other_objects.sql

SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;

SQL> spool off

Now run the script all the objects will be truncated

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the referencing (foreign key) constraints and disable them. Spool the output of the query below and run the script.

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS

where constraint_type='R'

and r_constraint_name in (select constraint_name from all_constraints

where table_name='TABLE_NAME');

Importing the 'SH' schema:

imp 'username/password' file='location/sh_bkp.dmp' log='location/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect the SH user and check for the import data

Schema refresh in oracle 10g

Here we can use Datapump

Exporting the SH schema through Datapump

expdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the lsquoSHrsquo user

Query the default tablespace and verify the space in the tablespace and drop the user

SQL> drop user SH cascade;

Importing the SH schema through datapump

impdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the REMAP_SCHEMA option.
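For example, a sketch of importing the SH dump into a different target schema (SH_TEST here is just a placeholder name):

impdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=SH:SH_TEST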

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING, by Deepak, December 15, 2009

CRON JOB SCHEDULING - IN UNIX

To run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are setup when the package is installed via the creation of some special directories

/etc/cron.d, /etc/cron.daily, /etc/cron.hourly, /etc/cron.monthly, /etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0 - 7)
|     |     |     +------- Month (1 - 12)
|     |     +--------- Day of month (1 - 31)
|     +----------- Hour (0 - 23)
+------------- Min (0 - 59)

(Each of the first five fields contains only numbers, however they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary) When you save the file and quit your editor it will be installed into the system unless it is found to contain errors

If you wish to change the editor used to edit the file set the EDITOR environmental variable like this

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this then you should cause it to be redirected, as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched; if you use the following line you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly
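Many cron implementations (Vixie cron and its derivatives, for instance) also accept step values written with a slash, which is convenient for very frequent jobs; the script path below is just a placeholder:

# Run the cleanup script every 15 minutes
*/15 * * * * /usr/local/bin/cleanup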

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registrydatabase

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule them, the job won't run. So edit the scheduled task and enter the new password.
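On newer Windows versions the same schedule can also be created from the command line with schtasks; a rough sketch (task name, script path and time are placeholders):

schtasks /create /tn "OracleColdBackup" /tr "D:\scripts\cold_bkp.bat" /sc daily /st 23:00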


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g, by Deepak, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I Before Switchover

1 As I always recommend test the Switchover first on your testing systems before working on Production

2 Verify the primary database instance is open and the standby database instance is mounted

3 Verify there are no active users connected to the databases

4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following command on the Primary database and the Standby database to find out: SQL> select sequence#, applied from v$archived_log; Perform SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received use Real-time apply

II Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby db STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
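Before step 1 and after step 5, it is worth checking the role and switchover readiness of each database, for example:

SQL> select database_role, switchover_status from v$database;

On the primary, SWITCHOVER_STATUS should normally show TO STANDBY (or SESSIONS ACTIVE) before the switchover is attempted.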


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump, by Deepak, December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT key word Once a table column is marked with this keyword encryption and decryption are performed automatically without the need for any further user or application intervention The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column When an authorized user inserts new data into such a column TDE column encryption encrypts this data prior to storing it in the database Conversely when the user selects the column from the database TDE column encryption transparently decrypts this data back to its original clear text

format Column data encrypted using TDE remains protected while it resides in the database However the protection offered by TDE does not extend beyond the database and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it using a dump file encryption key derived from a userprovided password before it is written to the export dump file set Column data encrypted using Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set Whenever Oracle Data Pump unloads or loads tables containing encrypted columns it uses the external tables mechanism instead of the direct path mechanism The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd

Export Release 102040 ndash Production on Monday 09 July 2009

82123

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39001 invalid argument value

ORA-39180 unable to encrypt ENCRYPTION_PASSWORD

ORA-28365 wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports as used in the previous

examples If a schema tablespace or full mode export is performed then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter

There is however one exception transportable tablespace export mode does not support

encrypted columns An attempt to perform an export using this mode when the tablespace

contains tables with encrypted columns yields the following error

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export Release 102040 ndash Production on Wednesday 09 July 2009

84843

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime dp directory=dpump_dir

dumpfile=emp tables=emp

Estimate in progress using BLOCKS methodhellip

Processing object type TABLE_EXPORTTABLETABLE_DATA

Total estimation using BLOCKS method 16 KB

Processing object type TABLE_EXPORTTABLETABLE

exported ldquoDPrdquordquoEMPrdquo 625 KB 3 rows

ORA-39173 Encrypted data has been stored unencrypted in dump file

set

Master table ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime successfully loadedunloaded

Dump file set for DPSYS_EXPORT_TABLE_01 is

adejkaloger_lx9oracleworkempdmp

Job ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime completed with 1 error(s) at 084857

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp

TRANSPORT_TABLESPACES=dp

Export Release 102040 ndash Production on Thursday 09 July 2009

85507

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime system

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123 Data Pump transportable tablespace job aborted

ORA-29341 The transportable set is not self-contained

Job ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime stopped due to fatal error

at 085525

The ORA-29341 error in the previous example is not very informative If the same transportable

tablespace export is executed using Oracle Database 11g release 1 that version does a better job

at pinpointing the problem via the information in the ORA-39929 error

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
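In practice this means the wallet has to be opened on the target database before running impdp, for example (the wallet password is a placeholder):

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";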

If the encryption attributes for all columns do not exactly match between the source and target tables then an ORA-26033 exception is raised when you try to import the export dump file set In the example of the DPEMP table the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed For example assume in the following example that the DPEMP table on the target system has been created exactly as it is on the source system except that the

ENCRYPT attribute has not been assigned to the SALARY column The output and resulting error messages would look as follows

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir dumpfile=dp.dmp

TRANSPORT_TABLESPACES=dp

Export Release 111070 ndash Production on Thursday 09 July 2009

90900

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 11g Enterprise Edition Release

111070 ndash Production

With the Partitioning Data Mining and Real Application Testing

Options Starting ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime system

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123 Data Pump transportable tablespace job aborted

ORA-39187 The transportable set is not self-contained violation list

is ORA-39929 Table DPEMP in tablespace DP has encrypted columns which

are not supported

Job ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime stopped due to fatal error

at 090921

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it

into the connected database instance There are no export dump files involved in a network

mode import and therefore there is no re-encrypting of TDE column data Thus the use of the

ENCRYPTION_PASWORD parameter is prohibited in network mode imports as shown in the

following example

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote

TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import Release 102040 ndash Production on Friday 09 July 2009

110057

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39005 inconsistent arguments

ORA-39115 ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import Release 102040 ndash Production on Thursday 09 July 2009

105540

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release 102040 -

Production

With the Partitioning Data Mining and Real Application Testing options

Master table ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime successfully loadedunloaded

Starting ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime dp directory=dpump_dir

dumpfile=empdmp tables=emp encryption_password=

table_exists_action=append

Processing object type TABLE_EXPORTTABLETABLE

ORA-39152 Table ldquoDPrdquordquoEMPrdquo exists Data will be appended to existing

table but all dependent metadata will be skipped due to

table_exists_action of append

Processing object type TABLE_EXPORTTABLETABLE_DATA

ORA-31693 Table data object ldquoDPrdquordquoEMPrdquo failed to loadunload and is being

skipped due to error

ORA-02354 error in exportingimporting data

ORA-26033 column ldquoEMPrdquoSALARY encryption properties differ for source or

target table

Job ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime completed with 2 error(s) at 105548

Oracle White PaperEncryption with Oracle Data Pump

By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes

encrypted column data the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed The following are cases in which the encryption password and Oracle Wallet are not needed

A full metadata-only import A schema-mode import in which the referenced schemas do not include tables with

encrypted columns A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as the one produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (

empid,

empname,

salary ENCRYPT IDENTIFIED BY "column_pwd")

ORGANIZATION EXTERNAL

(

TYPE ORACLE_DATAPUMP

DEFAULT DIRECTORY dpump_dir

LOCATION ('xemp.dmp')

)

REJECT LIMIT UNLIMITED

AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1 The SQL engine selects the data for the table DPEMP from the database If any columns in the table are marked as encrypted as the salary column is for DPEMP then TDE decrypts the column data as part of the select operation

2 The SQL engine then inserts the data which is in clear text format into the DPXEMP table If any columns in the external table are marked as encrypted as one of its columns is then TDE encrypts this column data as part of the insert operation

3 Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file The data in an external table can be written only once when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed However the data in the external table can be selected any number of times using a simple SQL SELECT statement The steps involved in selecting data with encrypted columns from an external table are as follows

1 The SQL engine initiates a select operation Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is called to read the data from the external export file

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1 the ability to encrypt the entire export dump file set is introduced and with it several new encrypted-related parameters A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump but instead means that the entire export dump file set will be encrypted To encrypt only TDE columns using Oracle Data Pump 11g it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY So the 10g example previously shown becomes the following in 11g

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g, by Deepak, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides regular operating system command-line mode there is now a very powerful interactive Command-line mode which allows the user to monitor and control Data Pump Export and Import operations Changing from Original ExportImport to Oracle Data Pump Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission on a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
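For instance, a minimal sketch of reusing the full export dump file from the example above with different filters on import (the SCHEMAS and EXCLUDE values are illustrative):

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp SCHEMAS=scott EXCLUDE=INDEX EXCLUDE=STATISTICS

The same dba.dmp is read again, but only scott's objects are loaded, and their indexes and optimizer statistics are skipped.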

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export.

Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
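A minimal sketch of changing the degree of parallelism from the interactive command line, assuming the job was started with JOB_NAME=hr as in the example further below (the new value of 8 is illustrative):

$ expdp username/password ATTACH=hr
Export> PARALLEL=8
Export> STATUS
Export> CONTINUE_CLIENT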

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from the one it was originally exported from. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, such as from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
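A minimal sketch of a network-mode export, assuming a database link named remote_db already exists and points at the source instance (the link, directory and file names are illustrative):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_net.dmp NETWORK_LINK=remote_db TABLES=hr.employees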

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak – Leave a comment, December 10, 2009

Clone database using RMAN

Target db: test

Clone db: clone

In the target database:

1. Take a full backup using RMAN.

SQL> archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 – Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In the clone database:

1. Create a service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters (a concrete sketch follows):

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
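A minimal sketch of these two entries, assuming the target's files live under C:\oracle\oradata\test and the clone's under C:\oracle\oradata\clone (both paths are illustrative):

db_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')
log_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')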

3. Start the listener using the lsnrctl command and then start the clone DB in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test   (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'   (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running…

SQL> select name from v$database

select name from v$database

ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount

alter database mount

ERROR at line 1:
ORA-01100: database already mounted

8. The duplicate will run for a while; then exit from RMAN and open the database with RESETLOGS.

SQL> alter database open resetlogs;

Database altered.

9. Check the database name and DBID.

10. Create a temporary tablespace for the clone database (a sketch follows after the verification output below).

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550
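For step 10, a minimal sketch of creating and assigning a default temporary tablespace, assuming the clone's files live under C:\oracle\oradata\clone (the path and size are illustrative):

SQL> -- create a temporary tablespace for the clone, then make it the database default
SQL> create temporary tablespace temp01 tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100m;
SQL> alter database default temporary tablespace temp01;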


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak – Leave a comment, December 9, 2009

Oracle 10g – Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips for the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use a Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:

SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.

1) To check if a password file already exists, run the following command:

SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:

- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.

1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:

SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:

SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, so I created 3 STANDBY redo log groups using the following commands:

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:

SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:

- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;

- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;

(Note: specify your Oracle home path in place of <ORACLE_HOME>.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.

- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;

(Note: specify your Oracle home path in place of <ORACLE_HOME>.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server. On the PRIMARY DB:

SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):

1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:

SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all of the parameter entries are listed here.)

4. On the STANDBY server, create all the required directories for the dump and archived log destinations: create the adump, bdump, cdump, udump and archived log destination directories for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory, then rename the password file.

7. For Windows, create a Windows-based service (optional):

$ oradim -NEW -SID STANDBY -STARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.

1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database:

Set up ORACLE_HOME and ORACLE_SID.

11. Start up (nomount) the STANDBY database and generate an spfile.

- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

(Note: specify your Oracle home path in place of <ORACLE_HOME>.)

12. Start Redo Apply.

1) On the STANDBY database, to start Redo Apply:

SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:

SQL> alter database recover managed standby database cancel;

13. Verify that the STANDBY database is performing properly.

1) On STANDBY, perform a query:

SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:

SQL> alter system switch logfile;

3) On STANDBY, verify that the archived redo log files were applied:

SQL> select sequence#, applied from v$archived_log order by sequence#;
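As an additional check, you can confirm that the managed recovery process (MRP) is running on the STANDBY; a minimal sketch using the standard v$managed_standby view:

SQL> -- an MRP0 row with status APPLYING_LOG or WAIT_FOR_LOG means redo apply is active
SQL> select process, status, sequence# from v$managed_standby;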

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:

SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:

$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:

RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2), to update/recreate the password file for the STANDBY database.


LSNRCTLgt start

Starting tnslsnr: please wait…

Failed to open service <OracleoracleTNSListener>, error 1060.

TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 – Production

System parameter file is C:\oracle\ora92\network\admin\listener.ora

Log messages written to C:\oracle\ora92\network\log\listener.log

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220000

Uptime 0 days 0 hr 0 min 16 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File Coracleora92networkadminlistenerora

Listener Log File Coracleora92networkloglistenerlog

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summaryhellip

Service "TEST" has 1 instance(s).

Instance "TEST", status UNKNOWN, has 1 handler(s) for this service…

The command completed successfully

LSNRCTLgt stop

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

LSNRCTLgt start

Starting tnslsnr please waithellip

TNSLSNR for 32-bit Windows Version 92010 ndash Production

System parameter file is Coracleora92networkadminlistenerora

Log messages written to Coracleora92networkloglistenerlog

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220048

Uptime 0 days 0 hr 0 min 0 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File Coracleora92networkadminlistenerora

Listener Log File Coracleora92networkloglistenerlog

Listening Endpoints Summaryhellip

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summaryhellip

Service "TEST" has 1 instance(s).

Instance "TEST", status UNKNOWN, has 1 handler(s) for this service…

The command completed successfully

LSNRCTLgt exit

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt exit

Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 – Production

With the Partitioning, OLAP and Oracle Data Mining options

JServer Release 9.2.0.1.0 – Production

C:\Documents and Settings\Administrator> lsnrctl stop

LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 – Production on 22-AUG-2009 22:03:14

Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

C:\Documents and Settings\Administrator> oradim -delete -sid test

Step 3

Install the Oracle 10g software in a different Oracle Home.

Start the database with the 10g instance and begin the upgrade process.

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649';

File created

SQLgt shut immediate

ORA-01507 database not mounted

ORACLE instance shut down

SQLgt startup upgrade

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

ORA-01990 error opening password file (create password file)

SQL> conn / as sysdba

Connected.

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace creation script, as shown below)

create tablespace SYSAUX datafile 'sysaux01.dbf'

size 70M reuse

extent management local

segment space management auto

online

Tablespace created

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOCgt

DOCgt

DOC> The following statement will cause an "ORA-01722: invalid number"

DOCgt error if the database server version is not correct for this script

DOCgt Shutdown ABORT and use a different script or a different server

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database has not been opened for UPGRADE

DOCgt

DOCgt Perform a ldquoSHUTDOWN ABORTrdquo and

DOCgt restart using UPGRADE

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOC> The following statements will cause an "ORA-01722: invalid number"

DOCgt error if the SYSAUX tablespace does not exist or is not

DOCgt ONLINE for READ WRITE PERMANENT EXTENT MANAGEMENT LOCAL and

DOCgt SEGMENT SPACE MANAGEMENT AUTO

DOCgt

DOCgt The SYSAUX tablespace is used in 101 to consolidate data from

DOCgt a number of tablespaces that were separate in prior releases

DOCgt Consult the Oracle Database Upgrade Guide for sizing estimates

DOCgt

DOCgt Create the SYSAUX tablespace for example

DOCgt

DOC> create tablespace SYSAUX datafile 'sysaux01.dbf'

DOCgt size 70M reuse

DOCgt extent management local

DOCgt segment space management auto

DOCgt online

DOCgt

DOCgt Then rerun the u0902000sql script

DOCgt

DOCgt

DOCgt

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered

Session altered

The script will run for a while, depending on the size of the database…

All packages, scripts and synonyms will be upgraded.

At the end it will show messages like the following:

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

1 row selected

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP DBUPG_END 2009-08-22 225909

1 row selected

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT()

mdashmdashmdash-

776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PL/SQL procedure successfully completed.

Oracle Database 10.1 Upgrade Status Tool 22-AUG-2009 11:18:36

--> Oracle Database Catalog Views Normal successful completion
--> Oracle Database Packages and Types Normal successful completion
--> JServer JAVA Virtual Machine Normal successful completion
--> Oracle XDK Normal successful completion
--> Oracle Database Java Packages Normal successful completion
--> Oracle XML Database Normal successful completion
--> Oracle Workspace Manager Normal successful completion
--> Oracle Data Mining Normal successful completion
--> OLAP Analytic Workspace Normal successful completion
--> OLAP Catalog Normal successful completion
--> Oracle OLAP API Normal successful completion
--> Oracle interMedia Normal successful completion
--> Spatial Normal successful completion
--> Oracle Text Normal successful completion
--> Oracle Ultra Search Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_BGN 2009-08-22 231907

1 row selected

PLSQL procedure successfully completed

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_END 2009-08-22 232013

1 row selected

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects where status='INVALID';

COUNT()

mdashmdashmdash-

0

1 row selected

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 – Prod
PL/SQL Release 10.1.0.2.0 – Production
CORE 10.1.0.2.0 Production
TNS for 32-bit Windows: Version 10.1.0.2.0 – Production
NLSRTL Version 10.1.0.2.0 – Production

5 rows selected.

Check the database to confirm that everything is working fine.
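A couple of quick post-upgrade checks (a minimal sketch using standard dictionary views):

SQL> -- every component registered in the database should be VALID at 10.1.0.2.0
SQL> select comp_name, version, status from dba_registry;
SQL> -- there should be no remaining invalid objects after utlrp.sql
SQL> select count(*) from dba_objects where status='INVALID';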


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host by Deepak – 3 Comments, February 24, 2010

Duplicate Database With RMAN Without Connecting To The Target Database – from Metalink note ID 732624.1

hi

Just wanted to share this topic

How to duplicate a database without connecting to the target database, using backups taken from RMAN, on an alternate host.

Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as on production>

Create an init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backup piece of the controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure that the backup pieces are in the same location as they were on the production db. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged using the command:

RMAN> catalog start with '<path where the backup pieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:

run {
  set newname for datafile 1 to 'newLocation/system.dbf';
  set newname for datafile 2 to 'newLocation/undotbs.dbf';
  ...
  restore database;
  switch datafile all;
}


Features introduced in the various Oracle server releases

Filed under: Features Of Various release of Oracle Database by Deepak – Leave a comment, February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) – September 2005

Transparent Data Encryption. Async commits. The CONNECT role can now only connect. Passwords for DB links are encrypted. New asmcmd utility for managing ASM storage.

Oracle 10g Release 1 (10.1.0)

Grid computing – an extension of the clustering feature (Real Application Clusters). Manageability improvements (self-tuning features).

Performance and scalability improvements. Automated Storage Management (ASM). Automatic Workload Repository (AWR). Automatic Database Diagnostic Monitor (ADDM). Flashback operations available at the row, transaction, table or database level. Ability to UNDROP a table from a recycle bin. Ability to rename tablespaces. Ability to transport tablespaces across machine types (e.g. Windows to Unix). New 'drop database' statement. New database scheduler – DBMS_SCHEDULER. DBMS_FILE_TRANSFER package. Support for bigfile tablespaces of up to 8 Exabytes in size. Data Pump – faster data movement with expdp and impdp.

Oracle 9i Release 2 (9.2.0)

Locally Managed SYSTEM tablespaces Oracle Streams ndash new data sharingreplication feature (can potentially replace Oracle

Advance Replication and Standby Databases) XML DB (Oracle is now a standards compliant XML database) Data segment compression (compress keys in tables ndash only when loading data) Cluster file system for Windows and Linux (raw devices are no longer required) Create logical standby databases with Data Guard Java JDK 13 used inside the database (JVM) Oracle Data Guard Enhancements (SQL Apply mode ndash logical copy of primary database

automatic failover Security Improvements ndash Default Install Accounts locked VPD on synonyms AES

Migrate Users to Directory

Oracle 9i Release 1 (9.0.1) – June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU) Using SMU Oracle will create itrsquos own ldquoRollback Segmentsrdquo and size them automatically without any DBA involvement

Flashback query (dbms_flashbackenable) ndash one can query data as it looked at some point in the past This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore

Use Oracle Ultra Search for searching databases file systems etc The UltraSearch crawler fetch data and hand it to Oracle Text to be indexed

Oracle Nameserver is still available but deprecate in favour of LDAP Naming (using the Oracle Internet Directory Server) A nameserver proxy is provided for backwards compatibility as pre-8i client cannot resolve names from an LDAP server

Oracle Parallel Serverrsquos (OPS) scalability was improved ndash now called Real Application Clusters (RAC) Full Cache Fusion implemented Any application can scale in a database cluster Applications doesnrsquot need to be cluster aware anymore

The Oracle Standby DB feature renamed to Oracle Data Guard New Logical Standby databases replay SQL on standby site allowing the database to be used for normal read write operations The Data Guard Broker allows single step fail-over when disaster strikes

Scrolling cursor support Oracle9i allows fetching backwards in a result set Dynamic Memory Management ndash Buffer Pools and shared pool can be resized on-the-fly

This eliminates the need to restart the database each time parameter changes were made On-line table and index reorganization VI (Virtual Interface) protocol support an alternative to TCPIP available for use with

Oracle Net (SQLNet) VI provides fast communications between components in a cluster

Build in XML Developers Kit (XDK) New data types for XML (XMLType) URIrsquos etc XML integrated with AQ

Cost Based Optimizer now also consider memory and CPU not only disk access cost as before

PLSQL programs can be natively compiled to binaries Deep data protection ndash fine grained security and auditing Put security on DB level SQL

access do not mean unrestricted access Resumable backups and statements ndash suspend statement instead of rolling back

immediately List Partitioning ndash partitioning on a list of values ETL (eXtract transformation load) Operations ndash with external tables and pipelining OLAP ndash Express functionality included in the DB Data Mining ndash Oracle Darwinrsquos features included in the DB

Oracle 8i (8.1.7)

Static HTTP server included (Apache) JVM Accelerator to improve performance of Java code Java Server Pages (JSP) engine MemStat ndash A new utility for analyzing Java Memory footprints OIS ndash Oracle Integration Server introduced PLSQL Gateway introduced for deploying PLSQL based solutions on the Web Enterprise Manager Enhancements ndash including new HTML based reporting and

Advanced Replication functionality included New Database Character Set Migration utility included

Oracle 8i (8.1.6)

PLSQL Server Pages (PSPrsquos) DBA Studio Introduced Statspack New SQL Functions (rank moving average) ALTER FREELISTS command (previously done by DROPCREATE TABLE) Checksums always on for SYSTEM tablespace allowing many possible corruptions to be

fixed before writing to disk

XML Parser for Java New PLSQL encryptdecrypt package introduced User and Schemas separated Numerous Performance Enhancements

Oracle 8i (8.1.5)

Fast Start recovery ndash Checkpoint rate auto-adjusted to meet roll forward criteria Reorganize indexesindex only tables which users accessing data ndash Online index rebuilds Log Miner introduced ndash Allows on-line or archived redo logs to be viewed via SQL OPS Cache Fusion introduced avoiding disk IO during cross-node communication Advanced Queueing improvements (security performance OO4O support User Security Improvements ndash more centralisation single enterprise user usersroles

across multiple databases Virtual private database JAVA stored procedures (Oracle Java VM) Oracle iFS Resource Management using priorities ndash resource classes Hash and Composite partitioned table types SQLLoader direct load API Copy optimizer statistics across databases to ensure same access paths across different

environments Standby Database ndash Auto shipping and application of redo logs Read Only queries on

standby database allowed Enterprise Manager v2 delivered NLS ndash Euro Symbol supported Analyze tables in parallel Temporary tables supported Net8 support for SSL HTTP HOP protocols Transportable tablespaces between databases Locally managed tablespaces ndash automatic sizing of extents elimination of tablespace

fragmentation tablespace information managed in tablespace (ie moved from data dictionary) improving tablespace reliability

Drop Column on table (Finally ) DBMS_DEBUG PLSQL package DBMS_SQL replaced by new EXECUTE

IMMEDIATE statement Progress Monitor to track long running DML DDL Functional Indexes ndash NLS case insensitive descending

Oracle 8.0 – June 1997

Object Relational database Object Types (not just date character number as in v7 SQL3 standard Call external procedures LOB gt1 per table

Partitioned Tables and Indexes exportimport individual partitions partitions in multiple tablespaces Onlineoffline backuprecover individual partitions mergebalance partitions Advanced Queuing for message handling Many performance improvements to SQLPLSQLOCI making more efficient use of

CPUMemory V7 limits extended (eg 1000 columnstable 4000 bytes VARCHAR2) Parallel DML statements Connection Pooling ( uses the physical connection for idle users and transparently re-

establishes the connection when needed) to support more concurrent users Improved ldquoSTARrdquo Query optimizer Integrated Distributed Lock Manager in Oracle PS (as opposed to Operating system DLM

in v7) Performance improvements in OPS ndash global V$ views introduced across all instances

transparent failover to a new node Data Cartridges introduced on database (eg image video context time spatial) BackupRecovery improvements ndash Tablespace point in time recovery incremental

backups parallel backuprecovery Recovery manager introduced Security Server introduced for central user administration User password expiry

password profiles allow custom password scheme Privileged database links (no need for password to be stored)

Fast Refresh for complex snapshots parallel replication PLSQL replication code moved in to Oracle kernel Replication manager introduced

Index Organized tables Deferred integrity constraint checking (deferred until end of transaction instead of end of

statement) SQLNet replaced by Net8 Reverse Key indexes Any VIEW updateable New ROWID format

Oracle 7.3

Partitioned Views Bitmapped Indexes Asynchronous read ahead for table scans Standby Database Deferred transaction recovery on instance startup Updatable Join Views (with restrictions) SQLDBA no longer shipped Index rebuilds db_verify introduced Context Option Spatial Data Option Tablespaces changes ndash Coalesce Temporary Permanent

Trigger compilation debug Unlimited extents on STORAGE clause Some initora parameters modifiable ndash TIMED_STATISTICS HASH Joins Antijoins Histograms Dependencies Oracle Trace Advanced Replication Object Groups PLSQL ndash UTL_FILE

Oracle 7.2

Resizable autoextend data files Shrink Rollback Segments manually Create table index UNRECOVERABLE Subquery in FROM clause PLSQL wrapper PLSQL Cursor variables Checksums ndash DB_BLOCK_CHECKSUM LOG_BLOCK_CHECKSUM Parallel create table Job Queues ndash DBMS_JOB DBMS_SPACE DBMS Application Info Sorting Improvements ndash SORT_DIRECT_WRITES

Oracle 7.1

ANSIISO SQL92 Entry Level Advanced Replication ndash Symmetric Data replication Snapshot Refresh Groups Parallel Recovery Dynamic SQL ndash DBMS_SQL Parallel Query Options ndash query index creation data loading Server Manager introduced Read Only tablespaces

Oracle 7.0 – June 1992

Database Integrity Constraints (primary foreign keys check constraints default values) Stored procedures and functions procedure packages Database Triggers View compilation User defined SQL functions Role based security Multiple Redo members ndash mirrored online redo log files Resource Limits ndash Profiles

Much enhanced Auditing Enhanced Distributed database functionality ndash INSERTS UPDATESDELETES 2PC Incomplete database recovery (eg SCN) Cost based optimiser TRUNCATE tables Datatype changes (ie VARCHAR2 CHAR VARCHAR) SQLNet v2 MTS Checkpoint process Data replication ndash Snapshots

Oracle 6.2

Oracle Parallel Server.

Oracle 6 – July 1988

Row-level locking. On-line database backups. PL/SQL in the database.

Oracle 5.1

Distributed queries.

Oracle 5.0 – 1986

Support for the client-server model – PCs can access the DB on a remote host.

Oracle 4 – 1984

Read consistency.

Oracle 3 – 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions). Non-blocking queries (no more read locks). Re-written in the C programming language.

Oracle 2 – 1979

First public release. Basic SQL functionality: queries and joins.

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Referesh

Filed under: Schema refresh by Deepak – 1 Comment, December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh – before exporting

Spool the output of the roles and privileges assigned to the user; use the queries below to view the roles and privileges, and spool the output as a SQL file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege ||' to sh' from session_privs;
5. select 'grant ' || role ||' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH' group by tablespace_name;

Export the 'SH' schema:

exp username/password file='location/sh_bkp.dmp' log='location/sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',ESTIMATE_PERCENT=>20);

Now connect the SH user and check for the import data

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema:

Take a full schema export as shown above.

Drop all the objects in the 'SH' schema.

To drop all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the script; all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',ESTIMATE_PERCENT=>20);

Now connect the SH user and check for the import data

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in lsquoSHrsquo schema

To truncate the all the objects in the Schema

Connect the schema

Spool the output

SQL> set head off

SQL> spool truncate_tables.sql

SQL> select 'truncate table '||table_name from user_tables;

SQL> spool off

SQL> set head off

SQL> spool truncate_other_objects.sql

SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;

SQL> spool off

Now run the scripts; all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the referencing (foreign key) constraints and disable them. Spool the output of the query and run the script (a sketch of generating the disable statements follows the query).

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS

where constraint_type='R'

and r_constraint_name in (select constraint_name from all_constraints

where table_name='TABLE_NAME');
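A minimal sketch of turning that query into executable statements (the spool file name disable_fk.sql is arbitrary):

SQL> set head off feedback off
SQL> spool disable_fk.sql
SQL> select 'alter table '||table_name||' disable constraint '||constraint_name||';'
     from all_constraints
     where constraint_type='R'
     and r_constraint_name in (select constraint_name from all_constraints where table_name='TABLE_NAME');
SQL> spool off

Run disable_fk.sql, truncate, and then re-enable the constraints with the ALTER TABLE ... ENABLE CONSTRAINT script shown earlier.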

Importing the 'SH' schema

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> drop user SH cascade;

Importing the SH schema through Data Pump

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the REMAP_SCHEMA option (see the example below).
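A hedged example of such a cross-schema import (SH_DEV is a hypothetical target schema; it is created automatically if the dump contains the CREATE USER metadata, otherwise create it first):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=SH:SH_DEV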

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING by Deepak — Leave a comment, December 15, 2009

CRON JOB SCHEDULING – IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d/ /etc/cron.daily/ /etc/cron.hourly/ /etc/cron.monthly/ /etc/cron.weekly/

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner which people use cron is via the crontab command This allows you to view or edit your crontab file which is a per-user file containing entries describing commands to execute and the time(s) to execute them

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l    # List skx's crontab file

The format of these files is fairly simple to understand Each line is a collection of six fields separated by spaces

The fields are

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use the name)
6. The command to run

More graphically they would look like this

*     *     *     *     *     command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0 - 7)
|     |     |     +----------- Month (1 - 12)
|     |     +----------------- Day of month (1 - 31)
|     +----------------------- Hour (0 - 23)
+----------------------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary) When you save the file and quit your editor it will be installed into the system unless it is found to contain errors

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected, as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null – meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour, on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched; if you use the following line you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup – scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp_registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run. So edit the scheduled task and enter the new password.
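On Windows XP/2003 and later the same schedule can also be created from the command line with schtasks; a minimal sketch (the task name, run time, script path and credentials below are placeholders, not values from this article):

schtasks /create /tn "DailyColdBackup" /tr "D:\scripts\cold_bkp.bat" /sc daily /st 23:00 /ru Administrator /rp password

If the OS password changes, the stored credentials can be refreshed with: schtasks /change /tn "DailyColdBackup" /rp newpassword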


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak — 1 Comment, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your test systems before working on Production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary database and the standby database to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.
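It is also worth confirming that the primary is ready to change roles before you start; v$database exposes this directly in 10g:

SQL> select switchover_status from v$database;

A value of TO STANDBY (or SESSIONS ACTIVE) on the primary indicates the switchover can proceed.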

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect sys@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby DB STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect sys@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
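To verify that redo from the new primary is reaching and being applied on the new standby (PRIM), the same archived-log query used earlier can be run there:

SQL> select sequence#, applied from v$archived_log order by sequence#;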


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak — Leave a comment, December 14, 2009

Encryption with Oracle Data Pump

– from an Oracle white paper

Introduction

The security and compliance requirements in todayrsquos business world present manifold challenges As incidences of data theft increase protecting data privacy continues to be of paramount importance Now a de facto solution in meeting regulatory compliances data encryption is one of a number of security tools in use The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works Please note that this paper does not apply to the Original ExportImport utilities For information regarding the Oracle Data Pump Encrypted Dump File feature that that was released with Oracle Database 11g release 1 and that provides the ability to encrypt all exported data as it is written to the export dump file set refer to the Oracle Data Pump Encrypted Dump File Support whitepaper

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT key word Once a table column is marked with this keyword encryption and decryption are performed automatically without the need for any further user or application intervention The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column When an authorized user inserts new data into such a column TDE column encryption encrypts this data prior to storing it in the database Conversely when the user selects the column from the database TDE column encryption transparently decrypts this data back to its original clear text

format. Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature then remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED

BY clause is optional, and TDE uses it to derive the table's column encryption key. If the

IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");
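A quick way to see the transparency at work (the sample row is made up for illustration; an authorized session sees clear text while the stored data is encrypted):

SQL> INSERT INTO DP.EMP VALUES (100, 'Scott', 5000);
SQL> SELECT empid, empname, salary FROM DP.EMP;  -- salary is returned in clear text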

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 – Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports as used in the previous

examples If a schema tablespace or full mode export is performed then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter

There is however one exception transportable tablespace export mode does not support

encrypted columns An attempt to perform an export using this mode when the tablespace

contains tables with encrypted columns yields the following error

$ expdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp TABLES=emp

Export Release 102040 ndash Production on Wednesday 09 July 2009

84843

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime dp directory=dpump_dir

dumpfile=emp tables=emp

Estimate in progress using BLOCKS methodhellip

Processing object type TABLE_EXPORTTABLETABLE_DATA

Total estimation using BLOCKS method 16 KB

Processing object type TABLE_EXPORTTABLETABLE

exported ldquoDPrdquordquoEMPrdquo 625 KB 3 rows

ORA-39173 Encrypted data has been stored unencrypted in dump file

set

Master table ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime successfully loadedunloaded

Dump file set for DPSYS_EXPORT_TABLE_01 is

adejkaloger_lx9oracleworkempdmp

Job ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime completed with 1 error(s) at 084857

$ expdp systempassword DIRECTORY=dpump_dir DUMPFILE=dpdmp

TRANSPORT_TABLESPACES=dp

Export Release 102040 ndash Production on Thursday 09 July 2009

85507

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime system

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123 Data Pump transportable tablespace job aborted

ORA-29341 The transportable set is not self-contained

Job ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime stopped due to fatal error

at 085525

The ORA-29341 error in the previous example is not very informative If the same transportable

tablespace export is executed using Oracle Database 11g release 1 that version does a better job

at pinpointing the problem via the information in the ORA-39929 error

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables then an ORA-26033 exception is raised when you try to import the export dump file set In the example of the DPEMP table the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed For example assume in the following example that the DPEMP table on the target system has been created exactly as it is on the source system except that the

ENCRYPT attribute has not been assigned to the SALARY column The output and resulting error messages would look as follows

$ impdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp systempassword DIRECTORY=dpump_dir dumpfile=dpdmp

TRANSPORT_TABLESPACES=dp

Export Release 111070 ndash Production on Thursday 09 July 2009

90900

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 11g Enterprise Edition Release

111070 ndash Production

With the Partitioning Data Mining and Real Application Testing

Options Starting ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime system

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123 Data Pump transportable tablespace job aborted

ORA-39187 The transportable set is not self-contained violation list

is ORA-39929 Table DPEMP in tablespace DP has encrypted columns which

are not supported

Job ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime stopped due to fatal error

at 090921

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it

into the connected database instance There are no export dump files involved in a network

mode import and therefore there is no re-encrypting of TDE column data Thus the use of the

ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the

following example

$ impdp dpdp TABLES=dpemp DIRECTORY=dpump_dir NETWORK_LINK=remote

TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import Release 102040 ndash Production on Friday 09 July 2009

110057

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39005 inconsistent arguments

ORA-39115 ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import Release 102040 ndash Production on Thursday 09 July 2009

105540

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release 102040 -

Production

With the Partitioning Data Mining and Real Application Testing options

Master table ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime successfully loadedunloaded

Starting ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime dp directory=dpump_dir

dumpfile=empdmp tables=emp encryption_password=

table_exists_action=append

Processing object type TABLE_EXPORTTABLETABLE

ORA-39152 Table ldquoDPrdquordquoEMPrdquo exists Data will be appended to existing

table but all dependent metadata will be skipped due to

table_exists_action of append

Processing object type TABLE_EXPORTTABLETABLE_DATA

ORA-31693 Table data object ldquoDPrdquordquoEMPrdquo failed to loadunload and is being

skipped due to error

ORA-02354 error in exportingimporting data

ORA-26033 column ldquoEMPrdquoSALARY encryption properties differ for source or

target table

Job ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime completed with 2 error(s) at 105548

Oracle White PaperEncryption with Oracle Data Pump

By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes

encrypted column data the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed The following are cases in which the encryption password and Oracle Wallet are not needed

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
  empid,
  empname,
  salary ENCRYPT IDENTIFIED BY "column_pwd")
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dpump_dir
  LOCATION ('xemp.dmp')
)
REJECT LIMIT UNLIMITED
AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1 The SQL engine selects the data for the table DPEMP from the database If any columns in the table are marked as encrypted as the salary column is for DPEMP then TDE decrypts the column data as part of the select operation

2 The SQL engine then inserts the data which is in clear text format into the DPXEMP table If any columns in the external table are marked as encrypted as one of its columns is then TDE encrypts this column data as part of the insert operation

3 Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file The data in an external table can be written only once when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed However the data in the external table can be selected any number of times using a simple SQL SELECT statement The steps involved in selecting data with encrypted columns from an external table are as follows

1 The SQL engine initiates a select operation Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is called to read the data from the external export file

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation:

SQL> SELECT * FROM DP.XEMP;

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported then the parameter is ignored).

Beginning in Oracle Database 11g release 1 the ability to encrypt the entire export dump file set is introduced and with it several new encrypted-related parameters A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump but instead means that the entire export dump file set will be encrypted To encrypt only TDE columns using Oracle Data Pump 11g it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY So the 10g example previously shown becomes the following in 11g

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak — Leave a comment, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides regular operating system command-line mode there is now a very powerful interactive Command-line mode which allows the user to monitor and control Data Pump Export and Import operations Changing from Original ExportImport to Oracle Data Pump Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA

In the following example, the SQL statement creates a directory object named

dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created you need to grant READ and WRITE permission on the directory to other users For example to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1 you must execute the following command

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

1. > expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to

Data Pump

Data Pump commands have a similar look and feel to the original Export and Import

commands but are different Below are a few examples that demonstrate some of these

differences

1) Example import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=()

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS

INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX

DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job You cannot mix the two parameters in one job

Both parameters work with Data Pump Import as well and you can use different INCLUDE and

EXCLUDE options for different operations on the same dump file
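A couple of hedged illustrations of the two filters (the object names are only examples, not from this article):

> expdp username/password SCHEMAS=scott DIRECTORY=dpump_dir1 DUMPFILE=scott_tabs.dmp INCLUDE=TABLE:\"IN ('EMP','DEPT')\"

> expdp username/password SCHEMAS=scott DIRECTORY=dpump_dir1 DUMPFILE=scott_notrg.dmp EXCLUDE=TRIGGER

Remember that INCLUDE and EXCLUDE cannot appear in the same job.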

3) Tuning Parameters

Unlike original Export and Import which used the BUFFER COMMIT COMPRESS

CONSISTENT DIRECT and RECORDLENGTH parameters Data Pump needs no tuning to achieve maximum performance Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner Initialization parameters should be sufficient upon installation

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with the older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1

DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr

DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different

tablespace from the one it was originally exported from. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6

DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source

datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (see the sketch after this list):

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
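A minimal sketch of the interactive mode (the job name hr comes from the earlier PARALLEL example; STATUS, PARALLEL, STOP_JOB, START_JOB and CONTINUE_CLIENT are the standard interactive commands):

> expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE
> expdp username/password ATTACH=hr
Export> START_JOB
Export> CONTINUE_CLIENT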

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database. A short example follows.
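A hedged example of a network-mode import (remote_db is a hypothetical database link on the target pointing at the source instance):

> impdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=remote_db SCHEMAS=scott REMAP_SCHEMA=scott:jim

No dump file is written; rows are extracted over the link and inserted directly into the connected database.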

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp

SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak — Leave a comment, December 10, 2009

Clone database using RMAN

Target db test

Clone db clone

In target database

1. Take a full backup using RMAN.

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination            C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 – Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMAN> connect target

connected to target database TEST (DBID=1972233550)

RMAN> show all;

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
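A minimal sketch of what the relevant initclone.ora entries could look like for this walkthrough (the drive letters and directory names below are assumptions, not taken from the article):

db_name=clone
control_files='E:\oracle\oradata\clone\control01.ctl'
db_file_name_convert='E:\oracle\oradata\test','E:\oracle\oradata\clone'
log_file_name_convert='E:\oracle\oradata\test','E:\oracle\oradata\clone'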

3. Start the listener using the lsnrctl command, and then start up the clone DB in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect RMAN.

5. RMAN> connect target sys/sys@test   (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'   (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running…

SQL> select name from v$database;

select name from v$database

ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;

alter database mount

ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; when it finishes, exit from RMAN and open the database using RESETLOGS:

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace (see the sketch below).
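A hedged sketch of step 10 (the tablespace name, tempfile path and size are illustrative only):

SQL> create temporary tablespace temp1 tempfile 'E:\oracle\oradata\clone\temp01.dbf' size 500M autoextend on;
SQL> alter database default temporary tablespace temp1;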

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


step by step standby database configuration in 10g

Filed under: Dataguard – creation of standby database in 10g by Deepak — Leave a comment, December 9, 2009

Oracle 10g – Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips for the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and the STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server, and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first, before working on the Production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups.
My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY.
If your PRIMARY database is not already in archive log mode, enable archive log mode:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters.
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database.
- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '<ORACLE_HOME>'.)

- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '<ORACLE_HOME>'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile.
Data Guard must use an SPFILE. Create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace '<ORACLE_HOME>'.)

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace '<ORACLE_HOME>'.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, to the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: not all the parameter entries are listed here.)

4. On the STANDBY server, create all the required directories for dumps and the archived log destination.
Create the directories adump, bdump, cdump, udump and the archived log destination for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora.
On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID

11 Start up nomount the STANDBY database and generate a spfile- On Windows SQLgtstartup nomount pfile=rsquodatabasepfileSTANDBYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfileSTANDBYorarsquondash Restart the STANDBY database using the newly created SPFILESQLgtshutdown immediateSQLgtstartup mount

- On UNIX SQLgtstartup nomount pfile=rsquodbspfileSTANDBYorarsquoSQLgtcreate spfile from pfile=rsquodbspfileSTANDBYorarsquondash Restart the STANDBY database using the newly created SPFILESQLgtshutdown immediateSQLgtstartup mount(Note- specify your Oracle home path to replace lsquorsquo)

12 Start Redo apply1) On the STANDBY database to start redo applySQLgtalter database recover managed STANDBY database disconnect from session

If you ever need to stop log apply servicesSQLgt alter database recover managed STANDBY database cancel

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL>select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL>alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL>select sequence#, applied from v$archived_log order by sequence#;

14 If you want the redo data to be applied as it is received without waiting for the current STANDBY redo log file to be archived enable the real-time apply

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database I run RMAN to back up and delete the archive logs once per week:
$rman target STANDBY
RMAN>backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN>delete backupset
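If you prefer to keep a window of archived logs on the STANDBY rather than deleting everything that has been backed up, a variation such as the following can be used (a sketch; the 7-day window is an assumption, size it to your own recovery needs):

RMAN>delete noprompt archivelog all completed before 'sysdate-7';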

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly, otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.
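A minimal sketch of recreating the STANDBY password file with orapwd after a SYS password change (the file name follows the pwdSTANDBY.ora convention from step 6 and goes in the database folder on Windows or the dbs directory on UNIX; the password value is a placeholder):

$orapwd file=pwdSTANDBY.ora password=<new_sys_password> entries=5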


Service "TEST" has 1 instance(s).

Instance "TEST", status UNKNOWN, has 1 handler(s) for this service...

The command completed successfully

LSNRCTLgt stop

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

LSNRCTLgt start

Starting tnslsnr please waithellip

TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production

System parameter file is C:\oracle\ora92\network\admin\listener.ora

Log messages written to C:\oracle\ora92\network\log\listener.log

Listening on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

STATUS of the LISTENER

mdashmdashmdashmdashmdashmdashmdashmdash

Alias LISTENER

Version TNSLSNR for 32-bit Windows Version 92010 ndash Production

Start Date 22-AUG-2009 220048

Uptime 0 days 0 hr 0 min 0 sec

Trace Level off

Security OFF

SNMP OFF

Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora

Listener Log File         C:\oracle\ora92\network\log\listener.log

Listening Endpoints Summary...

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))

Services Summary...

Service "TEST" has 1 instance(s).

Instance "TEST", status UNKNOWN, has 1 handler(s) for this service...

The command completed successfully

LSNRCTLgt exit

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt exit

Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production

With the Partitioning, OLAP and Oracle Data Mining options

JServer Release 9.2.0.1.0 - Production

C:\Documents and Settings\Administrator>lsnrctl stop

LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 22-AUG-2009 22:03:14

Copyright (c) 1991, 2002, Oracle Corporation.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

C:\Documents and Settings\Administrator>oradim -delete -sid test

Step 3

Install ORACLE 10g Software in different Home

Starting the DB with 10g instance and upgradation Process

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649';

File created

SQLgt shut immediate

ORA-01507 database not mounted

ORACLE instance shut down

SQLgt startup upgrade

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

ORA-01990 error opening password file (create password file)

SQLgt conn as sysdba

Connected

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace script, as shown below.)

create tablespace SYSAUX datafile 'sysaux01.dbf'

size 70M reuse

extent management local

segment space management auto

online

Tablespace created

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOCgt

DOCgt

DOC>    The following statement will cause an "ORA-01722: invalid number"

DOCgt error if the database server version is not correct for this script

DOCgt Shutdown ABORT and use a different script or a different server

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOC>    The following statement will cause an "ORA-01722: invalid number"

DOCgt error if the database has not been opened for UPGRADE

DOCgt

DOC>    Perform a "SHUTDOWN ABORT" and

DOCgt restart using UPGRADE

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOC>    The following statements will cause an "ORA-01722: invalid number"

DOCgt error if the SYSAUX tablespace does not exist or is not

DOCgt ONLINE for READ WRITE PERMANENT EXTENT MANAGEMENT LOCAL and

DOCgt SEGMENT SPACE MANAGEMENT AUTO

DOCgt

DOC>    The SYSAUX tablespace is used in 10.1 to consolidate data from

DOCgt a number of tablespaces that were separate in prior releases

DOCgt Consult the Oracle Database Upgrade Guide for sizing estimates

DOCgt

DOCgt Create the SYSAUX tablespace for example

DOCgt

DOC>      create tablespace SYSAUX datafile 'sysaux01.dbf'

DOCgt size 70M reuse

DOCgt extent management local

DOCgt segment space management auto

DOCgt online

DOCgt

DOC>    Then rerun the u0902000.sql script.

DOCgt

DOCgt

DOCgt

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered

Session altered

The script will run for a length of time that depends on the size of the database...

All packages/scripts/synonyms will be upgraded.

At the end it will show a message as follows:

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

1 row selected

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

PLSQL procedure successfully completed

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP DBUPG_END 2009-08-22 225909

1 row selected

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)

----------

776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PLSQL procedure successfully completed

Oracle Database 101 Upgrade Status Tool 22-AUG-2009 111836

ndashgt Oracle Database Catalog Views Normal successful completion

ndashgt Oracle Database Packages and Types Normal successful completion

ndashgt JServer JAVA Virtual Machine Normal successful completion

ndashgt Oracle XDK Normal successful completion

ndashgt Oracle Database Java Packages Normal successful completion

ndashgt Oracle XML Database Normal successful completion

ndashgt Oracle Workspace Manager Normal successful completion

ndashgt Oracle Data Mining Normal successful completion

ndashgt OLAP Analytic Workspace Normal successful completion

ndashgt OLAP Catalog Normal successful completion

ndashgt Oracle OLAP API Normal successful completion

ndashgt Oracle interMedia Normal successful completion

ndashgt Spatial Normal successful completion

ndashgt Oracle Text Normal successful completion

ndashgt Oracle Ultra Search Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_BGN 2009-08-22 231907

1 row selected

PLSQL procedure successfully completed

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_END 2009-08-22 232013

1 row selected

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)

----------

0

1 row selected

SQL> select * from v$version;

BANNER

----------------------------------------------------------------

Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod

PL/SQL Release 10.1.0.2.0 - Production

CORE 10.1.0.2.0 Production

TNS for 32-bit Windows: Version 10.1.0.2.0 - Production

NLSRTL Version 10.1.0.2.0 - Production

5 rows selected

Check the database to make sure that everything is working fine; a few quick sanity checks are sketched below.
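A minimal sketch of such checks (dba_registry and dba_objects are standard dictionary views; adjust to taste):

SQL> select comp_id, status, version from dba_registry;
SQL> select count(*) from dba_objects where status='INVALID';
SQL> select name, open_mode from v$database;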


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host, by Deepak - 3 Comments - February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink ID 732624.1

hi

Just wanted to share this topic

How to duplicate a database without connecting to the target database, using backups taken with RMAN, on an alternate host.

Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as on production>

Create the init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) Startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN>restore controlfile from '<backuppiece of controlfile which you took on production>';

The controlfile should now be restored.

4) Issue "alter database mount". Make sure the backuppieces are in the same location as they were on the production db. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN>catalog backuppiece '<piece name and path>';
If there are more backuppieces, they can be cataloged using the command:
RMAN>catalog start with '<path where backuppieces are stored>';

5) After cataloging the backuppieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:
run {
set newname for datafile 1 to '<newLocation>/system.dbf';
set newname for datafile 2 to '<newLocation>/undotbs.dbf';
...
restore database;
switch datafile all;
}


Features introduced in the various Oracle server releases

Filed under: Features of various releases of Oracle Database, by Deepak - Leave a comment - February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) - September 2005

Transparent Data Encryption. Async commits. The CONNECT role can now only connect. Passwords for DB links are encrypted. New asmcmd utility for managing ASM storage.

Oracle 10g Release 1 (10.1.0)

Grid computing: an extension of the clustering feature (Real Application Clusters). Manageability improvements (self-tuning features).

Performance and scalability improvements. Automated Storage Management (ASM). Automatic Workload Repository (AWR). Automatic Database Diagnostic Monitor (ADDM). Flashback operations available at the row, transaction, table or database level. Ability to UNDROP a table from a recycle bin. Ability to rename tablespaces. Ability to transport tablespaces across machine types (e.g. Windows to Unix). New 'drop database' statement. New database scheduler: DBMS_SCHEDULER. DBMS_FILE_TRANSFER package. Support for bigfile tablespaces of up to 8 Exabytes in size. Data Pump: faster data movement with expdp and impdp.

Oracle 9i Release 2 (9.2.0)

Locally Managed SYSTEM tablespaces. Oracle Streams: a new data sharing/replication feature (can potentially replace Oracle Advanced Replication and Standby Databases). XML DB (Oracle is now a standards-compliant XML database). Data segment compression (compress keys in tables, only when loading data). Cluster file system for Windows and Linux (raw devices are no longer required). Create logical standby databases with Data Guard. Java JDK 1.3 used inside the database (JVM). Oracle Data Guard enhancements (SQL Apply mode: logical copy of primary database, automatic failover). Security improvements: default install accounts locked, VPD on synonyms, AES, migrate users to directory.

Oracle 9i Release 1 (9.0.1) - June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "rollback segments" and size them automatically without any DBA involvement.

Flashback query (dbms_flashback.enable): one can query data as it looked at some point in the past. This feature allows users to correct wrongly committed transactions without contacting the DBA to do a database restore.

Use Oracle Ultra Search for searching databases file systems etc The UltraSearch crawler fetch data and hand it to Oracle Text to be indexed

Oracle Nameserver is still available but deprecate in favour of LDAP Naming (using the Oracle Internet Directory Server) A nameserver proxy is provided for backwards compatibility as pre-8i client cannot resolve names from an LDAP server

Oracle Parallel Server's (OPS) scalability was improved; it is now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster; applications don't need to be cluster aware anymore.

The Oracle Standby DB feature renamed to Oracle Data Guard New Logical Standby databases replay SQL on standby site allowing the database to be used for normal read write operations The Data Guard Broker allows single step fail-over when disaster strikes

Scrolling cursor support: Oracle9i allows fetching backwards in a result set. Dynamic Memory Management: buffer pools and the shared pool can be resized on-the-fly.

This eliminates the need to restart the database each time parameter changes were made On-line table and index reorganization VI (Virtual Interface) protocol support an alternative to TCPIP available for use with

Oracle Net (SQLNet) VI provides fast communications between components in a cluster

Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.

Cost Based Optimizer now also consider memory and CPU not only disk access cost as before

PL/SQL programs can be natively compiled to binaries. Deep data protection: fine-grained security and auditing; putting security at the DB level means SQL access does not mean unrestricted access. Resumable backups and statements: suspend the statement instead of rolling back immediately. List Partitioning: partitioning on a list of values. ETL (eXtract, transformation, load) operations with external tables and pipelining. OLAP: Express functionality included in the DB. Data Mining: Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)

Static HTTP server included (Apache). JVM Accelerator to improve performance of Java code. Java Server Pages (JSP) engine. MemStat: a new utility for analyzing Java memory footprints. OIS: Oracle Integration Server introduced. PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web. Enterprise Manager enhancements, including new HTML-based reporting, and

Advanced Replication functionality included New Database Character Set Migration utility included

Oracle 8i (8.1.6)

PL/SQL Server Pages (PSPs). DBA Studio introduced. Statspack. New SQL functions (rank, moving average). ALTER FREELISTS command (previously done by DROP/CREATE TABLE). Checksums always on for the SYSTEM tablespace, allowing many possible corruptions to be

fixed before writing to disk

XML Parser for Java New PLSQL encryptdecrypt package introduced User and Schemas separated Numerous Performance Enhancements

Oracle 8i (8.1.5)

Fast Start recovery ndash Checkpoint rate auto-adjusted to meet roll forward criteria Reorganize indexesindex only tables which users accessing data ndash Online index rebuilds Log Miner introduced ndash Allows on-line or archived redo logs to be viewed via SQL OPS Cache Fusion introduced avoiding disk IO during cross-node communication Advanced Queueing improvements (security performance OO4O support User Security Improvements ndash more centralisation single enterprise user usersroles

across multiple databases Virtual private database JAVA stored procedures (Oracle Java VM) Oracle iFS Resource Management using priorities ndash resource classes Hash and Composite partitioned table types SQLLoader direct load API Copy optimizer statistics across databases to ensure same access paths across different

environments Standby Database ndash Auto shipping and application of redo logs Read Only queries on

standby database allowed Enterprise Manager v2 delivered NLS ndash Euro Symbol supported Analyze tables in parallel Temporary tables supported Net8 support for SSL HTTP HOP protocols Transportable tablespaces between databases Locally managed tablespaces ndash automatic sizing of extents elimination of tablespace

fragmentation tablespace information managed in tablespace (ie moved from data dictionary) improving tablespace reliability

Drop Column on table (Finally ) DBMS_DEBUG PLSQL package DBMS_SQL replaced by new EXECUTE

IMMEDIATE statement Progress Monitor to track long running DML DDL Functional Indexes ndash NLS case insensitive descending

Oracle 8.0 - June 1997

Object relational database. Object Types (not just date, character, number as in v7; SQL3 standard). Call external procedures. LOBs: more than one per table.

Partitioned Tables and Indexes exportimport individual partitions partitions in multiple tablespaces Onlineoffline backuprecover individual partitions mergebalance partitions Advanced Queuing for message handling Many performance improvements to SQLPLSQLOCI making more efficient use of

CPU/memory. V7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2). Parallel DML statements. Connection pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users. Improved "STAR" query optimizer. Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM

in v7) Performance improvements in OPS ndash global V$ views introduced across all instances

transparent failover to a new node Data Cartridges introduced on database (eg image video context time spatial) BackupRecovery improvements ndash Tablespace point in time recovery incremental

backups parallel backuprecovery Recovery manager introduced Security Server introduced for central user administration User password expiry

password profiles allow custom password scheme Privileged database links (no need for password to be stored)

Fast Refresh for complex snapshots parallel replication PLSQL replication code moved in to Oracle kernel Replication manager introduced

Index Organized tables Deferred integrity constraint checking (deferred until end of transaction instead of end of

statement) SQLNet replaced by Net8 Reverse Key indexes Any VIEW updateable New ROWID format

Oracle 7.3

Partitioned Views Bitmapped Indexes Asynchronous read ahead for table scans Standby Database Deferred transaction recovery on instance startup Updatable Join Views (with restrictions) SQLDBA no longer shipped Index rebuilds db_verify introduced Context Option Spatial Data Option Tablespaces changes ndash Coalesce Temporary Permanent

Trigger compilation debug Unlimited extents on STORAGE clause Some initora parameters modifiable ndash TIMED_STATISTICS HASH Joins Antijoins Histograms Dependencies Oracle Trace Advanced Replication Object Groups PLSQL ndash UTL_FILE

Oracle 7.2

Resizable autoextend data files Shrink Rollback Segments manually Create table index UNRECOVERABLE Subquery in FROM clause PLSQL wrapper PLSQL Cursor variables Checksums ndash DB_BLOCK_CHECKSUM LOG_BLOCK_CHECKSUM Parallel create table Job Queues ndash DBMS_JOB DBMS_SPACE DBMS Application Info Sorting Improvements ndash SORT_DIRECT_WRITES

Oracle 7.1

ANSIISO SQL92 Entry Level Advanced Replication ndash Symmetric Data replication Snapshot Refresh Groups Parallel Recovery Dynamic SQL ndash DBMS_SQL Parallel Query Options ndash query index creation data loading Server Manager introduced Read Only tablespaces

Oracle 7.0 - June 1992

Database Integrity Constraints (primary foreign keys check constraints default values) Stored procedures and functions procedure packages Database Triggers View compilation User defined SQL functions Role based security Multiple Redo members ndash mirrored online redo log files Resource Limits ndash Profiles

Much enhanced Auditing Enhanced Distributed database functionality ndash INSERTS UPDATESDELETES 2PC Incomplete database recovery (eg SCN) Cost based optimiser TRUNCATE tables Datatype changes (ie VARCHAR2 CHAR VARCHAR) SQLNet v2 MTS Checkpoint process Data replication ndash Snapshots

Oracle 6.2

Oracle Parallel Server

Oracle 6 ndash July 1988

Row-level locking On-line database backups PLSQL in the database

Oracle 5.1

Distributed queries

Oracle 5.0 - 1986

Support for the client-server model: PCs can access the DB on a remote host.

Oracle 4 ndash 1984

Read consistency

Oracle 3 ndash 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)

Nonblocking queries (no more read locks) Re-written in the C Programming Language

Oracle 2 ndash 1979

First public release Basic SQL functionality queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Referesh

Filed under: Schema refresh, by Deepak - 1 Comment - December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total no. of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege || ' to sh' from session_privs;
5. select 'grant ' || role || ' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH'

group by tablespace_name;

Export the 'SH' schema:

exp username/password file='<location>\sh_bkp.dmp' log='<location>\sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate a quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file='<location>\sh_bkp.dmp' log='<location>\sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',ESTIMATE_PERCENT=>20)

Now connect the SH user and check for the import data

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema.

Take the full schema export as shown above.

Drop all the objects in the 'SH' schema.

To drop all the objects in the schema:

Connect the schema

Spool the output

SQL>set head off

SQL>spool drop_tables.sql

SQL>select 'drop table '||table_name||' cascade constraints purge;' from user_tables;

SQL>spool off

SQL>set head off

SQL>spool drop_other_objects.sql

SQL>select 'drop '||object_type||' '||object_name||';' from user_objects;

SQL>spool off

Now run the scripts; all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file='<location>\sh_bkp.dmp' log='<location>\sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',ESTIMATE_PERCENT=>20)

Now connect the SH user and check for the import data

To enable the constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS

WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema.

To truncate all the objects in the schema:

Connect the schema

Spool the output

SQL>set head off

SQL>spool truncate_tables.sql

SQL>select 'truncate table '||table_name from user_tables;

SQL>spool off

SQL>set head off

SQL>spool truncate_other_objects.sql

SQL>select 'truncate '||object_type||' '||object_name||';' from user_objects;

SQL>spool off

Now run the scripts; all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the referencing (foreign key) constraints and disable them. Spool the output of the query and run the script.

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS

where constraint_type='R'

and r_constraint_name in (select constraint_name from all_constraints

where table_name='TABLE_NAME');
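To turn that list into executable statements, the same query can be wrapped so that it spools ALTER TABLE ... DISABLE CONSTRAINT commands (a sketch that follows the spool pattern used above; TABLE_NAME is the same placeholder as in the query):

SQL>set head off
SQL>spool disable_ref_constraints.sql
SQL>select 'alter table '||table_name||' disable constraint '||constraint_name||';' from all_constraints where constraint_type='R' and r_constraint_name in (select constraint_name from all_constraints where table_name='TABLE_NAME');
SQL>spool off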

Importing the 'SH' schema:

imp username/password file='<location>\sh_bkp.dmp' log='<location>\sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',ESTIMATE_PERCENT=>20)

Now connect the SH user and check for the import data

Schema refresh in oracle 10g

Here we can use Datapump

Exporting the SH schema through Datapump

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL>Drop user SH cascade;

Importing the SH schema through datapump

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option, as in the sketch below.
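For example, to load the SH dump into a different schema (SH_TEST here is just a hypothetical target schema name):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=SH:SH_TEST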

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING, by Deepak - Leave a comment - December 15, 2009

CRON JOB SCHEDULING - IN UNIX

To run system jobs on a dailyweeklymonthly basis To allow users to setup their own schedules

The system schedules are setup when the package is installed via the creation of some special directories

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one which is special these directories allow scheduling of system-wide jobs in a coarse manner Any script which is executable and placed inside them will run at the frequency which its name suggests

For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner which people use cron is via the crontab command This allows you to view or edit your crontab file which is a per-user file containing entries describing commands to execute and the time(s) to execute them

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand Each line is a collection of six fields separated by spaces

The fields are

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this

          Command to be executed
- - - - -
| | | | |
| | | | +----- Day of week (0-7)
| | | +------- Month (1 - 12)
| | +--------- Day of month (1 - 31)
| +----------- Hour (0 - 23)
+------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary) When you save the file and quit your editor it will be installed into the system unless it is found to contain errors

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following

0 * * * * /bin/ls

When yoursquove saved the file and quit your editor you will see a message such as

crontab installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email. If you wish to stop this, then you should cause it to be redirected, as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples.

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly
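Tying this back to the database topics above, a nightly export could be scheduled in the same way (a sketch; /home/oracle/scripts/export_sh.sh is a hypothetical wrapper script that sets the Oracle environment and calls exp or expdp):

# Run a nightly schema export at 01:30, discarding the output
30 1 * * * /home/oracle/scripts/export_sh.sh >/dev/null 2>&1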

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.
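On newer Windows versions the same job can also be created from the command line instead of the wizard (a sketch; the task name, script location, time and account are all assumptions):

schtasks /create /tn "cold_bkp" /tr "D:\scripts\cold_bkp.bat" /sc daily /st 02:00 /ru Administrator /rp <os_password>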

Note

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule them, the job won't run, so edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g, by Deepak - 1 Comment - December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I Before Switchover

1 As I always recommend test the Switchover first on your testing systems before working on Production

2 Verify the primary database instance is open and the standby database instance is mounted

3 Verify there are no active users connected to the databases

4. Make sure the last redo data transmitted from the PRIMARY database was applied on the STANDBY database. Issue the following command on the PRIMARY and STANDBY databases to find out:
SQL>select sequence#, applied from v$archived_log;
Perform SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply, as shown below.
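The command for real-time apply is the same one used when the standby was originally configured:

SQL> alter database recover managed standby database using current logfile disconnect;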

II Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL>connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL>connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL>SHUTDOWN IMMEDIATE
SQL>STARTUP MOUNT

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL>SHUTDOWN IMMEDIATE
SQL>STARTUP

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL>ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL>ALTER SYSTEM SWITCH LOGFILE;


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump, by Deepak - Leave a comment - December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in todayrsquos business world present manifold challenges As incidences of data theft increase protecting data privacy continues to be of paramount importance Now a de facto solution in meeting regulatory compliances data encryption is one of a number of security tools in use The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works Please note that this paper does not apply to the Original ExportImport utilities For information regarding the Oracle Data Pump Encrypted Dump File feature that that was released with Oracle Database 11g release 1 and that provides the ability to encrypt all exported data as it is written to the export dump file set refer to the Oracle Data Pump Encrypted Dump File Support whitepaper

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT key word Once a table column is marked with this keyword encryption and decryption are performed automatically without the need for any further user or application intervention The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column When an authorized user inserts new data into such a column TDE column encryption encrypts this data prior to storing it in the database Conversely when the user selects the column from the database TDE column encryption transparently decrypts this data back to its original clear text

format Column data encrypted using TDE remains protected while it resides in the database However the protection offered by TDE does not extend beyond the database and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it using a dump file encryption key derived from a userprovided password before it is written to the export dump file set Column data encrypted using Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set Whenever Oracle Data Pump unloads or loads tables containing encrypted columns it uses the external tables mechanism instead of the direct path mechanism The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQLgt ALTER SYSTEM SET WALLET CLOSE

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd

Export Release 102040 ndash Production on Monday 09 July 2009

82123

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39001 invalid argument value

ORA-39180 unable to encrypt ENCRYPTION_PASSWORD

ORA-28365 wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports as used in the previous

examples If a schema tablespace or full mode export is performed then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter

There is however one exception transportable tablespace export mode does not support

encrypted columns An attempt to perform an export using this mode when the tablespace

contains tables with encrypted columns yields the following error

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export Release 102040 ndash Production on Wednesday 09 July 2009

84843

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime dp directory=dpump_dir

dumpfile=emp tables=emp

Estimate in progress using BLOCKS methodhellip

Processing object type TABLE_EXPORTTABLETABLE_DATA

Total estimation using BLOCKS method 16 KB

Processing object type TABLE_EXPORTTABLETABLE

exported ldquoDPrdquordquoEMPrdquo 625 KB 3 rows

ORA-39173 Encrypted data has been stored unencrypted in dump file

set

Master table ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime successfully loadedunloaded

Dump file set for DPSYS_EXPORT_TABLE_01 is

adejkaloger_lx9oracleworkempdmp

Job ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime completed with 1 error(s) at 084857

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp

TRANSPORT_TABLESPACES=dp

Export Release 102040 ndash Production on Thursday 09 July 2009

85507

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime system

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123 Data Pump transportable tablespace job aborted

ORA-29341 The transportable set is not self-contained

Job ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime stopped due to fatal error

at 085525

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir dumpfile=dp.dmp

TRANSPORT_TABLESPACES=dp

Export Release 111070 ndash Production on Thursday 09 July 2009

90900

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 11g Enterprise Edition Release

111070 ndash Production

With the Partitioning Data Mining and Real Application Testing

Options Starting ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime system

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123 Data Pump transportable tablespace job aborted

ORA-39187 The transportable set is not self-contained violation list

is ORA-39929 Table DPEMP in tablespace DP has encrypted columns which

are not supported

Job ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime stopped due to fatal error

at 090921

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote

TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import Release 102040 ndash Production on Friday 09 July 2009

110057

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39005 inconsistent arguments

ORA-39115 ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import Release 102040 ndash Production on Thursday 09 July 2009

105540

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release 102040 -

Production

With the Partitioning Data Mining and Real Application Testing options

Master table ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime successfully loadedunloaded

Starting ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime dp directory=dpump_dir

dumpfile=empdmp tables=emp encryption_password=

table_exists_action=append

Processing object type TABLE_EXPORTTABLETABLE

ORA-39152 Table ldquoDPrdquordquoEMPrdquo exists Data will be appended to existing

table but all dependent metadata will be skipped due to

table_exists_action of append

Processing object type TABLE_EXPORTTABLETABLE_DATA

ORA-31693 Table data object ldquoDPrdquordquoEMPrdquo failed to loadunload and is being

skipped due to error

ORA-02354 error in exportingimporting data

ORA-26033 column ldquoEMPrdquoSALARY encryption properties differ for source or

target table

Job ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime completed with 2 error(s) at 105548


By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

A full metadata-only import. A schema-mode import in which the referenced schemas do not include tables with encrypted columns. A table-mode import in which the referenced tables do not include encrypted columns.

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data.

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified. This is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (

empid,

empname,

salary ENCRYPT IDENTIFIED BY "column_pwd")

ORGANIZATION EXTERNAL

(

TYPE ORACLE_DATAPUMP

DEFAULT DIRECTORY dpump_dir

LOCATION ('xemp.dmp')

)

REJECT LIMIT UNLIMITED

AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, it applies only to TDE encrypted columns (if there are no such columns being exported, the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
      TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
      ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak — December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.
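To see which directory objects already exist in the database (including the default one), a quick check such as the following can be used:

SQL> SELECT directory_name, directory_path FROM dba_directories;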

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well and you can use different INCLUDE and

EXCLUDE options for different operations on the same dump file
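For instance, a sketch of an EXCLUDE-based export (the hr schema and hr_meta.dmp file name are only illustrative) that keeps everything in the schema except grants and optimizer statistics:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_meta.dmp SCHEMAS=hr EXCLUDE=GRANT EXCLUDE=STATISTICS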

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
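As a rough sketch of how that looks in practice (assuming the export job was started with JOB_NAME=hr, as in the example further below), you attach to the running job and change the degree of parallelism interactively:

> expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8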

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory and I/O.
• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.
• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.
• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from the one it was originally exported from. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.
• Add more dump files if there is insufficient disk space for an export file.
• Change the default size of the dump files.
• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available); see the sketch after this list.
• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.
• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).
• Attach to a job from a remote site (such as from home) to monitor status.
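A minimal sketch of that workflow (again assuming a job started with JOB_NAME=hr; the extra dump file name is only illustrative):

> expdp username/password ATTACH=hr
Export> ADD_FILE=dpump_dir1:hr_extra%U.dmp
Export> STOP_JOB=IMMEDIATE
(later, when resources are available again)
> expdp username/password ATTACH=hr
Export> START_JOB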

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
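A sketch of a network mode import over a database link (source_db is a hypothetical database link pointing at the remote instance):

> impdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=source_db TABLES=hr.employees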

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak — December 10, 2009

Clone database using RMAN

Target db: test
Clone db: clone

In the target database:

1. Take a full backup using RMAN

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager Release 92010 ndash Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for a database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'
log_file_name_convert='<target db oradata path>','<clone db oradata path>'

3. Start the listener using the lsnrctl command, then start up the clone db in nomount using the pfile.
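A minimal sketch of what such a clone pfile might contain (all paths and names here are hypothetical; adapt them to your own layout):

db_name=clone
control_files='C:\oracle\oradata\clone\control01.ctl'
db_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')
log_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')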

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'; (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running…

SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while. Then exit from RMAN and open the database using resetlogs:

SQL> alter database open resetlogs;

Database altered.

9. Check the dbid.

10. Create a temporary tablespace.
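A sketch of the temporary tablespace step (the tablespace name, file path and sizes are only illustrative):

SQL> CREATE TEMPORARY TABLESPACE temp01 TEMPFILE 'C:\oracle\oradata\clone\temp01.dbf' SIZE 100M AUTOEXTEND ON;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp01;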

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak — December 9, 2009

Oracle 10g – Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '')

- On UNIX:
SQL> create pfile='/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '')

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE. Create the SPFILE and restart the database:
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='\database\pfilePRIMARY.ora';
– Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup
(Note: specify your Oracle home path to replace '')

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='/dbs/pfilePRIMARY.ora';
– Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup
(Note: specify your Oracle home path to replace '')

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, to the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for dump and archived log destinations. Create the directories adump, bdump, cdump, udump and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
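For reference, a tnsnames.ora entry for the STANDBY service might look roughly like the following sketch (the host name standby_host is a placeholder):

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = STANDBY)
    )
  )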

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='\database\pfileSTANDBY.ora';
– Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='/dbs/pfileSTANDBY.ora';
– Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your Oracle home path to replace '')

12. Start Redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly, otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora
Listener Log File         C:\oracle\ora92\network\log\listener.log
Listening Endpoints Summary…
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521)))
Services Summary…
Service "TEST" has 1 instance(s).
Instance "TEST", status UNKNOWN, has 1 handler(s) for this service…

The command completed successfully

LSNRCTLgt exit

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt exit

Disconnected from Oracle9i Enterprise Edition Release 92010 ndash Production

With the Partitioning OLAP and Oracle Data Mining options

JServer Release 92010 ndash Production

CDocuments and SettingsAdministratorgtlsnrctl stop

LSNRCTL for 32-bit Windows Version 92010 ndash Production on 22-AUG-2009 220314

copyright (c) 1991 2002 Oracle Corporation All rights reserved

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee-6e78e526295)(PORT=1521)))

The command completed successfully

C:\Documents and Settings\Administrator> oradim -delete -sid test

Step 3

Install ORACLE 10g Software in different Home

Starting the DB with 10g instance and upgradation Process

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649';

File created

SQLgt shut immediate

ORA-01507 database not mounted

ORACLE instance shut down

SQLgt startup upgrade

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

ORA-01990: error opening password file (create password file)

SQL> conn / as sysdba

Connected.

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace creation script, as shown below)

create tablespace SYSAUX datafile 'sysaux01.dbf'

size 70M reuse

extent management local

segment space management auto

online

Tablespace created

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database server version is not correct for this script

DOCgt Shutdown ABORT and use a different script or a different server

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database has not been opened for UPGRADE

DOCgt

DOCgt Perform a ldquoSHUTDOWN ABORTrdquo and

DOCgt restart using UPGRADE

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statements will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the SYSAUX tablespace does not exist or is not

DOCgt ONLINE for READ WRITE PERMANENT EXTENT MANAGEMENT LOCAL and

DOCgt SEGMENT SPACE MANAGEMENT AUTO

DOCgt

DOCgt The SYSAUX tablespace is used in 101 to consolidate data from

DOCgt a number of tablespaces that were separate in prior releases

DOCgt Consult the Oracle Database Upgrade Guide for sizing estimates

DOCgt

DOCgt Create the SYSAUX tablespace for example

DOCgt

DOCgt create tablespace SYSAUX datafile lsquosysaux01dbfrsquo

DOCgt size 70M reuse

DOCgt extent management local

DOCgt segment space management auto

DOCgt online

DOCgt

DOCgt Then rerun the u0902000sql script

DOCgt

DOCgt

DOCgt

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered

Session altered

The script will run according to the size of the databasehellip

All packagesscriptssynonyms will be upgraded

At last it will show the message as follows

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

1 row selected

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP DBUPG_END 2009-08-22 225909

1 row selected

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT()

mdashmdashmdash-

776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PLSQL procedure successfully completed

Oracle Database 101 Upgrade Status Tool 22-AUG-2009 111836

ndashgt Oracle Database Catalog Views Normal successful completion

ndashgt Oracle Database Packages and Types Normal successful completion

ndashgt JServer JAVA Virtual Machine Normal successful completion

ndashgt Oracle XDK Normal successful completion

ndashgt Oracle Database Java Packages Normal successful completion

ndashgt Oracle XML Database Normal successful completion

ndashgt Oracle Workspace Manager Normal successful completion

ndashgt Oracle Data Mining Normal successful completion

ndashgt OLAP Analytic Workspace Normal successful completion

ndashgt OLAP Catalog Normal successful completion

ndashgt Oracle OLAP API Normal successful completion

ndashgt Oracle interMedia Normal successful completion

ndashgt Spatial Normal successful completion

ndashgt Oracle Text Normal successful completion

ndashgt Oracle Ultra Search Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_BGN 2009-08-22 231907

1 row selected

PLSQL procedure successfully completed

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_END 2009-08-22 232013

1 row selected

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects where status='INVALID';

COUNT()

mdashmdashmdash-

0

1 row selected

SQL> select * from v$version;

BANNER

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdash-

Oracle Database 10g Enterprise Edition Release 101020 ndash Prod

PLSQL Release 101020 ndash Production

CORE 101020 Production

TNS for 32-bit Windows Version 101020 ndash Production

NLSRTL Version 101020 ndash Production

5 rows selected

Check the Database that everything is working fine


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host by Deepak — February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database – from Metalink ID 7326241

hi

Just wanted to share this topic

How to duplicate a database without connecting to the target database, using backups taken from RMAN, on an alternate host.

Solution: Follow the steps below.

1) Export ORACLE_SID=<SID name as of production>

Create the init.ora file and give db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) Startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backuppiece of controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure the backup pieces are in the same location as they were on the production db. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged using the command:

RMAN> catalog start with '<path where backuppieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:

run {
set newname for datafile 1 to '<newLocation>/system.dbf';
set newname for datafile 2 to '<newLocation>/undotbs.dbf';
…
restore database;
switch datafile all;
}


Features introduced in the various Oracle server releases

Filed under: Features Of Various Releases of Oracle Database by Deepak — February 2, 2010

Features introduced in the various server releases
Submitted by admin on Sun, 2005-10-30 14:02

This document summarizes the differences between Oracle Server releases

Most DBArsquos and developers work with multiple versions of Oracle at any particular time This document describes the high level features introduced with each new version of the Oracle database It is intended to be used as a quick reference as to whether a feature can be implemented or if a upgrade is required

Oracle 10g Release 2 (1020) ndash September 2005

Transparent Data Encryption Async commits CONNECT ROLE can not only connect Passwords for DB Links are encrypted New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (1010)

Grid computing ndash an extension of the clustering feature (Real Application Clusters) Manageability improvements (self-tuning features)

Performance and scalability improvements Automated Storage Management (ASM) Automatic Workload Repository (AWR) Automatic Database Diagnostic Monitor (ADDM) Flashback operations available on row transaction table or database level Ability to UNDROP a table from a recycle bin Ability to rename tablespaces Ability to transport tablespaces across machine types (Eg Windows to Unix) New lsquodrop databasersquo statement New database scheduler ndash DBMS_SCHEDULER DBMS_FILE_TRANSFER Package Support for bigfile tablespaces that is up to 8 Exabytes in size Data Pump ndash faster data movement with expdp and impdp

Oracle 9i Release 2 (920)

Locally Managed SYSTEM tablespaces Oracle Streams ndash new data sharingreplication feature (can potentially replace Oracle

Advance Replication and Standby Databases) XML DB (Oracle is now a standards compliant XML database) Data segment compression (compress keys in tables ndash only when loading data) Cluster file system for Windows and Linux (raw devices are no longer required) Create logical standby databases with Data Guard Java JDK 13 used inside the database (JVM) Oracle Data Guard Enhancements (SQL Apply mode ndash logical copy of primary database

automatic failover Security Improvements ndash Default Install Accounts locked VPD on synonyms AES

Migrate Users to Directory

Oracle 9i Release 1 (901) ndash June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU) Using SMU Oracle will create itrsquos own ldquoRollback Segmentsrdquo and size them automatically without any DBA involvement

Flashback query (dbms_flashbackenable) ndash one can query data as it looked at some point in the past This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore

Use Oracle Ultra Search for searching databases file systems etc The UltraSearch crawler fetch data and hand it to Oracle Text to be indexed

Oracle Nameserver is still available but deprecate in favour of LDAP Naming (using the Oracle Internet Directory Server) A nameserver proxy is provided for backwards compatibility as pre-8i client cannot resolve names from an LDAP server

Oracle Parallel Serverrsquos (OPS) scalability was improved ndash now called Real Application Clusters (RAC) Full Cache Fusion implemented Any application can scale in a database cluster Applications doesnrsquot need to be cluster aware anymore

The Oracle Standby DB feature renamed to Oracle Data Guard New Logical Standby databases replay SQL on standby site allowing the database to be used for normal read write operations The Data Guard Broker allows single step fail-over when disaster strikes

Scrolling cursor support Oracle9i allows fetching backwards in a result set Dynamic Memory Management ndash Buffer Pools and shared pool can be resized on-the-fly

This eliminates the need to restart the database each time parameter changes were made On-line table and index reorganization VI (Virtual Interface) protocol support an alternative to TCPIP available for use with

Oracle Net (SQLNet) VI provides fast communications between components in a cluster

Build in XML Developers Kit (XDK) New data types for XML (XMLType) URIrsquos etc XML integrated with AQ

Cost Based Optimizer now also consider memory and CPU not only disk access cost as before

PLSQL programs can be natively compiled to binaries Deep data protection ndash fine grained security and auditing Put security on DB level SQL

access do not mean unrestricted access Resumable backups and statements ndash suspend statement instead of rolling back

immediately List Partitioning ndash partitioning on a list of values ETL (eXtract transformation load) Operations ndash with external tables and pipelining OLAP ndash Express functionality included in the DB Data Mining ndash Oracle Darwinrsquos features included in the DB

Oracle 8i (817)

Static HTTP server included (Apache) JVM Accelerator to improve performance of Java code Java Server Pages (JSP) engine MemStat ndash A new utility for analyzing Java Memory footprints OIS ndash Oracle Integration Server introduced PLSQL Gateway introduced for deploying PLSQL based solutions on the Web Enterprise Manager Enhancements ndash including new HTML based reporting and

Advanced Replication functionality included New Database Character Set Migration utility included

Oracle 8i (816)

PLSQL Server Pages (PSPrsquos) DBA Studio Introduced Statspack New SQL Functions (rank moving average) ALTER FREELISTS command (previously done by DROPCREATE TABLE) Checksums always on for SYSTEM tablespace allowing many possible corruptions to be

fixed before writing to disk

XML Parser for Java New PLSQL encryptdecrypt package introduced User and Schemas separated Numerous Performance Enhancements

Oracle 8i (815)

Fast Start recovery ndash Checkpoint rate auto-adjusted to meet roll forward criteria Reorganize indexesindex only tables which users accessing data ndash Online index rebuilds Log Miner introduced ndash Allows on-line or archived redo logs to be viewed via SQL OPS Cache Fusion introduced avoiding disk IO during cross-node communication Advanced Queueing improvements (security performance OO4O support User Security Improvements ndash more centralisation single enterprise user usersroles

across multiple databases Virtual private database JAVA stored procedures (Oracle Java VM) Oracle iFS Resource Management using priorities ndash resource classes Hash and Composite partitioned table types SQLLoader direct load API Copy optimizer statistics across databases to ensure same access paths across different

environments Standby Database ndash Auto shipping and application of redo logs Read Only queries on

standby database allowed Enterprise Manager v2 delivered NLS ndash Euro Symbol supported Analyze tables in parallel Temporary tables supported Net8 support for SSL HTTP HOP protocols Transportable tablespaces between databases Locally managed tablespaces ndash automatic sizing of extents elimination of tablespace

fragmentation tablespace information managed in tablespace (ie moved from data dictionary) improving tablespace reliability

Drop Column on table (Finally ) DBMS_DEBUG PLSQL package DBMS_SQL replaced by new EXECUTE

IMMEDIATE statement Progress Monitor to track long running DML DDL Functional Indexes ndash NLS case insensitive descending

Oracle 80 ndash June 1997

Object Relational database Object Types (not just date character number as in v7 SQL3 standard Call external procedures LOB gt1 per table

Partitioned Tables and Indexes exportimport individual partitions partitions in multiple tablespaces Onlineoffline backuprecover individual partitions mergebalance partitions Advanced Queuing for message handling Many performance improvements to SQLPLSQLOCI making more efficient use of

CPUMemory V7 limits extended (eg 1000 columnstable 4000 bytes VARCHAR2) Parallel DML statements Connection Pooling ( uses the physical connection for idle users and transparently re-

establishes the connection when needed) to support more concurrent users Improved ldquoSTARrdquo Query optimizer Integrated Distributed Lock Manager in Oracle PS (as opposed to Operating system DLM

in v7) Performance improvements in OPS ndash global V$ views introduced across all instances

transparent failover to a new node Data Cartridges introduced on database (eg image video context time spatial) BackupRecovery improvements ndash Tablespace point in time recovery incremental

backups parallel backuprecovery Recovery manager introduced Security Server introduced for central user administration User password expiry

password profiles allow custom password scheme Privileged database links (no need for password to be stored)

Fast Refresh for complex snapshots parallel replication PLSQL replication code moved in to Oracle kernel Replication manager introduced

Index Organized tables Deferred integrity constraint checking (deferred until end of transaction instead of end of

statement) SQLNet replaced by Net8 Reverse Key indexes Any VIEW updateable New ROWID format

Oracle 73

Partitioned Views Bitmapped Indexes Asynchronous read ahead for table scans Standby Database Deferred transaction recovery on instance startup Updatable Join Views (with restrictions) SQLDBA no longer shipped Index rebuilds db_verify introduced Context Option Spatial Data Option Tablespaces changes ndash Coalesce Temporary Permanent

Trigger compilation debug Unlimited extents on STORAGE clause Some initora parameters modifiable ndash TIMED_STATISTICS HASH Joins Antijoins Histograms Dependencies Oracle Trace Advanced Replication Object Groups PLSQL ndash UTL_FILE

Oracle 72

Resizable autoextend data files Shrink Rollback Segments manually Create table index UNRECOVERABLE Subquery in FROM clause PLSQL wrapper PLSQL Cursor variables Checksums ndash DB_BLOCK_CHECKSUM LOG_BLOCK_CHECKSUM Parallel create table Job Queues ndash DBMS_JOB DBMS_SPACE DBMS Application Info Sorting Improvements ndash SORT_DIRECT_WRITES

Oracle 71

ANSIISO SQL92 Entry Level Advanced Replication ndash Symmetric Data replication Snapshot Refresh Groups Parallel Recovery Dynamic SQL ndash DBMS_SQL Parallel Query Options ndash query index creation data loading Server Manager introduced Read Only tablespaces

Oracle 70 ndash June 1992

Database Integrity Constraints (primary foreign keys check constraints default values) Stored procedures and functions procedure packages Database Triggers View compilation User defined SQL functions Role based security Multiple Redo members ndash mirrored online redo log files Resource Limits ndash Profiles

Much enhanced Auditing Enhanced Distributed database functionality ndash INSERTS UPDATESDELETES 2PC Incomplete database recovery (eg SCN) Cost based optimiser TRUNCATE tables Datatype changes (ie VARCHAR2 CHAR VARCHAR) SQLNet v2 MTS Checkpoint process Data replication ndash Snapshots

Oracle 62

Oracle Parallel Server

Oracle 6 ndash July 1988

Row-level locking On-line database backups PLSQL in the database

Oracle 51

Distributed queries

Oracle 50 ndash 1986

Supporting for the Client-Server model ndash PCrsquos can access the DB on remote host

Oracle 4 ndash 1984

Read consistency

Oracle 3 ndash 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)

Nonblocking queries (no more read locks) Re-written in the C Programming Language

Oracle 2 ndash 1979

First public release Basic SQL functionality queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Referesh

Filed under: Schema refresh by Deepak — December 15, 2009

Steps for schema refresh

Schema refresh in oracle 9i

Now we are going to refresh SH schema

Steps for schema refresh – before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege ||' to sh' from session_privs;
5. select 'grant ' || role ||' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes/1024/1024) from dba_segments where owner='SH' group by tablespace_name;

Export the lsquoshrsquo schema

exp username/password file='<location>/sh_bkp.dmp' log='<location>/sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema

imp username/password file='<location>/sh_bkp.dmp' log='<location>/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema

Take the full schema export as shown above.

Drop all the objects in the 'SH' schema

To drop all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the spooled scripts; all the objects will be dropped.

Importing the 'SH' schema

imp username/password file='/location/sh_bkp.dmp' log='/location/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS

WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema

To truncate all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the spooled scripts; all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the reference (foreign key) constraints and disable them. Spool the output of the query and run the script.

select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS

where constraint_type='R'

and r_constraint_name in (select constraint_name from all_constraints

where table_name='TABLE_NAME');

Importing the 'SH' schema

imp username/password file='/location/sh_bkp.dmp' log='/location/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> drop user SH cascade;

Importing the SH schema through Data Pump

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the REMAP_SCHEMA option.

Check the imported objects and compile the invalid objects.


JOB SCHEDULING

Filed under: JOB SCHEDULING by Deepak - Leave a comment - December 15, 2009

CRON JOB SCHEDULING - IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d /etc/cron.daily /etc/cron.hourly /etc/cron.monthly /etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file, you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0 - 7)
|     |     |     +------- Month (1 - 12)
|     |     +--------- Day of month (1 - 31)
|     +----------- Hour (0 - 23)
+------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null - meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour, on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched; if you use the following line you'll run a command at 1 AM, 2 AM, 3 AM and 4 AM:

# Use a range of hours, matching 1, 2, 3 and 4 AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4 AM
* 1,2,3,4 * * * /bin/some-hourly
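As a more database-oriented sketch, the same mechanism could drive a nightly RMAN backup; the script path, log path and 1 AM time below are example assumptions, not part of the original post:

# Run an RMAN backup script at 1:00 AM every day
0 1 * * * /home/oracle/scripts/rman_backup.sh >/home/oracle/logs/rman_backup.log 2>&1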

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp_registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a Scheduled Task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run, so edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak - 1 Comment - December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your test systems before working on Production.

2. Verify that the primary database instance is open and the standby database instance is mounted.

3. Verify that there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary and standby databases to find out:
SQL> select sequence#, applied from v$archived_log;
Perform an ALTER SYSTEM SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received use Real-time apply

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
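To confirm the new roles after the switchover, a quick check such as the following can be run on both instances (a sketch, not part of the original steps); redo apply would then be restarted on the new standby PRIM:

SQL> select name, database_role, switchover_status from v$database;
SQL> alter database recover managed standby database disconnect from session;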


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak - Leave a comment - December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature that was released with Oracle Database 11g release 1, and that provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns, for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key; Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although the ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open
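Before retrying the export, the wallet has to be opened again. A minimal sketch, using the wallet password assumed earlier (depending on the exact release the keyword is IDENTIFIED BY or AUTHENTICATED BY):

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";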

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
/ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote
TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns
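For instance, a metadata-only import of such a dump file could look like the following sketch; no wallet or encryption password is needed because no encrypted column data is touched (the dump file name is the one used in the earlier examples):

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp CONTENT=METADATA_ONLY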

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL, specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data.

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak - Leave a comment - December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides regular operating system command-line mode there is now a very powerful interactive Command-line mode which allows the user to monitor and control Data Pump Export and Import operations Changing from Original ExportImport to Oracle Data Pump Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file; a small sketch follows below.
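As an illustration of the metadata filtering, an export that skips optimizer statistics and one named table might look like the following sketch (the schema and the table name EMP_AUDIT are example assumptions; on some shells the quotes need escaping, so putting the EXCLUDE clauses in a parameter file is often easier):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp SCHEMAS=scott
EXCLUDE=STATISTICS EXCLUDE=TABLE:"= 'EMP_AUDIT'"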

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from the one it was originally exported from. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
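A small sketch of the interactive mode, assuming the export was started with JOB_NAME=hr as in the parallelism example above (the commands shown are standard interactive-mode commands):

> expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE

> expdp username/password ATTACH=hr
Export> START_JOB
Export> EXIT_CLIENT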

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, such as from data marts to a data warehouse, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database. A minimal sketch follows below.
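Assuming a database link named remote_db that points at the source instance (the link name, credentials and TNS alias are example assumptions), a network-mode import could look like this:

SQL> CREATE DATABASE LINK remote_db CONNECT TO scott IDENTIFIED BY tiger USING 'SOURCEDB';

> impdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=remote_db SCHEMAS=scott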

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak - Leave a comment - December 10, 2009

Clone database using RMAN

Target DB: test

Clone DB: clone

In the target database:

1. Take a full backup using RMAN

SQL> archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=17 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=1 stamp=674240935
input archive log thread=1 sequence=15 recid=2 stamp=674240997
input archive log thread=1 sequence=16 recid=3 stamp=674242208
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 ORACLE\ORADATA\TEST\TOOLS01.DBF
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=17 recid=4 stamp=674242270
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE
Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In the clone database:

1. Create the service and the password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters (a concrete sketch follows below):

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
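For example, if the target datafiles live under C:\oracle\oradata\test and the clone's will live under C:\oracle\oradata\clone (both paths are assumptions for illustration only), the entries might look like:

db_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')
log_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')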

3. Start the listener using the lsnrctl command and then start up the clone DB in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running...

SQL> select name from v$database;
select name from v$database
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using RESETLOGS.

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace (a sketch follows after the verification queries below).

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550
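A minimal sketch of step 10, assuming the tempfile is placed under the clone's oradata directory (path and size are example assumptions):

SQL> create temporary tablespace temp1 tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 500M autoextend on;
SQL> alter database default temporary tablespace temp1;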


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak - Leave a comment - December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use the Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server, and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases, and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first before working on the Production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following commands to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups.
My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY.
If your PRIMARY database is not already in archive log mode, enable archive log mode:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters.
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database.
- On Windows:
SQL> create pfile='...\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '...'.)

- On UNIX:
SQL> create pfile='.../dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '...'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile, and restart the PRIMARY database using the new spfile.
Data Guard must use an SPFILE. Create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='...\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='...\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace '...'.)

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='.../dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='.../dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace '...'.)

III. On the STANDBY Database Side

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> alter database open;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4 On STANDBY server create all required directories for dump and archived log destinationCreate directories adump bdump cdump udump and archived log destinations for the STANDBY database

5 Copy the STANDBY control file lsquoSTANDBYctlrsquo from PRIMARY to STANDBY destinations

6 Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBYoraOn Windows copy it to database folder and on UNIX copy it to dbs directory And then rename the password file

7 For Windows create a Windows-based service (optional)$oradim ndashNEW ndashSID STANDBY ndashSTARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On PRIMARY system use Oracle Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop$lsnrctl start

2) On the STANDBY server, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9 Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
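For reference, a minimal tnsnames.ora sketch for the two service names might look like the following (host names, port and service names are placeholders and must match your own listener configuration):

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )
STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )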

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID
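For example (the values are placeholders for your own installation; on Windows use set instead of export):

$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STANDBY

On Windows:
C:\> set ORACLE_SID=STANDBY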

11 Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
- Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
- Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;
(Note: replace <ORACLE_HOME> with your Oracle home path.)

12 Start Redo Apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13 Verify the STANDBY database is performing properly.
1) On the STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On the PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On the STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14 If you want the redo data to be applied as it is received without waiting for the current STANDBY redo log file to be archived enable the real-time apply

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I schedule a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on the PRIMARY.

For the STANDBY database I run RMAN to back up and delete the archived logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server I run the following once a month:
RMAN> delete backupset;

3 Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2 step 2 to update/recreate the password file for the STANDBY database.
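A minimal sketch of recreating the standby password file with orapwd (the file name and location shown are the Windows convention assumed here; on UNIX the file would be $ORACLE_HOME/dbs/orapwSTANDBY):

$ orapwd file=E:\oracle\product\10.2.0\db_1\database\pwdSTANDBY.ora password=<sys_password>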


Step 3

Install ORACLE 10g Software in different Home

Starting the DB with 10g instance and upgradation Process

SQL> startup pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649' nomount

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649';

File created

SQLgt shut immediate

ORA-01507 database not mounted

ORACLE instance shut down

SQLgt startup upgrade

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

ORA-01990 error opening password file (create password file)

SQL> conn / as sysdba

Connected

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace creation script shown below.)

create tablespace SYSAUX datafile 'sysaux01.dbf'

size 70M reuse

extent management local

segment space management auto

online

Tablespace created

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database server version is not correct for this script

DOCgt Shutdown ABORT and use a different script or a different server

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database has not been opened for UPGRADE

DOCgt

DOCgt Perform a ldquoSHUTDOWN ABORTrdquo and

DOCgt restart using UPGRADE

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statements will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the SYSAUX tablespace does not exist or is not

DOCgt ONLINE for READ WRITE PERMANENT EXTENT MANAGEMENT LOCAL and

DOCgt SEGMENT SPACE MANAGEMENT AUTO

DOCgt

DOCgt The SYSAUX tablespace is used in 101 to consolidate data from

DOCgt a number of tablespaces that were separate in prior releases

DOCgt Consult the Oracle Database Upgrade Guide for sizing estimates

DOCgt

DOCgt Create the SYSAUX tablespace for example

DOCgt

DOCgt create tablespace SYSAUX datafile lsquosysaux01dbfrsquo

DOCgt size 70M reuse

DOCgt extent management local

DOCgt segment space management auto

DOCgt online

DOCgt

DOCgt Then rerun the u0902000sql script

DOCgt

DOCgt

DOCgt

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered

Session altered

The script will run for a time that depends on the size of the database...
All packages, scripts and synonyms will be upgraded.
At the end it will show messages like the following:

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

1 row selected

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP DBUPG_END 2009-08-22 225909

1 row selected

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------
776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PLSQL procedure successfully completed

Oracle Database 101 Upgrade Status Tool 22-AUG-2009 111836

--> Oracle Database Catalog Views   Normal successful completion
--> Oracle Database Packages and Types   Normal successful completion
--> JServer JAVA Virtual Machine   Normal successful completion
--> Oracle XDK   Normal successful completion
--> Oracle Database Java Packages   Normal successful completion
--> Oracle XML Database   Normal successful completion
--> Oracle Workspace Manager   Normal successful completion
--> Oracle Data Mining   Normal successful completion
--> OLAP Analytic Workspace   Normal successful completion
--> OLAP Catalog   Normal successful completion
--> Oracle OLAP API   Normal successful completion
--> Oracle interMedia   Normal successful completion
--> Spatial   Normal successful completion
--> Oracle Text   Normal successful completion
--> Oracle Ultra Search   Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_BGN 2009-08-22 231907

1 row selected

PLSQL procedure successfully completed

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_END 2009-08-22 232013

1 row selected

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------
0

1 row selected

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod
PL/SQL Release 10.1.0.2.0 - Production
CORE 10.1.0.2.0 Production
TNS for 32-bit Windows: Version 10.1.0.2.0 - Production
NLSRTL Version 10.1.0.2.0 - Production

5 rows selected

Check the database to make sure everything is working fine, for example:
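A few quick post-upgrade sanity checks might look like this sketch (dba_registry and the invalid-object count are generic checks, not specific to this upgrade):

SQL> select name, open_mode from v$database;
SQL> select comp_name, status, version from dba_registry;
SQL> select count(*) from dba_objects where status='INVALID';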

Comment

Duplicate Database With RMAN Without Connecting To Target Database

Filed under Duplicate database without connecting to target database using backups taken from RMAN on alternate host by Deepak - 3 Comments, February 24 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink ID 732624.1

hi

Just wanted to share this topic

How to duplicate a database without connecting to the target database, using backups taken with RMAN, on an alternate host.
Solution: follow the steps below.
1) Export ORACLE_SID=<SID name as on production>

Create an init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) Startup nomount pfile=<path of init.ora>
3) Connect to RMAN and issue the command:
RMAN> restore controlfile from '<backup piece of controlfile which you took on production>';

controlfile should be restored

4) Issue "alter database mount". Make sure that the backup pieces are in the same location as they were on the production DB. If you don't have the same location, make RMAN aware of the changed location using the "catalog" command.

RMAN> catalog backuppiece '<piece name and path>';
If there are more backup pieces, they can be cataloged using the command:
RMAN> catalog start with '<path where backup pieces are stored>';
5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:
run {
set newname for datafile 1 to '<newLocation>/system.dbf';
set newname for datafile 2 to '<newLocation>/undotbs.dbf';
...
restore database;
switch datafile all;
}
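The note stops at switching the datafiles; to actually finish the clone you would normally still recover the database and open it with RESETLOGS, roughly as in this sketch (an UNTIL clause matching your last available archived log is usually needed before the recover):

RMAN> recover database;
RMAN> alter database open resetlogs;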

Comment

Features introduced in the various Oracle server releases

Filed under Features Of Various release of Oracle Database by Deepak - Leave a comment, February 2 2010

Features introduced in the various server releasesSubmitted by admin on Sun 2005-10-30 1402

This document summarizes the differences between Oracle Server releases

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) - September 2005

Transparent Data Encryption. Async commits. The CONNECT role can now only connect. Passwords for DB links are encrypted. New asmcmd utility for managing ASM storage.

Oracle 10g Release 1 (10.1.0)

Grid computing ndash an extension of the clustering feature (Real Application Clusters) Manageability improvements (self-tuning features)

Performance and scalability improvements. Automated Storage Management (ASM). Automatic Workload Repository (AWR). Automatic Database Diagnostic Monitor (ADDM). Flashback operations available at row, transaction, table or database level. Ability to UNDROP a table from a recycle bin. Ability to rename tablespaces. Ability to transport tablespaces across machine types (e.g. Windows to Unix). New 'drop database' statement. New database scheduler - DBMS_SCHEDULER. DBMS_FILE_TRANSFER package. Support for bigfile tablespaces of up to 8 exabytes in size. Data Pump - faster data movement with expdp and impdp.

Oracle 9i Release 2 (9.2.0)

Locally Managed SYSTEM tablespaces Oracle Streams ndash new data sharingreplication feature (can potentially replace Oracle

Advance Replication and Standby Databases) XML DB (Oracle is now a standards compliant XML database) Data segment compression (compress keys in tables ndash only when loading data) Cluster file system for Windows and Linux (raw devices are no longer required) Create logical standby databases with Data Guard Java JDK 13 used inside the database (JVM) Oracle Data Guard Enhancements (SQL Apply mode ndash logical copy of primary database

automatic failover Security Improvements ndash Default Install Accounts locked VPD on synonyms AES

Migrate Users to Directory

Oracle 9i Release 1 (9.0.1) - June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU) Using SMU Oracle will create itrsquos own ldquoRollback Segmentsrdquo and size them automatically without any DBA involvement

Flashback query (dbms_flashbackenable) ndash one can query data as it looked at some point in the past This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore

Use Oracle Ultra Search for searching databases file systems etc The UltraSearch crawler fetch data and hand it to Oracle Text to be indexed

Oracle Nameserver is still available but deprecate in favour of LDAP Naming (using the Oracle Internet Directory Server) A nameserver proxy is provided for backwards compatibility as pre-8i client cannot resolve names from an LDAP server

Oracle Parallel Serverrsquos (OPS) scalability was improved ndash now called Real Application Clusters (RAC) Full Cache Fusion implemented Any application can scale in a database cluster Applications doesnrsquot need to be cluster aware anymore

The Oracle Standby DB feature renamed to Oracle Data Guard New Logical Standby databases replay SQL on standby site allowing the database to be used for normal read write operations The Data Guard Broker allows single step fail-over when disaster strikes

Scrolling cursor support Oracle9i allows fetching backwards in a result set Dynamic Memory Management ndash Buffer Pools and shared pool can be resized on-the-fly

This eliminates the need to restart the database each time parameter changes were made On-line table and index reorganization VI (Virtual Interface) protocol support an alternative to TCPIP available for use with

Oracle Net (SQLNet) VI provides fast communications between components in a cluster

Build in XML Developers Kit (XDK) New data types for XML (XMLType) URIrsquos etc XML integrated with AQ

Cost Based Optimizer now also consider memory and CPU not only disk access cost as before

PLSQL programs can be natively compiled to binaries Deep data protection ndash fine grained security and auditing Put security on DB level SQL

access do not mean unrestricted access Resumable backups and statements ndash suspend statement instead of rolling back

immediately List Partitioning ndash partitioning on a list of values ETL (eXtract transformation load) Operations ndash with external tables and pipelining OLAP ndash Express functionality included in the DB Data Mining ndash Oracle Darwinrsquos features included in the DB

Oracle 8i (8.1.7)

Static HTTP server included (Apache) JVM Accelerator to improve performance of Java code Java Server Pages (JSP) engine MemStat ndash A new utility for analyzing Java Memory footprints OIS ndash Oracle Integration Server introduced PLSQL Gateway introduced for deploying PLSQL based solutions on the Web Enterprise Manager Enhancements ndash including new HTML based reporting and

Advanced Replication functionality included New Database Character Set Migration utility included

Oracle 8i (8.1.6)

PLSQL Server Pages (PSPrsquos) DBA Studio Introduced Statspack New SQL Functions (rank moving average) ALTER FREELISTS command (previously done by DROPCREATE TABLE) Checksums always on for SYSTEM tablespace allowing many possible corruptions to be

fixed before writing to disk

XML Parser for Java New PLSQL encryptdecrypt package introduced User and Schemas separated Numerous Performance Enhancements

Oracle 8i (8.1.5)

Fast Start recovery ndash Checkpoint rate auto-adjusted to meet roll forward criteria Reorganize indexesindex only tables which users accessing data ndash Online index rebuilds Log Miner introduced ndash Allows on-line or archived redo logs to be viewed via SQL OPS Cache Fusion introduced avoiding disk IO during cross-node communication Advanced Queueing improvements (security performance OO4O support User Security Improvements ndash more centralisation single enterprise user usersroles

across multiple databases Virtual private database JAVA stored procedures (Oracle Java VM) Oracle iFS Resource Management using priorities ndash resource classes Hash and Composite partitioned table types SQLLoader direct load API Copy optimizer statistics across databases to ensure same access paths across different

environments Standby Database ndash Auto shipping and application of redo logs Read Only queries on

standby database allowed Enterprise Manager v2 delivered NLS ndash Euro Symbol supported Analyze tables in parallel Temporary tables supported Net8 support for SSL HTTP HOP protocols Transportable tablespaces between databases Locally managed tablespaces ndash automatic sizing of extents elimination of tablespace

fragmentation tablespace information managed in tablespace (ie moved from data dictionary) improving tablespace reliability

Drop Column on table (Finally ) DBMS_DEBUG PLSQL package DBMS_SQL replaced by new EXECUTE

IMMEDIATE statement Progress Monitor to track long running DML DDL Functional Indexes ndash NLS case insensitive descending

Oracle 8.0 - June 1997

Object Relational database Object Types (not just date character number as in v7 SQL3 standard Call external procedures LOB gt1 per table

Partitioned Tables and Indexes exportimport individual partitions partitions in multiple tablespaces Onlineoffline backuprecover individual partitions mergebalance partitions Advanced Queuing for message handling Many performance improvements to SQLPLSQLOCI making more efficient use of

CPUMemory V7 limits extended (eg 1000 columnstable 4000 bytes VARCHAR2) Parallel DML statements Connection Pooling ( uses the physical connection for idle users and transparently re-

establishes the connection when needed) to support more concurrent users Improved ldquoSTARrdquo Query optimizer Integrated Distributed Lock Manager in Oracle PS (as opposed to Operating system DLM

in v7) Performance improvements in OPS ndash global V$ views introduced across all instances

transparent failover to a new node Data Cartridges introduced on database (eg image video context time spatial) BackupRecovery improvements ndash Tablespace point in time recovery incremental

backups parallel backuprecovery Recovery manager introduced Security Server introduced for central user administration User password expiry

password profiles allow custom password scheme Privileged database links (no need for password to be stored)

Fast Refresh for complex snapshots parallel replication PLSQL replication code moved in to Oracle kernel Replication manager introduced

Index Organized tables Deferred integrity constraint checking (deferred until end of transaction instead of end of

statement) SQLNet replaced by Net8 Reverse Key indexes Any VIEW updateable New ROWID format

Oracle 7.3

Partitioned Views Bitmapped Indexes Asynchronous read ahead for table scans Standby Database Deferred transaction recovery on instance startup Updatable Join Views (with restrictions) SQLDBA no longer shipped Index rebuilds db_verify introduced Context Option Spatial Data Option Tablespaces changes ndash Coalesce Temporary Permanent

Trigger compilation debug Unlimited extents on STORAGE clause Some initora parameters modifiable ndash TIMED_STATISTICS HASH Joins Antijoins Histograms Dependencies Oracle Trace Advanced Replication Object Groups PLSQL ndash UTL_FILE

Oracle 7.2

Resizable autoextend data files Shrink Rollback Segments manually Create table index UNRECOVERABLE Subquery in FROM clause PLSQL wrapper PLSQL Cursor variables Checksums ndash DB_BLOCK_CHECKSUM LOG_BLOCK_CHECKSUM Parallel create table Job Queues ndash DBMS_JOB DBMS_SPACE DBMS Application Info Sorting Improvements ndash SORT_DIRECT_WRITES

Oracle 7.1

ANSIISO SQL92 Entry Level Advanced Replication ndash Symmetric Data replication Snapshot Refresh Groups Parallel Recovery Dynamic SQL ndash DBMS_SQL Parallel Query Options ndash query index creation data loading Server Manager introduced Read Only tablespaces

Oracle 7.0 - June 1992

Database Integrity Constraints (primary foreign keys check constraints default values) Stored procedures and functions procedure packages Database Triggers View compilation User defined SQL functions Role based security Multiple Redo members ndash mirrored online redo log files Resource Limits ndash Profiles

Much enhanced Auditing Enhanced Distributed database functionality ndash INSERTS UPDATESDELETES 2PC Incomplete database recovery (eg SCN) Cost based optimiser TRUNCATE tables Datatype changes (ie VARCHAR2 CHAR VARCHAR) SQLNet v2 MTS Checkpoint process Data replication ndash Snapshots

Oracle 6.2

Oracle Parallel Server

Oracle 6 - July 1988

Row-level locking On-line database backups PLSQL in the database

Oracle 5.1

Distributed queries

Oracle 5.0 - 1986

Supporting for the Client-Server model ndash PCrsquos can access the DB on remote host

Oracle 4 - 1984

Read consistency

Oracle 3 - 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)

Nonblocking queries (no more read locks) Re-written in the C Programming Language

Oracle 2 - 1979

First public release Basic SQL functionality queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases
Comment

Schema Referesh

Filed under Schema refresh by Deepak - 1 Comment, December 15 2009

Steps for schema refresh

Schema refresh in oracle 9i

Now we are going to refresh SH schema

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a SQL file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant '|| privilege ||' to sh' from session_privs;
5. select 'grant '|| role ||' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH'
   group by tablespace_name;
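A consolidated sketch of the spool steps above, written against the DBA views so it can be run from a DBA session rather than from SH itself (file names are placeholders):

SQL> set head off feedback off pages 0
SQL> spool sh_privs.sql
SQL> select 'grant '||privilege||' to SH;' from dba_sys_privs where grantee='SH';
SQL> select 'grant '||granted_role||' to SH;' from dba_role_privs where grantee='SH';
SQL> spool off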

Export the 'SH' schema:

exp username/password file=/location/sh_bkp.dmp log=/location/sh_exp.log owner=SH direct=y

Steps to drop and recreate the schema

Drop the SH schema

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

Schema refresh by dropping objects and truncating objects

Export the lsquoshrsquo schema

Take the schema full export as show above

Drop all the objects in lsquoSHrsquo schema

To drop the all the objects in the Schema

Connect the schema

Spool the output

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off
SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the spooled scripts and all the objects will be dropped.

Importing the 'SH' schema

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

To enable constraints use the query below

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in lsquoSHrsquo schema

To truncate the all the objects in the Schema

Connect the schema

Spool the output

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off
SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the script all the objects will be truncated

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the foreign key (reference) constraints and disable them. Spool the output of the query and run the script.

select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');
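A sketch for turning that query into runnable DISABLE statements (spool the output and execute it; TABLE_NAME is a placeholder for the parent table):

SQL> set head off
SQL> spool disable_ref_constraints.sql
SQL> select 'alter table '||table_name||' disable constraint '||constraint_name||';' from all_constraints where constraint_type='R' and r_constraint_name in (select constraint_name from all_constraints where table_name='TABLE_NAME');
SQL> spool off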

Importing the 'SH' schema

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

Schema refresh in oracle 10g

Here we can use Datapump

Exporting the SH schema through Datapump

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=SH

Dropping the lsquoSHrsquo user

Query the default tablespace and verify the space in the tablespace and drop the user

SQL> drop user SH cascade;

Importing the SH schema through datapump

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=SH

If you are importing into a different schema, use the remap_schema option.
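For example, to bring the SH objects into a different schema such as SH_TEST (SH_TEST is a hypothetical target user used here only for illustration):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=SH remap_schema=SH:SH_TEST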

Check for the imported objects and compile the invalid objects

Comment

JOB SCHEDULING

Filed under JOB SCHEDULING by Deepak - Leave a comment, December 15 2009

CRON JOB SCHEDULING - IN UNIX

To run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are setup when the package is installed via the creation of some special directories

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency its name suggests.

For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner which people use cron is via the crontab command This allows you to view or edit your crontab file which is a per-user file containing entries describing commands to execute and the time(s) to execute them

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand Each line is a collection of six fields separated by spaces

The fields are

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0 - 7)
|     |     |     +----------- Month (1 - 12)
|     |     +----------------- Day of month (1 - 31)
|     +----------------------- Hour (0 - 23)
+----------------------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary) When you save the file and quit your editor it will be installed into the system unless it is found to contain errors

If you wish to change the editor used to edit the file set the EDITOR environmental variable like this

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following

0 * * * * /bin/ls

When yoursquove saved the file and quit your editor you will see a message such as

crontab installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email if you wish to stop this then you should cause it to be redirected as follows

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null - meaning you won't see it.

Now wersquoll finish with some more examples

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip If you want to run something very regularly you can use an alternate syntax Instead of using only single numbers you can use ranges or sets

A range of numbers indicates that every item in that range will be matched if you use the following line yoursquoll run a command at 1AM 2AM 3AM and 4AM

# Use a range of hours, matching 1, 2, 3 and 4 AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4 AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry_database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a Scheduled Task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run, so edit the scheduled task and enter the new password.
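As an alternative to the Scheduled Tasks wizard, the same job can be created from the command line with schtasks; a sketch (task name, time, path and account are placeholders):

C:\> schtasks /create /tn "cold_bkp" /tr "D:\scripts\cold_bkp.bat" /sc daily /st 23:00 /ru administrator /rp password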

Comment

Steps to switchover standby to primary

Filed under Switchover primary to standby in 10g by Deepak - 1 Comment, December 15 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I Before Switchover

1 As I always recommend test the Switchover first on your testing systems before working on Production

2 Verify the primary database instance is open and the standby database instance is mounted

3 Verify there are no active users connected to the databases

4 Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary and the standby database to find out:
SQL> select sequence#, applied from v$archived_log;
Perform SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.
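As an additional check before starting, the switchover status can be queried on both databases (a quick sketch; on the primary you would expect TO STANDBY or SESSIONS ACTIVE, and on the standby TO PRIMARY only after the primary has been converted):

SQL> select switchover_status from v$database;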

II Quick Switchover Steps

1 Initiate the switchover on the primary database PRIM:
SQL> connect PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2 After step 1 finishes, switch the original physical standby DB STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3 Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4 After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can simply open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role

5 On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;

Comment

Encryption with Oracle Data Pump

Filed under Encryption with Oracle Datapump by Deepak - Leave a comment, December 14 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in todayrsquos business world present manifold challenges As incidences of data theft increase protecting data privacy continues to be of paramount importance Now a de facto solution in meeting regulatory compliances data encryption is one of a number of security tools in use The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works Please note that this paper does not apply to the Original ExportImport utilities For information regarding the Oracle Data Pump Encrypted Dump File feature that that was released with Oracle Database 11g release 1 and that provides the ability to encrypt all exported data as it is written to the export dump file set refer to the Oracle Data Pump Encrypted Dump File Support whitepaper

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT key word Once a table column is marked with this keyword encryption and decryption are performed automatically without the need for any further user or application intervention The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column When an authorized user inserts new data into such a column TDE column encryption encrypts this data prior to storing it in the database Conversely when the user selects the column from the database TDE column encryption transparently decrypts this data back to its original clear text

format Column data encrypted using TDE remains protected while it resides in the database However the protection offered by TDE does not extend beyond the database and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it using a dump file encryption key derived from a userprovided password before it is written to the export dump file set Column data encrypted using Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set Whenever Oracle Data Pump unloads or loads tables containing encrypted columns it uses the external tables mechanism instead of the direct path mechanism The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns it is first necessary to create an Oracle Encryption Wallet which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key In the following example the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the walletNext create a table with an encrypted column The password used below in the IDENTIFIED

BY clause is optional and TDE uses it to derive the tables column encryption key If the

IDENTIFIED BY clause is omitted then TDE creates the tables column encryption key based on random data

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table In the following example the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump files encryption key Oracle Data Pump re-encrypts the column data in the dump file using this dump file key When re-encrypting encrypted column data Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128)Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTERSYSTEM and CREATE TABLE statements

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQLgt ALTER SYSTEM SET WALLET CLOSE

$ expdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd

Export Release 102040 ndash Production on Monday 09 July 2009

82123

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39001 invalid argument value

ORA-39180 unable to encrypt ENCRYPTION_PASSWORD

ORA-28365 wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports as used in the previous

examples If a schema tablespace or full mode export is performed then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter

There is however one exception transportable tablespace export mode does not support

encrypted columns An attempt to perform an export using this mode when the tablespace

contains tables with encrypted columns yields the following error

$ expdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp TABLES=emp

Export Release 102040 ndash Production on Wednesday 09 July 2009

84843

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime dp directory=dpump_dir

dumpfile=emp tables=emp

Estimate in progress using BLOCKS methodhellip

Processing object type TABLE_EXPORTTABLETABLE_DATA

Total estimation using BLOCKS method 16 KB

Processing object type TABLE_EXPORTTABLETABLE

exported ldquoDPrdquordquoEMPrdquo 625 KB 3 rows

ORA-39173 Encrypted data has been stored unencrypted in dump file

set

Master table ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime successfully loadedunloaded

Dump file set for DPSYS_EXPORT_TABLE_01 is

adejkaloger_lx9oracleworkempdmp

Job ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime completed with 1 error(s) at 084857

$ expdp systempassword DIRECTORY=dpump_dir DUMPFILE=dpdmp

TRANSPORT_TABLESPACES=dp

Export Release 102040 ndash Production on Thursday 09 July 2009

85507

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime system

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123 Data Pump transportable tablespace job aborted

ORA-29341 The transportable set is not self-contained

Job ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime stopped due to fatal error

at 085525

The ORA-29341 error in the previous example is not very informative If the same transportable

tablespace export is executed using Oracle Database 11g release 1 that version does a better job

at pinpointing the problem via the information in the ORA-39929 error

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables then an ORA-26033 exception is raised when you try to import the export dump file set In the example of the DPEMP table the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed For example assume in the following example that the DPEMP table on the target system has been created exactly as it is on the source system except that the

ENCRYPT attribute has not been assigned to the SALARY column The output and resulting error messages would look as follows

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp systempassword DIRECTORY=dpump_dir dumpfile=dpdmp

TRANSPORT_TABLESPACES=dp

Export Release 111070 ndash Production on Thursday 09 July 2009

90900

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 11g Enterprise Edition Release

111070 ndash Production

With the Partitioning Data Mining and Real Application Testing

Options Starting ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime system

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123 Data Pump transportable tablespace job aborted

ORA-39187 The transportable set is not self-contained violation list

is ORA-39929 Table DPEMP in tablespace DP has encrypted columns which

are not supported

Job ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime stopped due to fatal error

at 090921

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it

into the connected database instance There are no export dump files involved in a network

mode import and therefore there is no re-encrypting of TDE column data Thus the use of the

ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the

following example

$ impdp dpdp TABLES=dpemp DIRECTORY=dpump_dir NETWORK_LINK=remote

TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import Release 102040 ndash Production on Friday 09 July 2009

110057

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39005 inconsistent arguments

ORA-39115 ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import Release 102040 ndash Production on Thursday 09 July 2009

105540

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release 102040 -

Production

With the Partitioning Data Mining and Real Application Testing options

Master table ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime successfully loadedunloaded

Starting ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime dp directory=dpump_dir

dumpfile=empdmp tables=emp encryption_password=

table_exists_action=append

Processing object type TABLE_EXPORTTABLETABLE

ORA-39152 Table ldquoDPrdquordquoEMPrdquo exists Data will be appended to existing

table but all dependent metadata will be skipped due to

table_exists_action of append

Processing object type TABLE_EXPORTTABLETABLE_DATA

ORA-31693 Table data object ldquoDPrdquordquoEMPrdquo failed to loadunload and is being

skipped due to error

ORA-02354 error in exportingimporting data

ORA-26033 column ldquoEMPrdquoSALARY encryption properties differ for source or

target table

Job ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime completed with 2 error(s) at 105548

Oracle White PaperEncryption with Oracle Data Pump

By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes

encrypted column data the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed The following are cases in which the encryption password and Oracle Wallet are not needed

A full metadata-only import A schema-mode import in which the referenced schemas do not include tables with

encrypted columns A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp) As is always the case when dealing with TDE columns the Oracle Wallet must first be open before creating the external table The following example creates an external table called DPXEMP and populates it using the data in the DPEMP table Notice that datatypes for the columns are not specified This is because they are determined by the column datatypes in the source table in the SELECT subquery

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1 The SQL engine selects the data for the table DPEMP from the database If any columns in the table are marked as encrypted as the salary column is for DPEMP then TDE decrypts the column data as part of the select operation

2 The SQL engine then inserts the data which is in clear text format into the DPXEMP table If any columns in the external table are marked as encrypted as one of its columns is then TDE encrypts this column data as part of the insert operation

3 Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file The data in an external table can be written only once when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed However the data in the external table can be selected any number of times using a simple SQL SELECT statement The steps involved in selecting data with encrypted columns from an external table are as follows

1. The SQL engine initiates a select operation, for example:

SQL> SELECT * FROM DPXEMP;

Because DPXEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, it applies only to TDE encrypted columns (if there are no such columns being exported, the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
    TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
    ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
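For contrast, a minimal sketch of encrypting the entire dump file set rather than only the TDE columns (the user, table and directory names are carried over from the example above; ENCRYPTION=ALL is the 11g keyword that requests encryption of data and metadata, and is shown here only as an illustration):

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp_all.dmp
    TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
    ENCRYPTION=ALL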


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak - December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

httpwwworaclecomtechnologyobeobe10gdbstoragedatapumpdatapumphtm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode, which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
  TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
  DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
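As an illustration (a hedged sketch; the hr schema and the specific filters are placeholders, not from the original text), one job could export only tables and their indexes, and a later import from the same dump file could exclude the indexes:

> expdp username/password SCHEMAS=hr INCLUDE=TABLE INCLUDE=INDEX
  DIRECTORY=dpump_dir1 DUMPFILE=hr_tabs.dmp

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_tabs.dmp
  EXCLUDE=INDEX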

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1
  DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
  DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from the one it was originally exported from. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
  DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

See the status of the job. All of the information needed to monitor the job's execution is available.

Add more dump files if there is insufficient disk space for an export file.

Change the default size of the dump files.

Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

Attach to a job from a remote site (such as from home) to monitor status.
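A minimal sketch of attaching to a running job from a second session (the job name hr matches the PARALLEL example above; STATUS, PARALLEL and STOP_JOB are standard interactive-mode commands):

> expdp username/password ATTACH=hr

Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE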

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
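As a hedged illustration (the database link name source_db and the other names are placeholders, not from the original text), a network-mode export pulls the data over an existing database link using the NETWORK_LINK parameter:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=net_exp.dmp
  NETWORK_LINK=source_db TABLES=hr.employees

A network-mode import (impdp with NETWORK_LINK and no dump file) loads the target directly from the source database without writing a dump file at all.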

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp
  SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak - December 10, 2009

Clone database using Rman

Target db: test

Clone db: clone

In the target database:

1. Take a full backup using RMAN.

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination coracleora92RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQLgt ho rman

Recovery Manager Release 92010 ndash Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'
log_file_name_convert='<target db oradata path>','<clone db oradata path>'
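A minimal init.ora sketch for the clone, assuming the target datafiles live under C:\oracle\oradata\test and the clone will use C:\oracle\oradata\clone (the paths and values are illustrative only, not from the original post):

db_name=clone
control_files='C:\oracle\oradata\clone\control01.ctl'
db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
remote_login_passwordfile=EXCLUSIVE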

3. Start the listener using the lsnrctl command and then start up the clone db in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4. Connect RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQLgt ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running...

SQLgt select name from v$database

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQLgt ho rman

SQLgt alter database mount

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8. It will run for a while. Then exit from RMAN and open the database using resetlogs:

SQL> alter database open resetlogs;

Database altered

9. Check the DBID.

10. Create a temporary tablespace.

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550
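Step 10 above asks for a temporary tablespace on the clone; a minimal sketch (the file path and sizes are placeholders, not from the original post):

SQL> create temporary tablespace temp01
     tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100M
     extent management local uniform size 1M;

SQL> alter database default temporary tablespace temp01;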


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak - December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips for the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use the Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases, and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first before working on the production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:

SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.

1) To check if a password file already exists, run the following command:

SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:

- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.

1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:

SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:

SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:

SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set the PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database.

- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;

- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;

(Note: specify your Oracle home path to replace '<ORACLE_HOME>'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.

- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup

(Note: specify your Oracle home path to replace '<ORACLE_HOME>'.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.

On the PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):

1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:

SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> alter database open;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX, under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump and udump directories and the archived log destination for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY control file destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory; then rename the password file.

7. For Windows, create a Windows-based service (optional):

$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:

$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:

$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.

1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:

$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:

$ tnsping PRIMARY
$ tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up nomount the STANDBY database and generate an spfile.

- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

(Note: specify your Oracle home path to replace '<ORACLE_HOME>'.)

12. Start Redo Apply.

1) On the STANDBY database, to start redo apply:

SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:

SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.

1) On STANDBY, perform a query:

SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:

SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:

SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:

SQL> alter database recover managed standby database using current logfile disconnect;
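A quick way to confirm that the managed recovery process is running and applying redo is to query v$managed_standby on the STANDBY (a hedged sketch; the column list is just a convenient subset):

SQL> select process, status, thread#, sequence#, block#
     from v$managed_standby;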

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:

$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:

RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly, otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


SQL> conn / as sysdba

Connected

SQL> @"C:\Documents and Settings\Administrator\Desktop\syssql.txt"

(syssql.txt contains the SYSAUX tablespace creation script, as shown below)

create tablespace SYSAUX datafile 'sysaux01.dbf'

size 70M reuse

extent management local

segment space management auto

online

Tablespace created

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database server version is not correct for this script

DOCgt Shutdown ABORT and use a different script or a different server

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statement will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the database has not been opened for UPGRADE

DOCgt

DOCgt Perform a ldquoSHUTDOWN ABORTrdquo and

DOCgt restart using UPGRADE

DOCgt

DOCgt

DOCgt

no rows selected

DOCgt

DOCgt

DOCgt The following statements will cause an ldquoORA-01722 invalid numberrdquo

DOCgt error if the SYSAUX tablespace does not exist or is not

DOCgt ONLINE for READ WRITE PERMANENT EXTENT MANAGEMENT LOCAL and

DOCgt SEGMENT SPACE MANAGEMENT AUTO

DOCgt

DOCgt The SYSAUX tablespace is used in 101 to consolidate data from

DOCgt a number of tablespaces that were separate in prior releases

DOCgt Consult the Oracle Database Upgrade Guide for sizing estimates

DOCgt

DOCgt Create the SYSAUX tablespace for example

DOCgt

DOCgt create tablespace SYSAUX datafile lsquosysaux01dbfrsquo

DOCgt size 70M reuse

DOCgt extent management local

DOCgt segment space management auto

DOCgt online

DOCgt

DOCgt Then rerun the u0902000sql script

DOCgt

DOCgt

DOCgt

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered

Session altered

The script will run according to the size of the database...

All packages, scripts and synonyms will be upgraded.

At the end it will show a message as follows:

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

1 row selected

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP DBUPG_END 2009-08-22 225909

1 row selected

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)

mdashmdashmdash-

776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PLSQL procedure successfully completed

Oracle Database 101 Upgrade Status Tool 22-AUG-2009 111836

ndashgt Oracle Database Catalog Views Normal successful completion

ndashgt Oracle Database Packages and Types Normal successful completion

ndashgt JServer JAVA Virtual Machine Normal successful completion

ndashgt Oracle XDK Normal successful completion

ndashgt Oracle Database Java Packages Normal successful completion

ndashgt Oracle XML Database Normal successful completion

ndashgt Oracle Workspace Manager Normal successful completion

ndashgt Oracle Data Mining Normal successful completion

ndashgt OLAP Analytic Workspace Normal successful completion

ndashgt OLAP Catalog Normal successful completion

ndashgt Oracle OLAP API Normal successful completion

ndashgt Oracle interMedia Normal successful completion

ndashgt Spatial Normal successful completion

ndashgt Oracle Text Normal successful completion

ndashgt Oracle Ultra Search Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_BGN 2009-08-22 231907

1 row selected

PLSQL procedure successfully completed

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_END 2009-08-22 232013

1 row selected

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)

mdashmdashmdash-

0

1 row selected

SQL> select * from v$version;

BANNER

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdash-

Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod

PL/SQL Release 10.1.0.2.0 - Production

CORE 10.1.0.2.0 Production

TNS for 32-bit Windows: Version 10.1.0.2.0 - Production

NLSRTL Version 10.1.0.2.0 - Production

5 rows selected

Check the database to confirm that everything is working fine.


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host by Deepak - February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink note 732624.1

hi

Just wanted to share this topic

How to duplicate a database without connecting to the target database, using backups taken from RMAN, on an alternate host.

Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as on production>

Create an init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) Startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backup piece of the controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure that the backup pieces are in the same location they were in on the production db. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged using the command:

RMAN> catalog start with '<path where backup pieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:

run {
  set newname for datafile 1 to '<newLocation>/system.dbf';
  set newname for datafile 2 to '<newLocation>/undotbs.dbf';
  ...
  restore database;
  switch datafile all;
}
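Putting the steps together, a minimal end-to-end sketch (the SID, pfile path and backup locations are placeholders; the new instance is treated as the RMAN target because the original database is never contacted):

$ export ORACLE_SID=<SID name as on production>
$ rman target /

RMAN> startup nomount pfile='<path of init.ora>';
RMAN> restore controlfile from '<backup piece of the controlfile>';
RMAN> alter database mount;
RMAN> catalog start with '<path where backup pieces are stored>';
RMAN> restore database;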


Features introduced in the various Oracle server releases

Filed under: Features Of Various Releases of Oracle Database by Deepak - February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) - September 2005

Transparent Data Encryption. Async commits. The CONNECT role can now only connect. Passwords for DB links are encrypted. New asmcmd utility for managing ASM storage.

Oracle 10g Release 1 (10.1.0)

Grid computing - an extension of the clustering feature (Real Application Clusters). Manageability improvements (self-tuning features). Performance and scalability improvements. Automated Storage Management (ASM). Automatic Workload Repository (AWR). Automatic Database Diagnostic Monitor (ADDM). Flashback operations available on row, transaction, table or database level. Ability to UNDROP a table from a recycle bin. Ability to rename tablespaces. Ability to transport tablespaces across machine types (e.g. Windows to Unix). New 'drop database' statement. New database scheduler - DBMS_SCHEDULER. DBMS_FILE_TRANSFER package. Support for bigfile tablespaces of up to 8 Exabytes in size. Data Pump - faster data movement with expdp and impdp.

Oracle 9i Release 2 (9.2.0)

Locally Managed SYSTEM tablespaces Oracle Streams ndash new data sharingreplication feature (can potentially replace Oracle

Advance Replication and Standby Databases) XML DB (Oracle is now a standards compliant XML database) Data segment compression (compress keys in tables ndash only when loading data) Cluster file system for Windows and Linux (raw devices are no longer required) Create logical standby databases with Data Guard Java JDK 13 used inside the database (JVM) Oracle Data Guard Enhancements (SQL Apply mode ndash logical copy of primary database

automatic failover Security Improvements ndash Default Install Accounts locked VPD on synonyms AES

Migrate Users to Directory

Oracle 9i Release 1 (9.0.1) - June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "Rollback Segments" and size them automatically without any DBA involvement.

Flashback query (dbms_flashback.enable) - one can query data as it looked at some point in the past. This feature allows users to correct wrongly committed transactions without contacting the DBA to do a database restore.

Use Oracle Ultra Search for searching databases file systems etc The UltraSearch crawler fetch data and hand it to Oracle Text to be indexed

Oracle Nameserver is still available but deprecate in favour of LDAP Naming (using the Oracle Internet Directory Server) A nameserver proxy is provided for backwards compatibility as pre-8i client cannot resolve names from an LDAP server

Oracle Parallel Server's (OPS) scalability was improved - now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster. Applications don't need to be cluster aware anymore.

The Oracle Standby DB feature renamed to Oracle Data Guard New Logical Standby databases replay SQL on standby site allowing the database to be used for normal read write operations The Data Guard Broker allows single step fail-over when disaster strikes

Scrolling cursor support Oracle9i allows fetching backwards in a result set Dynamic Memory Management ndash Buffer Pools and shared pool can be resized on-the-fly

This eliminates the need to restart the database each time parameter changes were made On-line table and index reorganization VI (Virtual Interface) protocol support an alternative to TCPIP available for use with

Oracle Net (SQLNet) VI provides fast communications between components in a cluster

Build in XML Developers Kit (XDK) New data types for XML (XMLType) URIrsquos etc XML integrated with AQ

Cost Based Optimizer now also consider memory and CPU not only disk access cost as before

PLSQL programs can be natively compiled to binaries Deep data protection ndash fine grained security and auditing Put security on DB level SQL

access do not mean unrestricted access Resumable backups and statements ndash suspend statement instead of rolling back

immediately List Partitioning ndash partitioning on a list of values ETL (eXtract transformation load) Operations ndash with external tables and pipelining OLAP ndash Express functionality included in the DB Data Mining ndash Oracle Darwinrsquos features included in the DB

Oracle 8i (817)

Static HTTP server included (Apache) JVM Accelerator to improve performance of Java code Java Server Pages (JSP) engine MemStat ndash A new utility for analyzing Java Memory footprints OIS ndash Oracle Integration Server introduced PLSQL Gateway introduced for deploying PLSQL based solutions on the Web Enterprise Manager Enhancements ndash including new HTML based reporting and

Advanced Replication functionality included New Database Character Set Migration utility included

Oracle 8i (816)

PLSQL Server Pages (PSPrsquos) DBA Studio Introduced Statspack New SQL Functions (rank moving average) ALTER FREELISTS command (previously done by DROPCREATE TABLE) Checksums always on for SYSTEM tablespace allowing many possible corruptions to be

fixed before writing to disk

XML Parser for Java New PLSQL encryptdecrypt package introduced User and Schemas separated Numerous Performance Enhancements

Oracle 8i (815)

Fast Start recovery ndash Checkpoint rate auto-adjusted to meet roll forward criteria Reorganize indexesindex only tables which users accessing data ndash Online index rebuilds Log Miner introduced ndash Allows on-line or archived redo logs to be viewed via SQL OPS Cache Fusion introduced avoiding disk IO during cross-node communication Advanced Queueing improvements (security performance OO4O support User Security Improvements ndash more centralisation single enterprise user usersroles

across multiple databases Virtual private database JAVA stored procedures (Oracle Java VM) Oracle iFS Resource Management using priorities ndash resource classes Hash and Composite partitioned table types SQLLoader direct load API Copy optimizer statistics across databases to ensure same access paths across different

environments Standby Database ndash Auto shipping and application of redo logs Read Only queries on

standby database allowed Enterprise Manager v2 delivered NLS ndash Euro Symbol supported Analyze tables in parallel Temporary tables supported Net8 support for SSL HTTP HOP protocols Transportable tablespaces between databases Locally managed tablespaces ndash automatic sizing of extents elimination of tablespace

fragmentation tablespace information managed in tablespace (ie moved from data dictionary) improving tablespace reliability

Drop Column on table (Finally ) DBMS_DEBUG PLSQL package DBMS_SQL replaced by new EXECUTE

IMMEDIATE statement Progress Monitor to track long running DML DDL Functional Indexes ndash NLS case insensitive descending

Oracle 8.0 - June 1997

Object Relational database Object Types (not just date character number as in v7 SQL3 standard Call external procedures LOB gt1 per table

Partitioned Tables and Indexes exportimport individual partitions partitions in multiple tablespaces Onlineoffline backuprecover individual partitions mergebalance partitions Advanced Queuing for message handling Many performance improvements to SQLPLSQLOCI making more efficient use of

CPUMemory V7 limits extended (eg 1000 columnstable 4000 bytes VARCHAR2) Parallel DML statements Connection Pooling ( uses the physical connection for idle users and transparently re-

establishes the connection when needed) to support more concurrent users Improved ldquoSTARrdquo Query optimizer Integrated Distributed Lock Manager in Oracle PS (as opposed to Operating system DLM

in v7) Performance improvements in OPS ndash global V$ views introduced across all instances

transparent failover to a new node Data Cartridges introduced on database (eg image video context time spatial) BackupRecovery improvements ndash Tablespace point in time recovery incremental

backups parallel backuprecovery Recovery manager introduced Security Server introduced for central user administration User password expiry

password profiles allow custom password scheme Privileged database links (no need for password to be stored)

Fast Refresh for complex snapshots parallel replication PLSQL replication code moved in to Oracle kernel Replication manager introduced

Index Organized tables Deferred integrity constraint checking (deferred until end of transaction instead of end of

statement) SQLNet replaced by Net8 Reverse Key indexes Any VIEW updateable New ROWID format

Oracle 73

Partitioned Views Bitmapped Indexes Asynchronous read ahead for table scans Standby Database Deferred transaction recovery on instance startup Updatable Join Views (with restrictions) SQLDBA no longer shipped Index rebuilds db_verify introduced Context Option Spatial Data Option Tablespaces changes ndash Coalesce Temporary Permanent

Trigger compilation debug Unlimited extents on STORAGE clause Some initora parameters modifiable ndash TIMED_STATISTICS HASH Joins Antijoins Histograms Dependencies Oracle Trace Advanced Replication Object Groups PLSQL ndash UTL_FILE

Oracle 72

Resizable autoextend data files Shrink Rollback Segments manually Create table index UNRECOVERABLE Subquery in FROM clause PLSQL wrapper PLSQL Cursor variables Checksums ndash DB_BLOCK_CHECKSUM LOG_BLOCK_CHECKSUM Parallel create table Job Queues ndash DBMS_JOB DBMS_SPACE DBMS Application Info Sorting Improvements ndash SORT_DIRECT_WRITES

Oracle 71

ANSIISO SQL92 Entry Level Advanced Replication ndash Symmetric Data replication Snapshot Refresh Groups Parallel Recovery Dynamic SQL ndash DBMS_SQL Parallel Query Options ndash query index creation data loading Server Manager introduced Read Only tablespaces

Oracle 7.0 - June 1992

Database Integrity Constraints (primary foreign keys check constraints default values) Stored procedures and functions procedure packages Database Triggers View compilation User defined SQL functions Role based security Multiple Redo members ndash mirrored online redo log files Resource Limits ndash Profiles

Much enhanced Auditing Enhanced Distributed database functionality ndash INSERTS UPDATESDELETES 2PC Incomplete database recovery (eg SCN) Cost based optimiser TRUNCATE tables Datatype changes (ie VARCHAR2 CHAR VARCHAR) SQLNet v2 MTS Checkpoint process Data replication ndash Snapshots

Oracle 62

Oracle Parallel Server

Oracle 6 - July 1988

Row-level locking On-line database backups PLSQL in the database

Oracle 51

Distributed queries

Oracle 5.0 - 1986

Supporting for the Client-Server model ndash PCrsquos can access the DB on remote host

Oracle 4 - 1984

Read consistency

Oracle 3 - 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)

Nonblocking queries (no more read locks) Re-written in the C Programming Language

Oracle 2 - 1979

First public release Basic SQL functionality queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Referesh

Filed under: Schema refresh by Deepak - December 15, 2009

Steps for schema refresh

Schema refresh in oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total number of objects from the above query.

3. Write dynamic queries as below.

4. select 'grant ' || privilege || ' to sh' from session_privs;

5. select 'grant ' || role || ' to sh' from session_roles;

6. Query the default tablespace and size:

7. select tablespace_name, sum(bytes/1024/1024) from dba_segments where owner='SH' group by tablespace_name;

Export the 'SH' schema:

exp username/password file='/location/sh_bkp.dmp' log='/location/sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file='/location/sh_bkp.dmp' log='/location/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect the SH user and check for the import data

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema:

Take the full schema export as shown above.

Drop all the objects in the 'SH' schema.

To drop all the objects in the schema:

Connect the schema

Spool the output

SQL> set head off

SQL> spool drop_tables.sql

SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;

SQL> spool off

SQL> set head off

SQL> spool drop_other_objects.sql

SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;

SQL> spool off

Now run the scripts; all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file='/location/sh_bkp.dmp' log='/location/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect the SH user and check for the import data

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS

WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema.

To truncate all the objects in the schema:

Connect the schema

Spool the output

SQL> set head off

SQL> spool truncate_tables.sql

SQL> select 'truncate table '||table_name||';' from user_tables;

SQL> spool off

SQL> set head off

SQL> spool truncate_other_objects.sql

SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;

SQL> spool off

Now run the scripts; all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the reference (foreign key) constraints and disable them. Spool the output of the query below and run the script.

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS

where constraint_type='R'

and r_constraint_name in (select constraint_name from all_constraints

where table_name='TABLE_NAME');

Importing the 'SH' schema:

imp username/password file='/location/sh_bkp.dmp' log='/location/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect the SH user and check for the import data

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user.

SQL> DROP USER sh CASCADE;

Importing the SH schema through Data Pump

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the REMAP_SCHEMA option (see the example below).
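A minimal sketch, assuming the dump taken from SH is being loaded into a hypothetical SH_COPY schema (the target schema name is a placeholder):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=sh:sh_copy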

Check the imported objects and compile any invalid objects.
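One way to do that check, using the same dbms_utility call shown earlier in this post:

SQL> SELECT object_name, object_type FROM dba_objects WHERE owner='SH' AND status='INVALID';
SQL> exec dbms_utility.compile_schema('SH');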


JOB SCHEDULING

Filed under: JOB SCHEDULING by Deepak — Leave a comment — December 15, 2009

CRON JOB SCHEDULING – IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d /etc/cron.daily /etc/cron.hourly /etc/cron.monthly /etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time that the scripts in those system-wide directories run is not something an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal way people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0-7)
|     |     |     +----------- Month (1-12)
|     |     +----------------- Day of month (1-31)
|     +----------------------- Hour (0-23)
+----------------------------- Min (0-59)

(Each of the first five fields contains only numbers, however they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null – meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour, on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers, you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched; if you use the following line you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup – scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp_registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click next and finish the scheduling.
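A command-line alternative to the wizard is the schtasks utility shipped with Windows XP/2003 and later; a minimal sketch (the task name, script path and start time are placeholders, and the exact /ST time format depends on the Windows version):

schtasks /Create /TN "cold_bkp" /TR "D:\scripts\cold_bkp.bat" /SC DAILY /ST 23:00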

Note

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule them, the job won't run; edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak — 1 Comment — December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I Before Switchover

1. As I always recommend, test the switchover first on your testing systems before working on production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary and the standby database to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply (see the command below).
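For reference, real-time apply is started on the standby with the same command shown later in the standby-creation post in this document:

SQL> alter database recover managed standby database using current logfile disconnect;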

II Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby db STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak — Leave a comment — December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature that was released with Oracle Database 11g release 1, and that provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although the ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 – Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 – Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
/ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows.

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 – Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 – Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import

• A schema-mode import in which the referenced schemas do not include tables with encrypted columns

• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data.

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified. This is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement.

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak — Leave a comment — December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file (see the sketch below).
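For instance, a minimal sketch that keeps indexes and statistics out of a schema export (the schema and file names are placeholders):

> expdp username/password SCHEMAS=scott DIRECTORY=dpump_dir1 DUMPFILE=scott_noidx.dmp EXCLUDE=INDEX,STATISTICS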

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance you should do the following:

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%U.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from the one it was originally exported from. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
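A minimal sketch of attaching to a running job and changing its parallelism, assuming the job was started with JOB_NAME=hr as in the PARALLEL example above (the interactive responses are indicative):

> expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> CONTINUE_CLIENT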

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, such as from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
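A minimal sketch of a network-mode import, assuming a database link named remote_db already exists on the target and points at the source instance (the link, schema and directory names are placeholders):

> impdp username/password SCHEMAS=scott DIRECTORY=dpump_dir1 NETWORK_LINK=remote_db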

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak — Leave a comment

December 10 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1. Take a full backup using RMAN.

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination            C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 – Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all;

using target database controlfile instead of recovery catalog

RMAN configuration parameters are:

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default

CONFIGURE BACKUP OPTIMIZATION OFF; # default

CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default

CONFIGURE CONTROLFILE AUTOBACKUP ON;

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default

CONFIGURE MAXSETSIZE TO UNLIMITED; # default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\04K307L0_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF

input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF

input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF

input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF

input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF

input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF

input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF

input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF

input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF

input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\05K307L5_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\06K307MU_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME

---------

TEST

SQL> select dbid from v$database;

DBID

----------

1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
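For example, with the directory layout used elsewhere in this post (the paths below are placeholders for your own layout):

db_file_name_convert='C:\ORACLE\ORADATA\TEST','C:\ORACLE\ORADATA\CLONE'
log_file_name_convert='C:\ORACLE\ORADATA\TEST','C:\ORACLE\ORADATA\CLONE'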

3. Start up the listener using the lsnrctl command, and then start up the clone db in nomount using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test     (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone';     (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running...

SQL> select name from v$database;

select name from v$database

ERROR at line 1:

ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;

alter database mount

ERROR at line 1:

ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs.

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create the temporary tablespace (see the sketch below).
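A minimal sketch, assuming the clone keeps the directory layout used above (the file name and sizes are placeholders):

SQL> CREATE TEMPORARY TABLESPACE temp TEMPFILE 'C:\ORACLE\ORADATA\CLONE\TEMP01.DBF' SIZE 200M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

If the duplicate carried over a TEMP tablespace definition without tempfiles, adding a tempfile to it with ALTER TABLESPACE temp ADD TEMPFILE is the alternative.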

SQL> select name from v$database;

NAME

---------

CLONE

SQL> select dbid from v$database;

DBID

----------

1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak — Leave a comment — December 9, 2009

Oracle 10g – Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first before working on the production database.

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups.
My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='<Oracle home path>\database\pfilePRIMARY.ora' from spfile;
- On UNIX:
SQL> create pfile='<Oracle home path>/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path in place of '<Oracle home path>'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile, and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE. Create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<Oracle home path>\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='<Oracle home path>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<Oracle home path>/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='<Oracle home path>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;
(Note: specify your Oracle home path in place of '<Oracle home path>'.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> alter database open;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY control file destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<Oracle home path>\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='<Oracle home path>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='<Oracle home path>/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='<Oracle home path>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your Oracle home path in place of '<Oracle home path>'.)

12. Start redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:

$rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:

RMAN> delete backupset;

3. Password management: the password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly, otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.
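If the password file ever has to be recreated rather than copied, the orapwd utility can be used. A minimal sketch, assuming the standard file locations described in step 6 (the SYS password value is a placeholder):

$orapwd file=<ORACLE_HOME>/dbs/orapwSTANDBY password=<new_sys_password> entries=5

On Windows the equivalent file goes under <ORACLE_HOME>\database and is named pwdSTANDBY.ora.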


DOC> error if the database has not been opened for UPGRADE.
DOC>
DOC> Perform a "SHUTDOWN ABORT" and
DOC> restart using UPGRADE.
DOC>
DOC>
DOC>

no rows selected

DOC>
DOC>
DOC> The following statements will cause an "ORA-01722: invalid number"
DOC> error if the SYSAUX tablespace does not exist or is not
DOC> ONLINE for READ WRITE, PERMANENT, EXTENT MANAGEMENT LOCAL, and
DOC> SEGMENT SPACE MANAGEMENT AUTO.
DOC>
DOC> The SYSAUX tablespace is used in 10.1 to consolidate data from
DOC> a number of tablespaces that were separate in prior releases.
DOC> Consult the Oracle Database Upgrade Guide for sizing estimates.
DOC>
DOC> Create the SYSAUX tablespace, for example:
DOC>
DOC> create tablespace SYSAUX datafile 'sysaux01.dbf'
DOC> size 70M reuse
DOC> extent management local
DOC> segment space management auto
DOC> online;
DOC>
DOC> Then rerun the u0902000.sql script.
DOC>
DOC>
DOC>

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered

Session altered

The script will run according to the size of the database...

All packages/scripts/synonyms will be upgraded.

At the end it will show a message as follows:

TIMESTAMP
--------------------------------------------------------------

1 row selected

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

---------- ----------------------------------- ------- ----------

CATALOG Oracle Database Catalog Views VALID 10.1.0.2.0
CATPROC Oracle Database Packages and Types VALID 10.1.0.2.0
JAVAVM JServer JAVA Virtual Machine VALID 10.1.0.2.0
XML Oracle XDK VALID 10.1.0.2.0
CATJAVA Oracle Database Java Packages VALID 10.1.0.2.0
XDB Oracle XML Database VALID 10.1.0.2.0
OWM Oracle Workspace Manager VALID 10.1.0.2.0
ODM Oracle Data Mining VALID 10.1.0.2.0
APS OLAP Analytic Workspace VALID 10.1.0.2.0
AMD OLAP Catalog VALID 10.1.0.2.0
XOQ Oracle OLAP API VALID 10.1.0.2.0
ORDIM Oracle interMedia VALID 10.1.0.2.0
SDO Spatial VALID 10.1.0.2.0
CONTEXT Oracle Text VALID 10.1.0.2.0
WK Oracle Ultra Search VALID 10.1.0.2.0

15 rows selected.

DOC>
DOC>
DOC> The above query lists the SERVER components in the upgraded
DOC> database, along with their current version and status.
DOC>
DOC> Please review the status and version columns and look for
DOC> any errors in the spool log file. If there are errors in the spool
DOC> file, or any components are not VALID or not the current version,
DOC> consult the Oracle Database Upgrade Guide for troubleshooting
DOC> recommendations.
DOC>
DOC> Next shutdown immediate, restart for normal operation, and then
DOC> run utlrp.sql to recompile any invalid application objects.
DOC>
DOC>
DOC>
DOC>


TIMESTAMP
--------------------------------------------------------------
COMP_TIMESTAMP DBUPG_END 2009-08-22 22:59:09

1 row selected

SQL> shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQL> startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------
776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PL/SQL procedure successfully completed.

Oracle Database 10.1 Upgrade Status Tool 22-AUG-2009 11:18:36

--> Oracle Database Catalog Views Normal successful completion
--> Oracle Database Packages and Types Normal successful completion
--> JServer JAVA Virtual Machine Normal successful completion
--> Oracle XDK Normal successful completion
--> Oracle Database Java Packages Normal successful completion
--> Oracle XML Database Normal successful completion
--> Oracle Workspace Manager Normal successful completion
--> Oracle Data Mining Normal successful completion
--> OLAP Analytic Workspace Normal successful completion
--> OLAP Catalog Normal successful completion
--> Oracle OLAP API Normal successful completion
--> Oracle interMedia Normal successful completion
--> Spatial Normal successful completion
--> Oracle Text Normal successful completion
--> Oracle Ultra Search Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP
--------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN 2009-08-22 23:19:07

1 row selected

PLSQL procedure successfully completed

TIMESTAMP
--------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END 2009-08-22 23:20:13

1 row selected

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------
0

1 row selected

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod
PL/SQL Release 10.1.0.2.0 - Production
CORE 10.1.0.2.0 Production
TNS for 32-bit Windows: Version 10.1.0.2.0 - Production
NLSRTL Version 10.1.0.2.0 - Production

5 rows selected

Check the database to make sure everything is working fine.
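As an additional post-upgrade sanity check (not part of the original spool), the component registry and invalid object counts can also be queried directly:

SQL> select comp_name, version, status from dba_registry;
SQL> select owner, object_type, count(*) from dba_objects where status='INVALID' group by owner, object_type;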


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host - by Deepak - 3 Comments - February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink ID 732624.1

hi

Just wanted to share this topic

How to duplicate a database without connecting to the target database, using backups taken from RMAN, on an alternate host.

Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as of production>

Create an init.ora file and give db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) Startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backuppiece of controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure that the backuppieces are in the same location as they were on the production db. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backuppieces, they can be cataloged using the command:

RMAN> catalog start with '<path where backuppieces are stored>';

5) After cataloging the backuppieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:

run {
set newname for datafile 1 to '<newLocation>/system.dbf';
set newname for datafile 2 to '<newLocation>/undotbs.dbf';
...
restore database;
switch datafile all;
}
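Putting the steps together, a minimal end-to-end sketch; the SID, pfile path and backup location below are illustrative placeholders, not values from the note:

$export ORACLE_SID=PROD
$rman target /
RMAN> startup nomount pfile='/u01/app/oracle/admin/PROD/pfile/initPROD.ora';
RMAN> restore controlfile from '/u01/backups/PROD_ctl.bkp';
RMAN> alter database mount;
RMAN> catalog start with '/u01/backups/';
RMAN> restore database;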


Features introduced in the various Oracle server releases

Filed under: Features Of Various Releases of Oracle Database - by Deepak - Leave a comment - February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) - September 2005

Transparent Data Encryption. Async commits. The CONNECT role can now only connect. Passwords for DB Links are encrypted. New asmcmd utility for managing ASM storage.

Oracle 10g Release 1 (10.1.0)

Grid computing - an extension of the clustering feature (Real Application Clusters). Manageability improvements (self-tuning features).

Performance and scalability improvements. Automated Storage Management (ASM). Automatic Workload Repository (AWR). Automatic Database Diagnostic Monitor (ADDM). Flashback operations available on row, transaction, table or database level. Ability to UNDROP a table from a recycle bin. Ability to rename tablespaces. Ability to transport tablespaces across machine types (e.g. Windows to Unix). New 'drop database' statement. New database scheduler - DBMS_SCHEDULER. DBMS_FILE_TRANSFER package. Support for bigfile tablespaces of up to 8 Exabytes in size. Data Pump - faster data movement with expdp and impdp.

Oracle 9i Release 2 (9.2.0)

Locally Managed SYSTEM tablespaces. Oracle Streams - new data sharing/replication feature (can potentially replace Oracle

Advanced Replication and Standby Databases). XML DB (Oracle is now a standards-compliant XML database). Data segment compression (compress keys in tables - only when loading data). Cluster file system for Windows and Linux (raw devices are no longer required). Create logical standby databases with Data Guard. Java JDK 1.3 used inside the database (JVM). Oracle Data Guard enhancements (SQL Apply mode - logical copy of primary database,

automatic failover). Security improvements - default install accounts locked, VPD on synonyms, AES,

Migrate Users to Directory.

Oracle 9i Release 1 (9.0.1) - June 2001

Traditional rollback segments (RBS) are still available, but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "rollback segments" and size them automatically without any DBA involvement.

Flashback query (dbms_flashback.enable) - one can query data as it looked at some point in the past. This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore.

Use Oracle Ultra Search for searching databases, file systems, etc. The UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.

Oracle Nameserver is still available but deprecated in favour of LDAP Naming (using the Oracle Internet Directory Server). A nameserver proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.

Oracle Parallel Server's (OPS) scalability was improved - now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster. Applications don't need to be cluster aware anymore.

The Oracle Standby DB feature was renamed to Oracle Data Guard. New Logical Standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.

Scrolling cursor support. Oracle9i allows fetching backwards in a result set. Dynamic Memory Management - buffer pools and the shared pool can be resized on-the-fly.

This eliminates the need to restart the database each time parameter changes are made. On-line table and index reorganization. VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with

Oracle Net (SQL*Net). VI provides fast communications between components in a cluster.

Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.

Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.

PL/SQL programs can be natively compiled to binaries. Deep data protection - fine grained security and auditing. Put security on DB level; SQL

access does not mean unrestricted access. Resumable backups and statements - suspend statement instead of rolling back

immediately. List partitioning - partitioning on a list of values. ETL (extract, transformation, load) operations - with external tables and pipelining. OLAP - Express functionality included in the DB. Data Mining - Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)

Static HTTP server included (Apache). JVM Accelerator to improve performance of Java code. Java Server Pages (JSP) engine. MemStat - a new utility for analyzing Java memory footprints. OIS - Oracle Integration Server introduced. PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web. Enterprise Manager enhancements - including new HTML-based reporting and

Advanced Replication functionality included. New Database Character Set Migration utility included.

Oracle 8i (8.1.6)

PL/SQL Server Pages (PSPs). DBA Studio introduced. Statspack. New SQL functions (rank, moving average). ALTER FREELISTS command (previously done by DROP/CREATE TABLE). Checksums always on for SYSTEM tablespace, allowing many possible corruptions to be

fixed before writing to disk.

XML Parser for Java. New PL/SQL encrypt/decrypt package introduced. Users and Schemas separated. Numerous performance enhancements.

Oracle 8i (8.1.5)

Fast Start recovery - checkpoint rate auto-adjusted to meet roll forward criteria. Reorganize indexes/index-only tables while users access data - online index rebuilds. Log Miner introduced - allows on-line or archived redo logs to be viewed via SQL. OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication. Advanced Queueing improvements (security, performance, OO4O support). User security improvements - more centralisation, single enterprise user, users/roles

across multiple databases. Virtual private database. JAVA stored procedures (Oracle Java VM). Oracle iFS. Resource Management using priorities - resource classes. Hash and Composite partitioned table types. SQL*Loader direct load API. Copy optimizer statistics across databases to ensure the same access paths across different

environments. Standby Database - auto shipping and application of redo logs. Read-only queries on the

standby database allowed. Enterprise Manager v2 delivered. NLS - Euro symbol supported. Analyze tables in parallel. Temporary tables supported. Net8 support for SSL, HTTP, HOP protocols. Transportable tablespaces between databases. Locally managed tablespaces - automatic sizing of extents, elimination of tablespace

fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability.

Drop Column on table (finally!). DBMS_DEBUG PL/SQL package. DBMS_SQL replaced by new EXECUTE

IMMEDIATE statement. Progress Monitor to track long running DML, DDL. Functional Indexes - NLS, case insensitive, descending.

Oracle 8.0 - June 1997

Object Relational database. Object Types (not just date, character, number as in v7; SQL3 standard). Call external procedures. LOBs: more than 1 per table.

Partitioned Tables and Indexes. Export/import individual partitions, partitions in multiple tablespaces. Online/offline backup/recover individual partitions. Merge/balance partitions. Advanced Queuing for message handling. Many performance improvements to SQL/PLSQL/OCI making more efficient use of

CPU/memory. V7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2). Parallel DML statements. Connection Pooling (uses the physical connection for idle users and transparently re-

establishes the connection when needed) to support more concurrent users. Improved "STAR" query optimizer. Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM

in v7). Performance improvements in OPS - global V$ views introduced across all instances,

transparent failover to a new node. Data Cartridges introduced on the database (e.g. image, video, context, time, spatial). Backup/Recovery improvements - tablespace point-in-time recovery, incremental

backups, parallel backup/recovery. Recovery Manager introduced. Security Server introduced for central user administration. User password expiry,

password profiles, allow custom password scheme. Privileged database links (no need for password to be stored).

Fast Refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel. Replication manager introduced.

Index Organized tables. Deferred integrity constraint checking (deferred until end of transaction instead of end of

statement). SQL*Net replaced by Net8. Reverse Key indexes. Any VIEW updateable. New ROWID format.

Oracle 7.3

Partitioned Views. Bitmapped Indexes. Asynchronous read ahead for table scans. Standby Database. Deferred transaction recovery on instance startup. Updatable Join Views (with restrictions). SQLDBA no longer shipped. Index rebuilds. db_verify introduced. Context Option. Spatial Data Option. Tablespace changes - coalesce, temporary, permanent.

Trigger compilation, debug. Unlimited extents on STORAGE clause. Some init.ora parameters modifiable - TIMED_STATISTICS. HASH Joins, Antijoins. Histograms. Dependencies. Oracle Trace. Advanced Replication Object Groups. PL/SQL - UTL_FILE.

Oracle 7.2

Resizable, autoextend data files. Shrink Rollback Segments manually. Create table, index UNRECOVERABLE. Subquery in FROM clause. PL/SQL wrapper. PL/SQL Cursor variables. Checksums - DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM. Parallel create table. Job Queues - DBMS_JOB. DBMS_SPACE. DBMS Application Info. Sorting improvements - SORT_DIRECT_WRITES.

Oracle 7.1

ANSI/ISO SQL92 Entry Level. Advanced Replication - symmetric data replication. Snapshot Refresh Groups. Parallel Recovery. Dynamic SQL - DBMS_SQL. Parallel Query Options - query, index creation, data loading. Server Manager introduced. Read Only tablespaces.

Oracle 7.0 - June 1992

Database Integrity Constraints (primary and foreign keys, check constraints, default values). Stored procedures and functions, procedure packages. Database Triggers. View compilation. User defined SQL functions. Role based security. Multiple Redo members - mirrored online redo log files. Resource Limits - Profiles.

Much enhanced Auditing. Enhanced Distributed database functionality - INSERTS, UPDATES, DELETES, 2PC. Incomplete database recovery (e.g. SCN). Cost based optimiser. TRUNCATE tables. Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR). SQL*Net v2, MTS. Checkpoint process. Data replication - Snapshots.

Oracle 6.2

Oracle Parallel Server

Oracle 6 - July 1988

Row-level locking. On-line database backups. PL/SQL in the database.

Oracle 5.1

Distributed queries

Oracle 5.0 - 1986

Support for the Client-Server model - PCs can access the DB on a remote host.

Oracle 4 - 1984

Read consistency

Oracle 3 - 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions).

Nonblocking queries (no more read locks). Re-written in the C programming language.

Oracle 2 - 1979

First public release. Basic SQL functionality: queries and joins.

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Referesh

Filed under: Schema refresh - by Deepak - 1 Comment - December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege || ' to sh' from session_privs;
5. select 'grant ' || role || ' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH'

group by tablespace_name;

Export the 'SH' schema:

exp 'username/password' file='<location>/sh_bkp.dmp' log='<location>/sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema

1. Create the SH schema with the default tablespace and allocate quota on that tablespace (see the sketch below).
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.
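A minimal sketch of step 1; the tablespace name, quota and password are placeholders to be replaced with the values spooled before the export:

SQL> create user SH identified by <password> default tablespace <default_tbs> temporary tablespace TEMP;
SQL> alter user SH quota unlimited on <default_tbs>;
SQL> grant create session to SH;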

Importing the 'SH' schema:

imp 'username/password' file='<location>/sh_bkp.dmp' log='<location>/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema.

Take the schema full export as shown above.

Drop all the objects in the 'SH' schema.

To drop all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off

SQL> spool drop_tables.sql

SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;

SQL> spool off

SQL> set head off

SQL> spool drop_other_objects.sql

SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;

SQL> spool off

Now run the script; all the objects will be dropped.

Importing the 'SH' schema:

imp 'username/password' file='<location>/sh_bkp.dmp' log='<location>/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS

WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema.

To truncate all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off

SQL> spool truncate_tables.sql

SQL> select 'truncate table '||table_name from user_tables;

SQL> spool off

SQL> set head off

SQL> spool truncate_other_objects.sql

SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;

SQL> spool off

Now run the script; all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the below query to find the reference (foreign key) constraints and disable them. Spool the output of the below query and run the script.

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS

where constraint_type='R'

and r_constraint_name in (select constraint_name from all_constraints

where table_name='TABLE_NAME');

Importing the 'SH' schema:

imp 'username/password' file='<location>/sh_bkp.dmp' log='<location>/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user:

Query the default tablespace, verify the space in the tablespace, and drop the user.

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the REMAP_SCHEMA option.
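For example, to load the same dump into a different target schema (SH_TEST here is only an illustrative name):

impdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=SH:SH_TEST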

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING - by Deepak - Leave a comment - December 15, 2009

CRON JOB SCHEDULING - IN UNIX

To run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d /etc/cron.daily /etc/cron.hourly /etc/cron.monthly /etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner which people use cron is via the crontab command This allows you to view or edit your crontab file which is a per-user file containing entries describing commands to execute and the time(s) to execute them

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand Each line is a collection of six fields separated by spaces

The fields are

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0 - 7)
|     |     |     +----------- Month (1 - 12)
|     |     +----------------- Day of month (1 - 31)
|     +----------------------- Hour (0 - 23)
+----------------------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following

0 * * * * /bin/ls

When yoursquove saved the file and quit your editor you will see a message such as

crontab installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null - meaning you won't see it.

Now we'll finish with some more examples.

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly
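Applied to database work, an entry like the following would run a nightly backup script at 11 PM and silence its output; the script path is only an example:

0 23 * * * /home/oracle/scripts/daily_cold_bkp.sh >/dev/null 2>&1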

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule, the job won't run. So edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g - by Deepak - 1 Comment - December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your testing systems before working on production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary and standby databases to find out:

SQL> select sequence#, applied from v$archived_log;

Perform SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:

SQL> connect PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby db STAN to the primary role. Open another prompt and connect to SQL*Plus:

SQL> connect STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT

4. After step 3 completes:

- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:

SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:

SQL> ALTER SYSTEM SWITCH LOGFILE;
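To confirm the new roles after the switchover, a standard check (not part of the original steps) can be run on both instances:

SQL> select name, database_role, switchover_status from v$database;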


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump - by Deepak - Leave a comment - December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature that was released with Oracle Database 11g release 1, and that provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT key word Once a table column is marked with this keyword encryption and decryption are performed automatically without the need for any further user or application intervention The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column When an authorized user inserts new data into such a column TDE column encryption encrypts this data prior to storing it in the database Conversely when the user selects the column from the database TDE column encryption transparently decrypts this data back to its original clear text

format Column data encrypted using TDE remains protected while it resides in the database However the protection offered by TDE does not extend beyond the database and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it using a dump file encryption key derived from a userprovided password before it is written to the export dump file set Column data encrypted using Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set Whenever Oracle Data Pump unloads or loads tables containing encrypted columns it uses the external tables mechanism instead of the direct path mechanism The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED

BY clause is optional, and TDE uses it to derive the table's column encryption key. If the

IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009

8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release

10.2.0.4.0 - Production

With the Partitioning, Data Mining and Real Application Testing

options

ORA-39001: invalid argument value

ORA-39180: unable to encrypt ENCRYPTION_PASSWORD

ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports as used in the previous

examples If a schema tablespace or full mode export is performed then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter

There is however one exception transportable tablespace export mode does not support

encrypted columns An attempt to perform an export using this mode when the tablespace

contains tables with encrypted columns yields the following error

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009

8:48:43

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting "DP"."SYS_EXPORT_TABLE_01": dp directory=dpump_dir

dumpfile=emp tables=emp

Estimate in progress using BLOCKS method...

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 16 KB

Processing object type TABLE_EXPORT/TABLE/TABLE

. . exported "DP"."EMP" 6.25 KB 3 rows

ORA-39173: Encrypted data has been stored unencrypted in dump file

set

Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded

Dump file set for DP.SYS_EXPORT_TABLE_01 is:

/ade/jkaloger_lx9/oracle/work/emp.dmp

Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp

TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009

8:55:07

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted

ORA-29341: The transportable set is not self-contained

Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error

at 08:55:25

The ORA-29341 error in the previous example is not very informative If the same transportable

tablespace export is executed using Oracle Database 11g release 1 that version does a better job

at pinpointing the problem via the information in the ORA-39929 error

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
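For example, reusing the object and password names from the earlier examples, the wallet on the target database is opened and the import is then run with the same dump file encryption password (a sketch; the wallet password belongs to the target database):

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd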

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the

ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir dumpfile=dp.dmp

TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009

9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release

11.1.0.7.0 - Production

With the Partitioning, Data Mining and Real Application Testing

Options Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted

ORA-39187: The transportable set is not self-contained, violation list

is ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which

are not supported

Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error

at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it

into the connected database instance There are no export dump files involved in a network

mode import and therefore there is no re-encrypting of TDE column data Thus the use of the

ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the

following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote

TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009

11:00:57

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39005 inconsistent arguments

ORA-39115 ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009

10:55:40

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release 102040 -

Production

With the Partitioning Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded

Starting "DP"."SYS_IMPORT_TABLE_01": dp directory=dpump_dir

dumpfile=emp.dmp tables=emp encryption_password=********

table_exists_action=append

Processing object type TABLE_EXPORT/TABLE/TABLE

ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing

table but all dependent metadata will be skipped due to

table_exists_action of append

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being

skipped due to error:

ORA-02354: error in exporting/importing data

ORA-26033: column "EMP"."SALARY" encryption properties differ for source or

target table

Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes

encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp) As is always the case when dealing with TDE columns the Oracle Wallet must first be open before creating the external table The following example creates an external table called DPXEMP and populates it using the data in the DPEMP table Notice that datatypes for the columns are not specified This is because they are determined by the column datatypes in the source table in the SELECT subquery

SQL> CREATE TABLE DP.XEMP (

empid,

empname,

salary ENCRYPT IDENTIFIED BY "column_pwd")

ORGANIZATION EXTERNAL

(

TYPE ORACLE_DATAPUMP

DEFAULT DIRECTORY dpump_dir

LOCATION ('xemp.dmp')

)

REJECT LIMIT UNLIMITED

AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1 The SQL engine selects the data for the table DPEMP from the database If any columns in the table are marked as encrypted as the salary column is for DPEMP then TDE decrypts the column data as part of the select operation

2 The SQL engine then inserts the data which is in clear text format into the DPXEMP table If any columns in the external table are marked as encrypted as one of its columns is then TDE encrypts this column data as part of the insert operation

3 Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file The data in an external table can be written only once when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed However the data in the external table can be selected any number of times using a simple SQL SELECT statement The steps involved in selecting data with encrypted columns from an external table are as follows

1 The SQL engine initiates a select operation Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is called to read the data from the external export file

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation:

SQL> SELECT * FROM DP.XEMP;

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition, on both the source and target database, in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g - by Deepak - Leave a comment - December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named

dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scottrsquos account to jimrsquos account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job You cannot mix the two parameters in one job

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
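For instance, a quick sketch (the schema, dump file name and filter are my own illustration, not from the original paper) of an export that keeps everything except indexes:

> expdp username/password SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_no_idx.dmp EXCLUDE=INDEX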

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pumprsquos interactive command-line mode You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa)

For best performance you should do the following:

• Make sure your system is well balanced across CPU, memory and I/O.
• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.
• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.
• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file payrollpar has the following content

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode Because Data Pump jobs run entirely on the server you can start an export or import job detach from it and later reconnect to the job to monitor its progress Here are some of the things you can do while in this mode

• See the status of the job. All of the information needed to monitor the job's execution is available.
• Add more dump files if there is insufficient disk space for an export file.
• Change the default size of the dump files.
• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).
• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.
• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).
• Attach to a job from a remote site (such as from home) to monitor status.
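As a small illustration (reusing the JOB_NAME=hr job from the parallelism example above; the new PARALLEL value is just an example), an interactive session could look like this:

> expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> EXIT_CLIENT

EXIT_CLIENT detaches the client while the job keeps running on the server; STOP_JOB and START_JOB are issued from the same Export> prompt.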

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
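A minimal sketch of a network-mode import (the database link name source_db is an assumption; it must already exist and point at the source instance):

> impdp username/password TABLES=hr.employees DIRECTORY=dpump_dir1 NETWORK_LINK=source_db

No dump file is written to disk; the rows are pulled across the link directly into the target database.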

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak, December 10, 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1. Take a full backup using RMAN

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination coracleora92RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F' default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA' default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQL> select name from v$database;

NAME
———
TEST

SQL> select dbid from v$database;

DBID
———-
1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for a database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
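For example (the exact paths are assumptions here; substitute your own target and clone locations), the two entries in initclone.ora might look like:

db_file_name_convert='E:\oracle\oradata\test','E:\oracle\oradata\clone'
log_file_name_convert='E:\oracle\oradata\test','E:\oracle\oradata\clone'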

3. Start the listener using the lsnrctl command and then start up the clone db in nomount state using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running…

SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs.

SQL> alter database open resetlogs;

Database altered

9. Check the dbid.

10. Create a temporary tablespace (a sketch follows the query output below).

SQL> select name from v$database;

NAME
———
CLONE

SQL> select dbid from v$database;

DBID
———-
1972233550
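For step 10, a minimal sketch of creating and setting a default temporary tablespace (the file path and size are assumptions):

SQL> create temporary tablespace temp1 tempfile 'E:\oracle\oradata\clone\temp01.dbf' size 500M autoextend on;
SQL> alter database default temporary tablespace temp1;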


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak, December 9, 2009

Oracle 10g ndash Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX serversand maintenance tips on the databases in a Data Guard Environment

Oracle 10g Data Guard is a great tool to ensure high availability data protection and disaster recovery for enterprise data I have been working on Data GuardSTANDBY databases using both Grid control and SQL command line for a couple of years and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago I maintain it daily and it works well I would like to share my experience with the other DBAs

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: replace xxxxxxxx with the password for the SYS user.)
- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
———-
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable Archiving on PRIMARY. If your PRIMARY database is not already in Archive Log mode, enable the archive log mode:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY Database Initialization Parameters.
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create pfile from spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;
- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;
(Note: replace <ORACLE_HOME> with your Oracle home path.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create spfile from pfile and restart the PRIMARY database using the new spfile. Data Guard must use SPFILE. Create the SPFILE and restart the database:
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup
- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup
(Note: replace <ORACLE_HOME> with your Oracle home path.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX create the directories accordingly.

4) Copy the online logs over.

2. Create a Control File for the STANDBY database.
On PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder and on UNIX copy it to the dbs directory, and then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID
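For example (the ORACLE_HOME path is an assumption), on UNIX this would be something like:

$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STANDBY

and on Windows:

C:\> set ORACLE_SID=STANDBY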

11. Start up nomount the STANDBY database and generate a spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: replace <ORACLE_HOME> with your Oracle home path.)

12. Start Redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management.
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly, otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2 step 2 to update/recreate the password file for the STANDBY database.
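As a hedged sketch (same file name convention as section II.2; substitute your own SYS password), recreating the STANDBY password file after a SYS password change would be:

$ cd $ORACLE_HOME/dbs        (on Windows: cd %ORACLE_HOME%\database)
$ orapwd file=pwdSTANDBY.ora password=xxxxxxxx force=y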


DOCgt extent management local

DOCgt segment space management auto

DOCgt online

DOCgt

DOCgt Then rerun the u0902000sql script

DOCgt

DOCgt

DOCgt

no rows selected

no rows selected

no rows selected

no rows selected

no rows selected

Session altered

Session altered

The script will run according to the size of the database…

All packages/scripts/synonyms will be upgraded.

At last it will show the message as follows

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

1 row selected

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

PLSQL procedure successfully completed

COMP_ID COMP_NAME STATUS VERSION

mdashmdashmdash- mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash mdashmdashmdashndash mdashmdashmdash-

CATALOG Oracle Database Catalog Views VALID 101020

CATPROC Oracle Database Packages and Types VALID 101020

JAVAVM JServer JAVA Virtual Machine VALID 101020

XML Oracle XDK VALID 101020

CATJAVA Oracle Database Java Packages VALID 101020

XDB Oracle XML Database VALID 101020

OWM Oracle Workspace Manager VALID 101020

ODM Oracle Data Mining VALID 101020

APS OLAP Analytic Workspace VALID 101020

AMD OLAP Catalog VALID 101020

XOQ Oracle OLAP API VALID 101020

ORDIM Oracle interMedia VALID 101020

SDO Spatial VALID 101020

CONTEXT Oracle Text VALID 101020

WK Oracle Ultra Search VALID 101020

15 rows selected

DOCgt

DOCgt

DOCgt

DOCgt The above query lists the SERVER components in the upgraded

DOCgt database along with their current version and status

DOCgt

DOCgt Please review the status and version columns and look for

DOCgt any errors in the spool log file If there are errors in the spool

DOCgt file or any components are not VALID or not the current version

DOCgt consult the Oracle Database Upgrade Guide for troubleshooting

DOCgt recommendations

DOCgt

DOCgt Next shutdown immediate restart for normal operation and then

DOCgt run utlrpsql to recompile any invalid application objects

DOCgt

DOCgt

DOCgt

DOCgt

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP DBUPG_END 2009-08-22 225909

1 row selected

SQLgt shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
———-
776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PLSQL procedure successfully completed

Oracle Database 101 Upgrade Status Tool 22-AUG-2009 111836

ndashgt Oracle Database Catalog Views Normal successful completion

ndashgt Oracle Database Packages and Types Normal successful completion

ndashgt JServer JAVA Virtual Machine Normal successful completion

ndashgt Oracle XDK Normal successful completion

ndashgt Oracle Database Java Packages Normal successful completion

ndashgt Oracle XML Database Normal successful completion

ndashgt Oracle Workspace Manager Normal successful completion

ndashgt Oracle Data Mining Normal successful completion

ndashgt OLAP Analytic Workspace Normal successful completion

ndashgt OLAP Catalog Normal successful completion

ndashgt Oracle OLAP API Normal successful completion

ndashgt Oracle interMedia Normal successful completion

ndashgt Spatial Normal successful completion

ndashgt Oracle Text Normal successful completion

ndashgt Oracle Ultra Search Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_BGN 2009-08-22 231907

1 row selected

PLSQL procedure successfully completed

TIMESTAMP

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashndash

COMP_TIMESTAMP UTLRP_END 2009-08-22 232013

1 row selected

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
———-
0

1 row selected

SQL> select * from v$version;

BANNER

mdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdashmdash-

Oracle Database 10g Enterprise Edition Release 101020 ndash Prod

PLSQL Release 101020 ndash Production

CORE 101020 Production

TNS for 32-bit Windows Version 101020 ndash Production

NLSRTL Version 101020 ndash Production

5 rows selected

Check the database to confirm that everything is working fine.


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host by Deepak, February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink ID 732624.1

hi

Just wanted to share this topic

How to duplicate a database without connecting to the target database, using backups taken from RMAN, on an alternate host?
Solution: Follow the below steps.
1) Export ORACLE_SID=<SID name as of production>

Create the init.ora file and give db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>

2) Startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backuppiece of controlfile which you took on production>';

controlfile should be restored

4) Issue "alter database mount". Make sure that the backup pieces are in the same location where they were on the production db. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>'
If there are more backup pieces, they can be cataloged using:
RMAN> catalog start with '<path where backuppieces are stored>'

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:
run {
set newname for datafile 1 to 'newLocation/system.dbf';
set newname for datafile 2 to 'newLocation/undotbs.dbf';
…
restore database;
switch datafile all;
}


Features introduced in the various Oracle server releases

Filed under: Features Of Various release of Oracle Database by Deepak, February 2, 2010

Features introduced in the various server releasesSubmitted by admin on Sun 2005-10-30 1402

This document summarizes the differences between Oracle Server releases

Most DBArsquos and developers work with multiple versions of Oracle at any particular time This document describes the high level features introduced with each new version of the Oracle database It is intended to be used as a quick reference as to whether a feature can be implemented or if a upgrade is required

Oracle 10g Release 2 (1020) ndash September 2005

• Transparent Data Encryption
• Async commits
• The CONNECT role can now only connect
• Passwords for DB links are encrypted
• New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (1010)

Grid computing ndash an extension of the clustering feature (Real Application Clusters) Manageability improvements (self-tuning features)

Performance and scalability improvements Automated Storage Management (ASM) Automatic Workload Repository (AWR) Automatic Database Diagnostic Monitor (ADDM) Flashback operations available on row transaction table or database level Ability to UNDROP a table from a recycle bin Ability to rename tablespaces Ability to transport tablespaces across machine types (Eg Windows to Unix) New lsquodrop databasersquo statement New database scheduler ndash DBMS_SCHEDULER DBMS_FILE_TRANSFER Package Support for bigfile tablespaces that is up to 8 Exabytes in size Data Pump ndash faster data movement with expdp and impdp

Oracle 9i Release 2 (920)

Locally Managed SYSTEM tablespaces Oracle Streams ndash new data sharingreplication feature (can potentially replace Oracle

Advance Replication and Standby Databases) XML DB (Oracle is now a standards compliant XML database) Data segment compression (compress keys in tables ndash only when loading data) Cluster file system for Windows and Linux (raw devices are no longer required) Create logical standby databases with Data Guard Java JDK 13 used inside the database (JVM) Oracle Data Guard Enhancements (SQL Apply mode ndash logical copy of primary database

automatic failover Security Improvements ndash Default Install Accounts locked VPD on synonyms AES

Migrate Users to Directory

Oracle 9i Release 1 (901) ndash June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU) Using SMU Oracle will create itrsquos own ldquoRollback Segmentsrdquo and size them automatically without any DBA involvement

Flashback query (dbms_flashbackenable) ndash one can query data as it looked at some point in the past This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore

Use Oracle Ultra Search for searching databases file systems etc The UltraSearch crawler fetch data and hand it to Oracle Text to be indexed

Oracle Nameserver is still available but deprecate in favour of LDAP Naming (using the Oracle Internet Directory Server) A nameserver proxy is provided for backwards compatibility as pre-8i client cannot resolve names from an LDAP server

Oracle Parallel Serverrsquos (OPS) scalability was improved ndash now called Real Application Clusters (RAC) Full Cache Fusion implemented Any application can scale in a database cluster Applications doesnrsquot need to be cluster aware anymore

The Oracle Standby DB feature renamed to Oracle Data Guard New Logical Standby databases replay SQL on standby site allowing the database to be used for normal read write operations The Data Guard Broker allows single step fail-over when disaster strikes

Scrolling cursor support Oracle9i allows fetching backwards in a result set Dynamic Memory Management ndash Buffer Pools and shared pool can be resized on-the-fly

This eliminates the need to restart the database each time parameter changes were made On-line table and index reorganization VI (Virtual Interface) protocol support an alternative to TCPIP available for use with

Oracle Net (SQLNet) VI provides fast communications between components in a cluster

Build in XML Developers Kit (XDK) New data types for XML (XMLType) URIrsquos etc XML integrated with AQ

Cost Based Optimizer now also consider memory and CPU not only disk access cost as before

PLSQL programs can be natively compiled to binaries Deep data protection ndash fine grained security and auditing Put security on DB level SQL

access do not mean unrestricted access Resumable backups and statements ndash suspend statement instead of rolling back

immediately List Partitioning ndash partitioning on a list of values ETL (eXtract transformation load) Operations ndash with external tables and pipelining OLAP ndash Express functionality included in the DB Data Mining ndash Oracle Darwinrsquos features included in the DB

Oracle 8i (817)

Static HTTP server included (Apache) JVM Accelerator to improve performance of Java code Java Server Pages (JSP) engine MemStat ndash A new utility for analyzing Java Memory footprints OIS ndash Oracle Integration Server introduced PLSQL Gateway introduced for deploying PLSQL based solutions on the Web Enterprise Manager Enhancements ndash including new HTML based reporting and

Advanced Replication functionality included New Database Character Set Migration utility included

Oracle 8i (816)

PLSQL Server Pages (PSPrsquos) DBA Studio Introduced Statspack New SQL Functions (rank moving average) ALTER FREELISTS command (previously done by DROPCREATE TABLE) Checksums always on for SYSTEM tablespace allowing many possible corruptions to be

fixed before writing to disk

XML Parser for Java New PLSQL encryptdecrypt package introduced User and Schemas separated Numerous Performance Enhancements

Oracle 8i (815)

Fast Start recovery ndash Checkpoint rate auto-adjusted to meet roll forward criteria Reorganize indexesindex only tables which users accessing data ndash Online index rebuilds Log Miner introduced ndash Allows on-line or archived redo logs to be viewed via SQL OPS Cache Fusion introduced avoiding disk IO during cross-node communication Advanced Queueing improvements (security performance OO4O support User Security Improvements ndash more centralisation single enterprise user usersroles

across multiple databases Virtual private database JAVA stored procedures (Oracle Java VM) Oracle iFS Resource Management using priorities ndash resource classes Hash and Composite partitioned table types SQLLoader direct load API Copy optimizer statistics across databases to ensure same access paths across different

environments Standby Database ndash Auto shipping and application of redo logs Read Only queries on

standby database allowed Enterprise Manager v2 delivered NLS ndash Euro Symbol supported Analyze tables in parallel Temporary tables supported Net8 support for SSL HTTP HOP protocols Transportable tablespaces between databases Locally managed tablespaces ndash automatic sizing of extents elimination of tablespace

fragmentation tablespace information managed in tablespace (ie moved from data dictionary) improving tablespace reliability

Drop Column on table (Finally ) DBMS_DEBUG PLSQL package DBMS_SQL replaced by new EXECUTE

IMMEDIATE statement Progress Monitor to track long running DML DDL Functional Indexes ndash NLS case insensitive descending

Oracle 80 ndash June 1997

Object Relational database Object Types (not just date character number as in v7 SQL3 standard Call external procedures LOB gt1 per table

Partitioned Tables and Indexes exportimport individual partitions partitions in multiple tablespaces Onlineoffline backuprecover individual partitions mergebalance partitions Advanced Queuing for message handling Many performance improvements to SQLPLSQLOCI making more efficient use of

CPUMemory V7 limits extended (eg 1000 columnstable 4000 bytes VARCHAR2) Parallel DML statements Connection Pooling ( uses the physical connection for idle users and transparently re-

establishes the connection when needed) to support more concurrent users Improved ldquoSTARrdquo Query optimizer Integrated Distributed Lock Manager in Oracle PS (as opposed to Operating system DLM

in v7) Performance improvements in OPS ndash global V$ views introduced across all instances

transparent failover to a new node Data Cartridges introduced on database (eg image video context time spatial) BackupRecovery improvements ndash Tablespace point in time recovery incremental

backups parallel backuprecovery Recovery manager introduced Security Server introduced for central user administration User password expiry

password profiles allow custom password scheme Privileged database links (no need for password to be stored)

Fast Refresh for complex snapshots parallel replication PLSQL replication code moved in to Oracle kernel Replication manager introduced

Index Organized tables Deferred integrity constraint checking (deferred until end of transaction instead of end of

statement) SQLNet replaced by Net8 Reverse Key indexes Any VIEW updateable New ROWID format

Oracle 73

Partitioned Views Bitmapped Indexes Asynchronous read ahead for table scans Standby Database Deferred transaction recovery on instance startup Updatable Join Views (with restrictions) SQLDBA no longer shipped Index rebuilds db_verify introduced Context Option Spatial Data Option Tablespaces changes ndash Coalesce Temporary Permanent

Trigger compilation debug Unlimited extents on STORAGE clause Some initora parameters modifiable ndash TIMED_STATISTICS HASH Joins Antijoins Histograms Dependencies Oracle Trace Advanced Replication Object Groups PLSQL ndash UTL_FILE

Oracle 72

Resizable autoextend data files Shrink Rollback Segments manually Create table index UNRECOVERABLE Subquery in FROM clause PLSQL wrapper PLSQL Cursor variables Checksums ndash DB_BLOCK_CHECKSUM LOG_BLOCK_CHECKSUM Parallel create table Job Queues ndash DBMS_JOB DBMS_SPACE DBMS Application Info Sorting Improvements ndash SORT_DIRECT_WRITES

Oracle 71

ANSIISO SQL92 Entry Level Advanced Replication ndash Symmetric Data replication Snapshot Refresh Groups Parallel Recovery Dynamic SQL ndash DBMS_SQL Parallel Query Options ndash query index creation data loading Server Manager introduced Read Only tablespaces

Oracle 70 ndash June 1992

Database Integrity Constraints (primary foreign keys check constraints default values) Stored procedures and functions procedure packages Database Triggers View compilation User defined SQL functions Role based security Multiple Redo members ndash mirrored online redo log files Resource Limits ndash Profiles

Much enhanced Auditing Enhanced Distributed database functionality ndash INSERTS UPDATESDELETES 2PC Incomplete database recovery (eg SCN) Cost based optimiser TRUNCATE tables Datatype changes (ie VARCHAR2 CHAR VARCHAR) SQLNet v2 MTS Checkpoint process Data replication ndash Snapshots

Oracle 62

Oracle Parallel Server

Oracle 6 ndash July 1988

Row-level locking On-line database backups PLSQL in the database

Oracle 51

Distributed queries

Oracle 50 ndash 1986

Supporting for the Client-Server model ndash PCrsquos can access the DB on remote host

Oracle 4 ndash 1984

Read consistency

Oracle 3 ndash 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)

Nonblocking queries (no more read locks) Re-written in the C Programming Language

Oracle 2 ndash 1979

First public release Basic SQL functionality queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Referesh

Filed under: Schema refresh by Deepak, December 15, 2009

Steps for schema refresh

Schema refresh in oracle 9i

Now we are going to refresh SH schema

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user; use the queries below to view the roles and privileges, and spool the output as a sql file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total no. of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege || ' to sh' from session_privs;
5. select 'grant ' || role || ' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH' group by tablespace_name;

Export the 'SH' schema:

exp username/password file='location\sh_bkp.dmp' log='location\sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the roles and privileges spooled scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file='location\sh_bkp.dmp' log='location\sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

Schema refresh by dropping objects and truncating objects

Export the lsquoshrsquo schema

Take the schema full export as show above

Drop all the objects in lsquoSHrsquo schema

To drop the all the objects in the Schema

Connect the schema

Spool the output

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the script all the objects will be dropped

Importing THE lsquoSHrsquo schema

Imp lsquousernmaepasswordrsquo file=rsquolocationsh_bkpdmprsquo log=rsquolocationsh_implogrsquo

Fromuser=rsquoSHrsquo touser=rsquoSHrsquo

SQLgt SELECT object_typecount() from dba_objects where owner=rsquoSHTESTrsquo group by object_type

Compiling and analyzing SH Schema

exec dbms_utilitycompile_schema(lsquoSHrsquo)

execdbms_utilityanalyze_schema(lsquoSHrsquoESTIMATErsquoESTIMATE_PERCENT=gt20)

Now connect the SH user and check for the import data

To enable constraints use the query below

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in lsquoSHrsquo schema

To truncate the all the objects in the Schema

Connect the schema

Spool the output

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the script all the objects will be truncated

Disabiling the reference constraints

If there is any constraint violation while truncating use the below query to find reference key constraints and disable them Spool the output of below query and run the script

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');
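A sketch of turning that list into DISABLE statements, following the same spooling approach used above (the spool file name is my own):

SQL> spool disable_ref_constraints.sql
SQL> select 'alter table '||table_name||' disable constraint '||constraint_name||';'
     from all_constraints
     where constraint_type='R'
     and r_constraint_name in (select constraint_name from all_constraints where table_name='TABLE_NAME');
SQL> spool off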

Importing THE lsquoSHrsquo schema

Imp lsquousernmaepasswordrsquo file=rsquolocationsh_bkpdmprsquo log=rsquolocationsh_implogrsquo

Fromuser=rsquoSHrsquo touser=rsquoSHrsquo

SQLgt SELECT object_typecount() from dba_objects where owner=rsquoSHTESTrsquo group by object_type

Compiling and analyzing SH Schema

exec dbms_utilitycompile_schema(lsquoSHrsquo)

exec dbms_utilityanalyze_schema(lsquoSHrsquoESTIMATErsquoESTIMATE_PERCENT=gt20)

Now connect the SH user and check for the import data

Schema refresh in oracle 10g

Here we can use Datapump

Exporting the SH schema through Datapump

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the lsquoSHrsquo user

Query the default tablespace and verify the space in the tablespace and drop the user

SQL> drop user SH cascade;

Importing the SH schema through datapump

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing to different schema use remap_schema option
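For example (the target schema name SH_NEW is only an illustration):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=sh:sh_new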

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING by Deepak, December 15, 2009

CRON JOB SCHEDULING ndashIN UNIX

To run system jobs on a dailyweeklymonthly basis To allow users to setup their own schedules

The system schedules are setup when the package is installed via the creation of some special directories

/etc/cron.d, /etc/cron.daily, /etc/cron.hourly, /etc/cron.monthly, /etc/cron.weekly

Except for the first one which is special these directories allow scheduling of system-wide jobs in a coarse manner Any script which is executable and placed inside them will run at the frequency which its name suggests

For example if you place a script inside etccrondaily it will be executed once per day every day

The time that the scripts run in those system-wide directories is not something that an administration typically changes but the times can be adjusted by editing the file etccrontab The format of this file will be explained shortly

The normal manner which people use cron is via the crontab command This allows you to view or edit your crontab file which is a per-user file containing entries describing commands to execute and the time(s) to execute them

To display your file you run the following command

crontab -l

root can view any users crontab file by adding ldquo-u usernameldquo for example

crontab -u skx -l List skxs crontab file

The format of these files is fairly simple to understand Each line is a collection of six fields separated by spaces

The fields are

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0 - 7)
|     |     |     +------- Month (1 - 12)
|     |     +--------- Day of month (1 - 31)
|     +----------- Hour (0 - 23)
+------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontabe file run

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary) When you save the file and quit your editor it will be installed into the system unless it is found to contain errors

If you wish to change the editor used to edit the file set the EDITOR environmental variable like this

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following

0 * * * * /bin/ls

When yoursquove saved the file and quit your editor you will see a message such as

crontab installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here wersquove told the cron system to execute the command ldquobinlsrdquo every time the minute equals 0 ie Wersquore running the command on the hour every hour

Any output of the command you run will be sent to you by email if you wish to stop this then you should cause it to be redirected as follows

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null - meaning you won't see it.

Now wersquoll finish with some more examples

Run the `something` command every hour on the hour:
0 * * * * /sbin/something

Run the `nightly` command at ten minutes past midnight every day:
10 0 * * * /bin/nightly

Run the `monday` command every Monday at 2 AM:
0 2 * * 1 /usr/local/bin/monday

One last tip If you want to run something very regularly you can use an alternate syntax Instead of using only single numbers you can use ranges or sets

A range of numbers indicates that every item in that range will be matched if you use the following line yoursquoll run a command at 1AM 2AM 3AM and 4AM

Use a range of hours, matching 1, 2, 3 and 4AM:
* 1-4 * * * /bin/some-hourly

A set is similar consisting of a collection of numbers seperated by commas each item in the list will be matched The previous example would look like this using sets

Use a set of hours, matching 1, 2, 3 and 4AM:
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup ndash scheduling in windows environment

Create a batch file as cold_bkpbat

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on add a scheduled task.
2. Click next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click next and finish the scheduling.

Note

Whenever the os user name and password are changed reschedule the scheduled tasks If you donrsquot reschedule it the job wonrsquot run So edit the scheduled tasks and enter the new password

Comment

Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak — 1 Comment. December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I Before Switchover

1 As I always recommend test the Switchover first on your testing systems before working on Production

2 Verify the primary database instance is open and the standby database instance is mounted

3 Verify there are no active users connected to the databases

4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following command on the Primary database and the Standby database to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.
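The command for that (the same one shown in the standby-creation post later on this blog) is:

SQL> alter database recover managed standby database using current logfile disconnect;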

II Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby db STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can simply open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
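As a quick sanity check (my addition, not part of the original write-up), the role of each instance can be confirmed from v$database:

SQL> select name, database_role, switchover_status from v$database;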

Comment

Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak — Leave a comment. December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature, which enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature then remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet.

Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 – Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open
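To clear the ORA-28365 error, the wallet simply has to be reopened before re-running the export. A minimal sketch (my addition; syntax as I recall it for 10gR2/11gR1, using the wallet password from the earlier ALTER SYSTEM example):

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";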

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 – Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 – Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 – Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
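For comparison (my addition, not from the white paper excerpt), encrypting the entire dump file set in 11g, rather than only the TDE columns, would use ENCRYPTION=ALL:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION=ALL ENCRYPTION_PASSWORD=dump_pwd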

Comment

DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak — Leave a comment. December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides regular operating system command-line mode there is now a very powerful interactive Command-line mode which allows the user to monitor and control Data Pump Export and Import operations Changing from Original ExportImport to Oracle Data Pump Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created you need to grant READ and WRITE permission on the directory to other users For example to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1 you must execute the following command

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES, and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job You cannot mix the two parameters in one job

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
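As a small illustration of EXCLUDE (my own example, assuming a full export where optimizer statistics are not wanted):

> expdp username/password FULL=y DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp EXCLUDE=STATISTICS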

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
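A rough sketch of what that looks like (my addition; the job name hr matches the export example further below): attach to the running job and change the degree from the interactive prompt.

> expdp username/password ATTACH=hr
Export> PARALLEL=8
Export> STATUS
Export> CONTINUE_CLIENT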

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%U.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (a brief sketch of such a session follows the list):

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
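A minimal sketch of such a session (mine, with a hypothetical job name full_exp): stop the job, then reattach later and resume it.

> expdp username/password ATTACH=full_exp
Export> STOP_JOB=IMMEDIATE

(later, when resources are available again)
> expdp username/password ATTACH=full_exp
Export> START_JOB
Export> CONTINUE_CLIENT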

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
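For illustration (my addition; source_db is a hypothetical database link pointing at the remote instance), a network-mode export reads from the remote database over the link and writes the dump file on the local instance:

> expdp username/password TABLES=scott.emp NETWORK_LINK=source_db DIRECTORY=dpump_dir1 DUMPFILE=emp_net.dmp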

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.

Comment

Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak — Leave a comment. December 10, 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1. Take a full backup using RMAN.

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQLgt ho rman

Recovery Manager: Release 9.2.0.1.0 – Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for a database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='target db oradata path','clone db oradata path'

log_file_name_convert='target db oradata path','clone db oradata path'

3. Start the listener using the lsnrctl command, and then start up the clone db in nomount using the pfile.

SQL> conn / as sysdba

Connected to an idle instance

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQLgt ho rman

RMAN> connect target sys/sys@test

connected to target database TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running…

SQLgt select name from v$database

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQLgt ho rman

SQLgt alter database mount

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs.

SQL> alter database open resetlogs;

Database altered

9. Check for the DBID.

10. Create a temporary tablespace.
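A minimal sketch of that step (my addition; the tablespace name, file path and size are placeholders):

SQL> create temporary tablespace temp1 tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100m autoextend on;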

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

Comment

step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak — Leave a comment. December 9, 2009

Oracle 10g ndash Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection, and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example, the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups.
My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY.
If your PRIMARY database is not already in archive log mode, enable archive log mode:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

5. Set PRIMARY database initialization parameters.
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;

- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;

(Note: replace <ORACLE_HOME> with your Oracle home path.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create the spfile from the pfile, and restart the PRIMARY database using the new spfile.
Data Guard must use an SPFILE. Create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;
(Note: replace <ORACLE_HOME> with your Oracle home path.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over

2. Create a control file for the STANDBY database.
On PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, to the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4. On the STANDBY server, create all required directories for the dump and archived log destinations.
Create the adump, bdump, cdump, udump and archived log destination directories for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora.
On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: replace <ORACLE_HOME> with your Oracle home path.)

12. Start redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14 If you want the redo data to be applied as it is received without waiting for the current STANDBY redo log file to be archived enable the real-time apply

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target /@STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management.
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


COMP_ID   COMP_NAME                            STATUS   VERSION
--------- ------------------------------------ -------- ----------
CATALOG   Oracle Database Catalog Views        VALID    10.1.0.2.0
CATPROC   Oracle Database Packages and Types   VALID    10.1.0.2.0
JAVAVM    JServer JAVA Virtual Machine         VALID    10.1.0.2.0
XML       Oracle XDK                           VALID    10.1.0.2.0
CATJAVA   Oracle Database Java Packages        VALID    10.1.0.2.0
XDB       Oracle XML Database                  VALID    10.1.0.2.0
OWM       Oracle Workspace Manager             VALID    10.1.0.2.0
ODM       Oracle Data Mining                   VALID    10.1.0.2.0
APS       OLAP Analytic Workspace              VALID    10.1.0.2.0
AMD       OLAP Catalog                         VALID    10.1.0.2.0
XOQ       Oracle OLAP API                      VALID    10.1.0.2.0
ORDIM     Oracle interMedia                    VALID    10.1.0.2.0
SDO       Spatial                              VALID    10.1.0.2.0
CONTEXT   Oracle Text                          VALID    10.1.0.2.0
WK        Oracle Ultra Search                  VALID    10.1.0.2.0

15 rows selected.

DOC>
DOC> The above query lists the SERVER components in the upgraded
DOC> database, along with their current version and status.
DOC>
DOC> Please review the status and version columns and look for
DOC> any errors in the spool log file. If there are errors in the spool
DOC> file, or any components are not VALID or not the current version,
DOC> consult the Oracle Database Upgrade Guide for troubleshooting
DOC> recommendations.
DOC>
DOC> Next, shutdown immediate, restart for normal operation, and then
DOC> run utlrp.sql to recompile any invalid application objects.
DOC>

PL/SQL procedure successfully completed.

COMP_ID   COMP_NAME                            STATUS   VERSION
--------- ------------------------------------ -------- ----------
CATALOG   Oracle Database Catalog Views        VALID    10.1.0.2.0
CATPROC   Oracle Database Packages and Types   VALID    10.1.0.2.0
JAVAVM    JServer JAVA Virtual Machine         VALID    10.1.0.2.0
XML       Oracle XDK                           VALID    10.1.0.2.0
CATJAVA   Oracle Database Java Packages        VALID    10.1.0.2.0
XDB       Oracle XML Database                  VALID    10.1.0.2.0
OWM       Oracle Workspace Manager             VALID    10.1.0.2.0
ODM       Oracle Data Mining                   VALID    10.1.0.2.0
APS       OLAP Analytic Workspace              VALID    10.1.0.2.0
AMD       OLAP Catalog                         VALID    10.1.0.2.0
XOQ       Oracle OLAP API                      VALID    10.1.0.2.0
ORDIM     Oracle interMedia                    VALID    10.1.0.2.0
SDO       Spatial                              VALID    10.1.0.2.0
CONTEXT   Oracle Text                          VALID    10.1.0.2.0
WK        Oracle Ultra Search                  VALID    10.1.0.2.0

15 rows selected.

DOC>
DOC> The above query lists the SERVER components in the upgraded
DOC> database, along with their current version and status.
DOC>
DOC> Please review the status and version columns and look for
DOC> any errors in the spool log file. If there are errors in the spool
DOC> file, or any components are not VALID or not the current version,
DOC> consult the Oracle Database Upgrade Guide for troubleshooting
DOC> recommendations.
DOC>
DOC> Next, shutdown immediate, restart for normal operation, and then
DOC> run utlrp.sql to recompile any invalid application objects.
DOC>

TIMESTAMP
--------------------------------------------------------------
COMP_TIMESTAMP DBUPG_END   2009-08-22 22:59:09

1 row selected.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------
776

1 row selected.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PL/SQL procedure successfully completed.

Oracle Database 10.1 Upgrade Status Tool    22-AUG-2009 11:18:36

--> Oracle Database Catalog Views        Normal successful completion
--> Oracle Database Packages and Types   Normal successful completion
--> JServer JAVA Virtual Machine         Normal successful completion
--> Oracle XDK                           Normal successful completion
--> Oracle Database Java Packages        Normal successful completion
--> Oracle XML Database                  Normal successful completion
--> Oracle Workspace Manager             Normal successful completion
--> Oracle Data Mining                   Normal successful completion
--> OLAP Analytic Workspace              Normal successful completion
--> OLAP Catalog                         Normal successful completion
--> Oracle OLAP API                      Normal successful completion
--> Oracle interMedia                    Normal successful completion
--> Spatial                              Normal successful completion
--> Oracle Text                          Normal successful completion
--> Oracle Ultra Search                  Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP
--------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN    2009-08-22 23:19:07

1 row selected.

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END    2009-08-22 23:20:13

1 row selected.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
         0

1 row selected.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod
PL/SQL Release 10.1.0.2.0 - Production
CORE    10.1.0.2.0      Production
TNS for 32-bit Windows: Version 10.1.0.2.0 - Production
NLSRTL Version 10.1.0.2.0 - Production

5 rows selected.

Check the database and confirm that everything is working fine. A quick sanity check is sketched below.
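As a minimal post-upgrade sanity check (these dictionary queries are standard, but they are an addition here and were not part of the original transcript):

SQL> select comp_id, status, version from dba_registry;
SQL> select count(*) from dba_objects where status='INVALID';
SQL> select name, open_mode from v$database;

All components should report VALID at 10.1.0.2.0 and the invalid-object count should be 0, matching the output shown above.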


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database, using backups taken from RMAN on an alternate host. By Deepak, 3 Comments, February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink note ID 732624.1

Hi,

Just wanted to share this topic.

How do you duplicate a database without connecting to the target database, using backups taken from RMAN, on an alternate host?

Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as of production>.

Create an init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) Startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backuppiece of controlfile which you took on production>';

The controlfile should now be restored.

4) Issue "alter database mount". Make sure the backup pieces are in the same location as they were on the production database. If you don't have the same location, make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged with:

RMAN> catalog start with '<path where backuppieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below (a consolidated sketch of all the steps follows this list):

run {
  set newname for datafile 1 to '<newLocation>/system.dbf';
  set newname for datafile 2 to '<newLocation>/undotbs.dbf';
  ...
  restore database;
  switch datafile all;
}
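Putting the note's steps together, a minimal end-to-end sketch, assuming hypothetical locations (/backup for the backup pieces, /clone for the restored files) and a production SID of PROD; recovery and OPEN RESETLOGS would normally follow the restore, but they are outside the scope of the note above:

$ export ORACLE_SID=PROD
$ rman target /
RMAN> startup nomount pfile='/clone/initPROD.ora';
RMAN> restore controlfile from '/backup/ctl_backuppiece';
RMAN> alter database mount;
RMAN> catalog start with '/backup/';
RMAN> run {
        set newname for datafile 1 to '/clone/system.dbf';
        set newname for datafile 2 to '/clone/undotbs.dbf';
        restore database;
        switch datafile all;
      }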


Features introduced in the various Oracle server releases

Filed under: Features of various releases of the Oracle Database. By Deepak, February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or whether an upgrade is required.

Oracle 10g Release 2 (10.2.0) - September 2005

- Transparent Data Encryption
- Async commits
- CONNECT role can now only connect
- Passwords for DB links are encrypted
- New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (10.1.0)

- Grid computing - an extension of the clustering feature (Real Application Clusters)
- Manageability improvements (self-tuning features)
- Performance and scalability improvements
- Automated Storage Management (ASM)
- Automatic Workload Repository (AWR)
- Automatic Database Diagnostic Monitor (ADDM)
- Flashback operations available at the row, transaction, table or database level
- Ability to UNDROP a table from a recycle bin
- Ability to rename tablespaces
- Ability to transport tablespaces across machine types (e.g. Windows to Unix)
- New 'drop database' statement
- New database scheduler - DBMS_SCHEDULER
- DBMS_FILE_TRANSFER package
- Support for bigfile tablespaces of up to 8 Exabytes in size
- Data Pump - faster data movement with expdp and impdp

Oracle 9i Release 2 (9.2.0)

- Locally managed SYSTEM tablespaces
- Oracle Streams - new data sharing/replication feature (can potentially replace Oracle Advanced Replication and standby databases)
- XML DB (Oracle is now a standards-compliant XML database)
- Data segment compression (compress keys in tables - only when loading data)
- Cluster file system for Windows and Linux (raw devices are no longer required)
- Create logical standby databases with Data Guard
- Java JDK 1.3 used inside the database (JVM)
- Oracle Data Guard enhancements (SQL Apply mode - logical copy of primary database, automatic failover)
- Security improvements - default install accounts locked, VPD on synonyms, AES, migrate users to directory

Oracle 9i Release 1 (9.0.1) - June 2001

- Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "rollback segments" and size them automatically without any DBA involvement.
- Flashback query (dbms_flashback.enable) - one can query data as it looked at some point in the past. This feature allows users to correct wrongly committed transactions without contacting the DBA to do a database restore.
- Oracle Ultra Search for searching databases, file systems, etc. The UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.
- Oracle Nameserver is still available but deprecated in favour of LDAP Naming (using the Oracle Internet Directory Server). A nameserver proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.
- Oracle Parallel Server's (OPS) scalability was improved - now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster; applications don't need to be cluster-aware anymore.
- The Oracle Standby DB feature was renamed to Oracle Data Guard. New logical standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.
- Scrollable cursor support. Oracle9i allows fetching backwards in a result set.
- Dynamic memory management - buffer pools and the shared pool can be resized on the fly. This eliminates the need to restart the database each time parameter changes are made.
- Online table and index reorganization.
- VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net). VI provides fast communications between components in a cluster.
- Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.
- The Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.
- PL/SQL programs can be natively compiled to binaries.
- Deep data protection - fine-grained security and auditing. Security is put at the DB level; SQL access does not mean unrestricted access.
- Resumable backups and statements - suspend a statement instead of rolling back immediately.
- List partitioning - partitioning on a list of values.
- ETL (eXtract, transformation, load) operations - with external tables and pipelining.
- OLAP - Express functionality included in the DB.
- Data Mining - Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)

- Static HTTP server included (Apache)
- JVM Accelerator to improve performance of Java code
- Java Server Pages (JSP) engine
- MemStat - a new utility for analyzing Java memory footprints
- OIS - Oracle Integration Server introduced
- PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web
- Enterprise Manager enhancements - including new HTML-based reporting and Advanced Replication functionality
- New Database Character Set Migration utility included

Oracle 8i (8.1.6)

- PL/SQL Server Pages (PSPs)
- DBA Studio introduced
- Statspack
- New SQL functions (rank, moving average)
- ALTER FREELISTS command (previously done by DROP/CREATE TABLE)
- Checksums always on for the SYSTEM tablespace, allowing many possible corruptions to be fixed before writing to disk
- XML Parser for Java
- New PL/SQL encrypt/decrypt package introduced
- Users and schemas separated
- Numerous performance enhancements

Oracle 8i (8.1.5)

- Fast Start recovery - checkpoint rate auto-adjusted to meet roll-forward criteria
- Reorganize indexes/index-only tables while users are accessing data - online index rebuilds
- Log Miner introduced - allows online or archived redo logs to be viewed via SQL
- OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication
- Advanced Queueing improvements (security, performance, OO4O support)
- User security improvements - more centralisation, single enterprise user, users/roles across multiple databases
- Virtual private database
- Java stored procedures (Oracle Java VM)
- Oracle iFS
- Resource management using priorities - resource classes
- Hash and composite partitioned table types
- SQL*Loader direct load API
- Copy optimizer statistics across databases to ensure the same access paths across different environments
- Standby database - auto shipping and application of redo logs; read-only queries on the standby database allowed
- Enterprise Manager v2 delivered
- NLS - Euro symbol supported
- Analyze tables in parallel
- Temporary tables supported
- Net8 support for SSL, HTTP, HOP protocols
- Transportable tablespaces between databases
- Locally managed tablespaces - automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability
- Drop column on table (finally!)
- DBMS_DEBUG PL/SQL package
- DBMS_SQL replaced by the new EXECUTE IMMEDIATE statement
- Progress monitor to track long-running DML and DDL
- Functional indexes - NLS, case insensitive, descending

Oracle 8.0 - June 1997

- Object-relational database: object types (not just date, character and number as in v7), SQL3 standard
- Call external procedures
- LOBs, more than one per table
- Partitioned tables and indexes: export/import individual partitions, partitions in multiple tablespaces, online/offline backup/recover of individual partitions, merge/balance partitions
- Advanced Queuing for message handling
- Many performance improvements to SQL/PLSQL/OCI making more efficient use of CPU and memory; v7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2)
- Parallel DML statements
- Connection pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users
- Improved "STAR" query optimizer
- Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM in v7)
- Performance improvements in OPS - global V$ views introduced across all instances, transparent failover to a new node
- Data cartridges introduced in the database (e.g. image, video, context, time, spatial)
- Backup/recovery improvements - tablespace point-in-time recovery, incremental backups, parallel backup/recovery; Recovery Manager introduced
- Security Server introduced for central user administration; user password expiry, password profiles, allow custom password scheme; privileged database links (no need for a password to be stored)
- Fast refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel, Replication Manager introduced
- Index-organized tables
- Deferred integrity constraint checking (deferred until end of transaction instead of end of statement)
- SQL*Net replaced by Net8
- Reverse key indexes
- Any VIEW updateable
- New ROWID format

Oracle 7.3

- Partitioned views
- Bitmapped indexes
- Asynchronous read-ahead for table scans
- Standby database
- Deferred transaction recovery on instance startup
- Updatable join views (with restrictions)
- SQL*DBA no longer shipped
- Index rebuilds
- db_verify introduced
- Context option
- Spatial Data option
- Tablespace changes - coalesce, temporary, permanent
- Trigger compilation and debug
- Unlimited extents on the STORAGE clause
- Some init.ora parameters modifiable - TIMED_STATISTICS
- Hash joins, antijoins
- Histograms
- Dependencies
- Oracle Trace
- Advanced Replication object groups
- PL/SQL - UTL_FILE

Oracle 7.2

- Resizable, autoextend data files
- Shrink rollback segments manually
- Create table or index UNRECOVERABLE
- Subquery in the FROM clause
- PL/SQL wrapper
- PL/SQL cursor variables
- Checksums - DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM
- Parallel create table
- Job queues - DBMS_JOB
- DBMS_SPACE
- DBMS Application Info
- Sorting improvements - SORT_DIRECT_WRITES

Oracle 7.1

- ANSI/ISO SQL92 Entry Level
- Advanced Replication - symmetric data replication
- Snapshot refresh groups
- Parallel recovery
- Dynamic SQL - DBMS_SQL
- Parallel query options - query, index creation, data loading
- Server Manager introduced
- Read-only tablespaces

Oracle 7.0 - June 1992

- Database integrity constraints (primary and foreign keys, check constraints, default values)
- Stored procedures and functions, procedure packages
- Database triggers
- View compilation
- User-defined SQL functions
- Role-based security
- Multiple redo members - mirrored online redo log files
- Resource limits - profiles
- Much enhanced auditing
- Enhanced distributed database functionality - INSERTs, UPDATEs, DELETEs, 2PC
- Incomplete database recovery (e.g. to an SCN)
- Cost based optimiser
- TRUNCATE tables
- Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR)
- SQL*Net v2, MTS
- Checkpoint process
- Data replication - snapshots

Oracle 6.2

- Oracle Parallel Server

Oracle 6 - July 1988

- Row-level locking
- On-line database backups
- PL/SQL in the database

Oracle 5.1

- Distributed queries

Oracle 5.0 - 1986

- Support for the client-server model - PCs can access the DB on a remote host

Oracle 4 - 1984

- Read consistency

Oracle 3 - 1981

- Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)
- Nonblocking queries (no more read locks)
- Re-written in the C programming language

Oracle 2 - 1979

- First public release
- Basic SQL functionality: queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh, by Deepak, 1 Comment, December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file (a spooling sketch is shown after this list).

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;
2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege || ' to sh;' from session_privs;
5. select 'grant ' || role || ' to sh;' from session_roles;
6. Query the default tablespace and its size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH'
   group by tablespace_name;
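A minimal SQL*Plus sketch for capturing those grant statements in a file (the file name sh_grants.sql is only an illustration; run the queries while connected as the user being refreshed, since session_privs and session_roles describe the current session):

SQL> set head off feedback off pages 0
SQL> spool sh_grants.sql
SQL> select 'grant ' || privilege || ' to sh;' from session_privs;
SQL> select 'grant ' || role || ' to sh;' from session_roles;
SQL> spool off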

Export the 'SH' schema:

exp username/password file=/location/sh_bkp.dmp log=/location/sh_exp.log owner=SH direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE', estimate_percent => 20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema

Take the full schema export as shown above.

Drop all the objects in the 'SH' schema

To drop all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off
SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the spooled scripts; all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE', estimate_percent => 20);

Now connect as the SH user and check the imported data.

To enable constraints, use the query below to generate the statements:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema

To truncate all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off
SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the spooled scripts; all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the referencing (foreign key) constraints and disable them. Spool the output of the query and run the generated script (a sketch that turns it into ALTER TABLE statements follows).

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');
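A hedged sketch that generates the corresponding ALTER TABLE ... DISABLE CONSTRAINT statements into a script (the spool file name is illustrative; substitute the real parent table name for TABLE_NAME):

SQL> set head off pages 0
SQL> spool disable_ref_constraints.sql
SQL> select 'alter table '||table_name||' disable constraint '||constraint_name||';'
       from all_constraints
      where constraint_type='R'
        and r_constraint_name in (select constraint_name from all_constraints
                                   where table_name='TABLE_NAME');
SQL> spool off
SQL> @disable_ref_constraints.sql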

Importing the 'SH' schema:

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE', estimate_percent => 20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> Drop user SH cascade;

Importing the SH schema through Data Pump:

impdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option (a sketch is shown below).
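For example, a minimal sketch of loading the same dump into a hypothetical SH_TEST schema (the target schema name is only an illustration):

impdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh remap_schema=sh:sh_test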

Check the imported objects and compile any invalid objects.


JOB SCHEDULING

Filed under: JOB SCHEDULING, by Deepak, December 15, 2009

CRON JOB SCHEDULING - IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d  /etc/cron.daily  /etc/cron.hourly  /etc/cron.monthly  /etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file, you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l        # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sunday, or use the name)
6. The command to run

More graphically, they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0 - 7)
|     |     |     +----------- Month (1 - 12)
|     |     +----------------- Day of month (1 - 31)
|     +----------------------- Hour (0 - 23)
+----------------------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor on your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system, unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email. If you wish to stop this, then you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null - meaning you won't see it.

Now we'll finish with some more examples.

# Run the `something` command every hour, on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers, you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4 AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4 AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp\coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run, so edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g, by Deepak, 1 Comment, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM

Standby = STAN

I. Before Switchover:

1. As I always recommend, test the switchover first on your test systems before working on production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary database and the standby database to find out:

SQL> select sequence#, applied from v$archived_log;

Perform ALTER SYSTEM SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply (a hedged example of enabling it is shown below).
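A minimal sketch of enabling real-time apply on the standby; this assumes standby redo logs have already been configured, which the steps above do not cover:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;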

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:

SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:

SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:

- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can simply open the new primary database STAN:

SQL> ALTER DATABASE OPEN;

STAN has now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:

SQL> ALTER SYSTEM SWITCH LOGFILE;


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump, by Deepak, December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns, for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format. Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows:

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows:

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU intensive operations. Furthermore, additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2, the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP
     (empid   NUMBER(6),
      empname VARCHAR2(100),
      salary  NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key; Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although the ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"                              6.25 KB       3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job of pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned (a sketch of opening the wallet is shown below). Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
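A minimal sketch of opening the wallet on the target database before running the import; the wallet password here is whatever was chosen when that wallet was created (wallet_pwd below simply reuses the example password from earlier):

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";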

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table, but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

- A full metadata-only import
- A schema-mode import in which the referenced schemas do not include tables with encrypted columns
- A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g, by Deepak, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import.

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server, using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode, which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump: Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the following SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file. A hedged EXCLUDE example is sketched below.
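For instance, a sketch of a full export that filters out statistics and one schema; the dump file name and the SCOTT schema here are illustrative, and because of shell quoting rules such filters are often better placed in a parameter file:

> expdp username/password FULL=y DIRECTORY=dpump_dir1 DUMPFILE=full_minus_scott.dmp EXCLUDE=STATISTICS EXCLUDE=SCHEMA:"='SCOTT'"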

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export.

Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

- Make sure your system is well balanced across CPU, memory and I/O.
- Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.
- Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.
- For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

- REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

- REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it is best to specify the parameter within a parameter file.

Example:

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (a short sketch of attaching to a job follows this list):

- See the status of the job. All of the information needed to monitor the job's execution is available.
- Add more dump files if there is insufficient disk space for an export file.
- Change the default size of the dump files.
- Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).
- Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.
- Increase or decrease the number of active worker processes for the job (Enterprise Edition only).
- Attach to a job from a remote site (such as from home) to monitor status.
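A minimal sketch of this workflow, reusing the JOB_NAME=hr export started earlier; STATUS, PARALLEL, STOP_JOB, START_JOB and CONTINUE_CLIENT are standard Data Pump interactive commands:

> expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE

(later, to resume)

> expdp username/password ATTACH=hr
Export> START_JOB
Export> CONTINUE_CLIENT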

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database. A hedged example of a network-mode import is sketched below.
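A minimal sketch of a network-mode import; the database link name remote_db and the object names are illustrative, and the directory object is still needed for the log file even though no dump file is written:

> impdp username/password TABLES=scott.emp DIRECTORY=dpump_dir1 NETWORK_LINK=remote_db REMAP_SCHEMA=scott:jim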

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp

SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak — Leave a comment

December 10 2009

Clone database using RMAN

Target db: test

Clone db: clone

In target database

1. Take a full backup using RMAN.

SQL> archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 – Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for a database.
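On Windows, step 1 typically amounts to something like the following (the SID, password and folder names here are illustrative, not from the original post):

C:\> oradim -NEW -SID clone -INTPWD sys_password -STARTMODE manual
C:\> orapwd file=C:\oracle\ora92\database\PWDclone.ora password=sys_password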

2. Edit the pfile and add the following parameters:

db_file_name_convert='target db oradata path','clone db oradata path'

log_file_name_convert='target db oradata path','clone db oradata path'
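With the directory layout used in this example, the two parameters might look like this (the paths are illustrative):

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'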

3. Start the listener using the lsnrctl command, and then start the clone db in nomount using the pfile.

SQL> conn / as sysdba

Connected to an idle instance

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running…

SQL> select name from v$database;

select name from v$database

ERROR at line 1:

ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;

alter database mount

ERROR at line 1:

ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs.

SQL> alter database open resetlogs;

Database altered

9. Check for the DBID.

10. Create a temporary tablespace.

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak — Leave a comment December 9, 2009

Oracle 10g – Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips for the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I. Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$cd ORACLE_HOME\database
$orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$cd $ORACLE_HOME/dbs
$orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY Redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY Redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable Archiving on PRIMARY. If your PRIMARY database is not already in Archive Log mode, enable the archive log mode:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY Database Initialization Parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='…\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '…'.)

- On UNIX:
SQL> create pfile='…/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '…'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE; create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='…\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='…\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace '…'.)

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='…/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='…/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace '…'.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a Control File for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump, udump and archived log destination directories for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY control file destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$lsnrctl stop
$lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$lsnrctl stop
$lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY
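For reference, the tnsnames.ora entries generated by Net Manager would look roughly like this (the host names and port are illustrative):

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = STANDBY))
  )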

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='…\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='…\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='…/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='…/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: specify your Oracle home path to replace '…'.)

12. Start Redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.
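The weekly PRIMARY backup job itself is not shown in the original post; in RMAN it would be roughly:

RMAN> backup database plus archivelog delete input;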

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly, otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2 to update/recreate the password file for the STANDBY database.

Page 22: Manual Database Up Gradation From 9

DOC> Please review the status and version columns and look for
DOC> any errors in the spool log file. If there are errors in the spool
DOC> file, or any components are not VALID or not the current version,
DOC> consult the Oracle Database Upgrade Guide for troubleshooting
DOC> recommendations.
DOC>
DOC> Next shutdown immediate, restart for normal operation, and then
DOC> run utlrp.sql to recompile any invalid application objects.
DOC>
DOC>
DOC>
DOC>

PL/SQL procedure successfully completed.

COMP_ID    COMP_NAME                            STATUS    VERSION
---------- ------------------------------------ --------- ----------
CATALOG    Oracle Database Catalog Views        VALID     10.1.0.2.0
CATPROC    Oracle Database Packages and Types   VALID     10.1.0.2.0
JAVAVM     JServer JAVA Virtual Machine         VALID     10.1.0.2.0
XML        Oracle XDK                           VALID     10.1.0.2.0
CATJAVA    Oracle Database Java Packages        VALID     10.1.0.2.0
XDB        Oracle XML Database                  VALID     10.1.0.2.0
OWM        Oracle Workspace Manager             VALID     10.1.0.2.0
ODM        Oracle Data Mining                   VALID     10.1.0.2.0
APS        OLAP Analytic Workspace              VALID     10.1.0.2.0
AMD        OLAP Catalog                         VALID     10.1.0.2.0
XOQ        Oracle OLAP API                      VALID     10.1.0.2.0
ORDIM      Oracle interMedia                    VALID     10.1.0.2.0
SDO        Spatial                              VALID     10.1.0.2.0
CONTEXT    Oracle Text                          VALID     10.1.0.2.0
WK         Oracle Ultra Search                  VALID     10.1.0.2.0

15 rows selected.

DOC>
DOC>
DOC>
DOC> The above query lists the SERVER components in the upgraded
DOC> database, along with their current version and status.
DOC>
DOC> Please review the status and version columns and look for
DOC> any errors in the spool log file. If there are errors in the spool
DOC> file, or any components are not VALID or not the current version,
DOC> consult the Oracle Database Upgrade Guide for troubleshooting
DOC> recommendations.
DOC>
DOC> Next shutdown immediate, restart for normal operation, and then
DOC> run utlrp.sql to recompile any invalid application objects.
DOC>
DOC>
DOC>
DOC>

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP DBUPG_END 2009-08-22 22:59:09

1 row selected

SQL> shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQLgt startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------

776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PL/SQL procedure successfully completed.

Oracle Database 10.1 Upgrade Status Tool 22-AUG-2009 11:18:36

--> Oracle Database Catalog Views Normal successful completion
--> Oracle Database Packages and Types Normal successful completion
--> JServer JAVA Virtual Machine Normal successful completion
--> Oracle XDK Normal successful completion
--> Oracle Database Java Packages Normal successful completion
--> Oracle XML Database Normal successful completion
--> Oracle Workspace Manager Normal successful completion
--> Oracle Data Mining Normal successful completion
--> OLAP Analytic Workspace Normal successful completion
--> OLAP Catalog Normal successful completion
--> Oracle OLAP API Normal successful completion
--> Oracle interMedia Normal successful completion
--> Spatial Normal successful completion
--> Oracle Text Normal successful completion
--> Oracle Ultra Search Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN 2009-08-22 23:19:07

1 row selected.

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END 2009-08-22 23:20:13

1 row selected.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------

0

1 row selected

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 – Prod
PL/SQL Release 10.1.0.2.0 – Production
CORE 10.1.0.2.0 Production
TNS for 32-bit Windows: Version 10.1.0.2.0 – Production
NLSRTL Version 10.1.0.2.0 – Production

5 rows selected

Check the database to verify that everything is working fine.


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host by Deepak — 3 Comments February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database – from Metalink ID 732624.1

Hi,

Just wanted to share this topic

How to duplicate a database without connecting to the target database, using backups taken from RMAN, on an alternate host.
Solution: Follow the steps below.
1) Export ORACLE_SID=<SID name as of production>

Create the init.ora file and give db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.
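A minimal init.ora for this purpose might look like the following (the database name and path are illustrative, not from the original note):

db_name=PROD
control_files='/u01/oradata/PROD/control01.ctl'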

2) Startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backuppiece of controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure the backup pieces are in the same location as they were on the production db. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';
If there are more backup pieces, they can be cataloged using the command:
RMAN> catalog start with '<path where backuppieces are stored>';
5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:
run {
set newname for datafile 1 to '<newLocation>/system.dbf';
set newname for datafile 2 to '<newLocation>/undotbs.dbf';
…
restore database;
switch datafile all;
}


Features introduced in the various Oracle server releases

Filed under: Features Of Various Releases of Oracle Database by Deepak — Leave a comment February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02

This document summarizes the differences between Oracle Server releases

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) – September 2005

Transparent Data Encryption; Async commits; CONNECT role can now only connect; Passwords for DB Links are encrypted; New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (10.1.0)

Grid computing – an extension of the clustering feature (Real Application Clusters); Manageability improvements (self-tuning features)

Performance and scalability improvements; Automated Storage Management (ASM); Automatic Workload Repository (AWR); Automatic Database Diagnostic Monitor (ADDM); Flashback operations available at row, transaction, table or database level; Ability to UNDROP a table from a recycle bin; Ability to rename tablespaces; Ability to transport tablespaces across machine types (e.g. Windows to Unix); New 'drop database' statement; New database scheduler – DBMS_SCHEDULER; DBMS_FILE_TRANSFER package; Support for bigfile tablespaces, up to 8 Exabytes in size; Data Pump – faster data movement with expdp and impdp

Oracle 9i Release 2 (9.2.0)

Locally Managed SYSTEM tablespaces; Oracle Streams – new data sharing/replication feature (can potentially replace Oracle

Advanced Replication and Standby Databases); XML DB (Oracle is now a standards-compliant XML database); Data segment compression (compress keys in tables – only when loading data); Cluster file system for Windows and Linux (raw devices are no longer required); Create logical standby databases with Data Guard; Java JDK 1.3 used inside the database (JVM); Oracle Data Guard enhancements (SQL Apply mode – logical copy of primary database,

automatic failover); Security improvements – default install accounts locked, VPD on synonyms, AES,

Migrate Users to Directory

Oracle 9i Release 1 (9.0.1) – June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "Rollback Segments" and size them automatically without any DBA involvement.

Flashback query (dbms_flashback.enable) – one can query data as it looked at some point in the past. This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore.

Use Oracle Ultra Search for searching databases, file systems, etc. The UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.

Oracle Nameserver is still available but deprecated in favour of LDAP Naming (using the Oracle Internet Directory Server). A nameserver proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.

Oracle Parallel Server's (OPS) scalability was improved – now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster. Applications don't need to be cluster aware anymore.

The Oracle Standby DB feature was renamed to Oracle Data Guard. New Logical Standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.

Scrolling cursor support: Oracle9i allows fetching backwards in a result set. Dynamic Memory Management – Buffer Pools and shared pool can be resized on-the-fly.

This eliminates the need to restart the database each time parameter changes were made. On-line table and index reorganization. VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with

Oracle Net (SQL*Net). VI provides fast communications between components in a cluster.

Built-in XML Developers Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.

Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.

PL/SQL programs can be natively compiled to binaries. Deep data protection – fine-grained security and auditing. Put security on DB level; SQL

access does not mean unrestricted access. Resumable backups and statements – suspend statement instead of rolling back

immediately. List Partitioning – partitioning on a list of values. ETL (eXtract, transformation, load) operations – with external tables and pipelining. OLAP – Express functionality included in the DB. Data Mining – Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)

Static HTTP server included (Apache); JVM Accelerator to improve performance of Java code; Java Server Pages (JSP) engine; MemStat – a new utility for analyzing Java memory footprints; OIS – Oracle Integration Server introduced; PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web; Enterprise Manager enhancements – including new HTML based reporting, and

Advanced Replication functionality included; New Database Character Set Migration utility included

Oracle 8i (8.1.6)

PL/SQL Server Pages (PSPs); DBA Studio introduced; Statspack; New SQL functions (rank, moving average); ALTER FREELISTS command (previously done by DROP/CREATE TABLE); Checksums always on for SYSTEM tablespace, allowing many possible corruptions to be

fixed before writing to disk

XML Parser for Java; New PL/SQL encrypt/decrypt package introduced; User and Schemas separated; Numerous performance enhancements

Oracle 8i (8.1.5)

Fast Start recovery – checkpoint rate auto-adjusted to meet roll forward criteria; Reorganize indexes/index-only tables while users access data – online index rebuilds; Log Miner introduced – allows online or archived redo logs to be viewed via SQL; OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication; Advanced Queueing improvements (security, performance, OO4O support); User security improvements – more centralisation, single enterprise user, users/roles

across multiple databases; Virtual private database; JAVA stored procedures (Oracle Java VM); Oracle iFS; Resource Management using priorities – resource classes; Hash and Composite partitioned table types; SQL*Loader direct load API; Copy optimizer statistics across databases to ensure same access paths across different

environments; Standby Database – auto shipping and application of redo logs; Read-only queries on

standby database allowed; Enterprise Manager v2 delivered; NLS – Euro symbol supported; Analyze tables in parallel; Temporary tables supported; Net8 support for SSL, HTTP, HOP protocols; Transportable tablespaces between databases; Locally managed tablespaces – automatic sizing of extents, elimination of tablespace

fragmentation, tablespace information managed in tablespace (i.e. moved from data dictionary), improving tablespace reliability

Drop Column on table (finally!); DBMS_DEBUG PL/SQL package; DBMS_SQL replaced by new EXECUTE

IMMEDIATE statement; Progress Monitor to track long running DML, DDL; Functional Indexes – NLS, case insensitive, descending

Oracle 8.0 – June 1997

Object Relational database; Object Types (not just date, character, number as in v7); SQL3 standard; Call external procedures; LOB >1 per table

Partitioned Tables and Indexes; export/import individual partitions; partitions in multiple tablespaces; online/offline backup/recover individual partitions; merge/balance partitions; Advanced Queuing for message handling; Many performance improvements to SQL/PLSQL/OCI making more efficient use of

CPU/Memory; V7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2); Parallel DML statements; Connection Pooling (uses the physical connection for idle users and transparently re-

establishes the connection when needed) to support more concurrent users; Improved "STAR" query optimizer; Integrated Distributed Lock Manager in Oracle PS (as opposed to Operating system DLM

in v7); Performance improvements in OPS – global V$ views introduced across all instances,

transparent failover to a new node; Data Cartridges introduced on database (e.g. image, video, context, time, spatial); Backup/Recovery improvements – tablespace point-in-time recovery, incremental

backups, parallel backup/recovery; Recovery Manager introduced; Security Server introduced for central user administration; User password expiry,

password profiles, allow custom password scheme; Privileged database links (no need for password to be stored)

Fast Refresh for complex snapshots, parallel replication, PL/SQL replication code moved in to Oracle kernel, Replication manager introduced

Index Organized tables; Deferred integrity constraint checking (deferred until end of transaction instead of end of

statement); SQL*Net replaced by Net8; Reverse Key indexes; Any VIEW updateable; New ROWID format

Oracle 7.3

Partitioned Views; Bitmapped Indexes; Asynchronous read ahead for table scans; Standby Database; Deferred transaction recovery on instance startup; Updatable Join Views (with restrictions); SQLDBA no longer shipped; Index rebuilds; db_verify introduced; Context Option; Spatial Data Option; Tablespace changes – Coalesce, Temporary, Permanent

Trigger compilation, debug; Unlimited extents on STORAGE clause; Some init.ora parameters modifiable – TIMED_STATISTICS; HASH Joins, Antijoins; Histograms; Dependencies; Oracle Trace; Advanced Replication Object Groups; PL/SQL – UTL_FILE

Oracle 7.2

Resizable, autoextend data files; Shrink Rollback Segments manually; Create table, index UNRECOVERABLE; Subquery in FROM clause; PL/SQL wrapper; PL/SQL Cursor variables; Checksums – DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM; Parallel create table; Job Queues – DBMS_JOB; DBMS_SPACE; DBMS Application Info; Sorting improvements – SORT_DIRECT_WRITES

Oracle 7.1

ANSI/ISO SQL92 Entry Level; Advanced Replication – Symmetric Data replication; Snapshot Refresh Groups; Parallel Recovery; Dynamic SQL – DBMS_SQL; Parallel Query Options – query, index creation, data loading; Server Manager introduced; Read Only tablespaces

Oracle 7.0 – June 1992

Database Integrity Constraints (primary, foreign keys, check constraints, default values); Stored procedures and functions, procedure packages; Database Triggers; View compilation; User defined SQL functions; Role based security; Multiple Redo members – mirrored online redo log files; Resource Limits – Profiles

Much enhanced Auditing; Enhanced Distributed database functionality – INSERTS, UPDATES/DELETES, 2PC; Incomplete database recovery (e.g. SCN); Cost based optimiser; TRUNCATE tables; Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR); SQL*Net v2, MTS; Checkpoint process; Data replication – Snapshots

Oracle 6.2

Oracle Parallel Server

Oracle 6 – July 1988

Row-level locking; On-line database backups; PL/SQL in the database

Oracle 5.1

Distributed queries

Oracle 5.0 – 1986

Support for the Client-Server model – PCs can access the DB on a remote host

Oracle 4 – 1984

Read consistency

Oracle 3 – 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)

Non-blocking queries (no more read locks); Re-written in the C Programming Language

Oracle 2 – 1979

First public release; Basic SQL functionality, queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh by Deepak — 1 Comment December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh – before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total no. of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege ||' to sh' from session_privs;
5. select 'grant ' || role ||' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes/1024/1024) from dba_segments where owner='SH'

group by tablespace_name;

Export the 'SH' schema

exp username/password file='location/sh_bkp.dmp' log='location/sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the roles and privileges spooled scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.
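As an additional sanity check (not part of the original steps), you can confirm that nothing was left invalid after the import:

SQL> select object_name, object_type from dba_objects where owner='SH' and status='INVALID';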

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema

Take the schema full export as shown above.

Drop all the objects in the 'SH' schema

To drop all the objects in the schema:

Connect the schema

Spool the output

SQL> set head off

SQL> spool drop_tables.sql

SQL> select 'drop table '||table_name||' cascade constraints purge' from user_tables;

SQL> spool off

SQL> set head off

SQL> spool drop_other_objects.sql

SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;

SQL> spool off

Now run the script; all the objects will be dropped.

Importing the 'SH' schema

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS

WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema

To truncate all the objects in the schema:

Connect the schema

Spool the output

SQL> set head off

SQL> spool truncate_tables.sql

SQL> select 'truncate table '||table_name from user_tables;

SQL> spool off

SQL> set head off

SQL> spool truncate_other_objects.sql

SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;

SQL> spool off

Now run the script; all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the below query to find the reference key constraints and disable them. Spool the output of the below query and run the script.

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS

where constraint_type='R'

and r_constraint_name in (select constraint_name from all_constraints

where table_name='TABLE_NAME');

Importing the 'SH' schema

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user.

SQL> drop user SH cascade;

Importing the SH schema through Data Pump

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option.
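For example, to load the exported objects into a different schema called SH_TEST (the target schema name here is illustrative):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=SH:SH_TEST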

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING by Deepak — Leave a comment December 15, 2009

CRON JOB SCHEDULING – IN UNIX

To run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are setup when the package is installed via the creation of some special directories

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0-7)
|     |     |     +----------- Month (1 - 12)
|     |     +----------------- Day of month (1 - 31)
|     +----------------------- Hour (0 - 23)
+----------------------------- Min (0 - 59)

(Each of the first five fields contains only numbers, however they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this then you should cause it to be redirected, as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null – meaning you won't see it.

Now we'll finish with some more examples.

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched; if you use the following line you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup – scheduling in a Windows environment

Create a batch file as cold_bkp.bat

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule them, the job won't run. So edit the scheduled tasks and enter the new password.
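The same schedule can also be created from the command line with the Windows schtasks utility; a sketch (task name, time and script path are illustrative):

schtasks /create /tn "cold_bkp" /tr D:\scripts\cold_bkp.bat /sc daily /st 22:00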


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak — 1 Comment December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your testing systems before working on Production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following command on the Primary and Standby databases to find out:
SQL> select sequence#, applied from v$archived_log;
Perform SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use Real-time apply.
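A common additional check before initiating the switchover (not listed in the original steps) is to confirm the primary's readiness:

SQL> select switchover_status from v$database;

A value of TO STANDBY (or SESSIONS ACTIVE) indicates the primary is ready to switch roles.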

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby db STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak — Leave a comment December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT key word Once a table column is marked with this keyword encryption and decryption are performed automatically without the need for any further user or application intervention The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column When an authorized user inserts new data into such a column TDE column encryption encrypts this data prior to storing it in the database Conversely when the user selects the column from the database TDE column encryption transparently decrypts this data back to its original clear text

format Column data encrypted using TDE remains protected while it resides in the database However the protection offered by TDE does not extend beyond the database and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it using a dump file encryption key derived from a userprovided password before it is written to the export dump file set Column data encrypted using Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set Whenever Oracle Data Pump unloads or loads tables containing encrypted columns it uses the external tables mechanism instead of the direct path mechanism The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer

The steps involved in exporting a table with encrypted columns are as follows

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. If the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case a warning message (ORA-39173) is displayed, as shown in a later example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table-mode exports, as used in the previous examples. If a schema-, tablespace-, or full-mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields an error, as the second example below shows. (The first example below is a table-mode export run without ENCRYPTION_PASSWORD; note the ORA-39173 warning it produces.)

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"  6.25 KB  3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job of pinpointing the problem via the information in the ORA-39929 error:

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote
TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


Returning to the network mode example: by removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.
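As a rough illustration, network encryption for such a transfer is driven by sqlnet.ora settings on both sites. The parameter names below are the standard Oracle Advanced Security settings, but the chosen algorithm and the decision to require encryption on both ends are assumptions made for this sketch, not part of the original example.

# sqlnet.ora (both source and target sites) - illustrative values only
SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_CLIENT = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES128)
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES128)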

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns
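For instance, a metadata-only import of the dump file created earlier needs neither the wallet nor the password. The command below is a sketch that reuses the directory and dump file names from the previous examples; it is not taken from the original paper.

$ impdp system/password DIRECTORY=dpump_dir DUMPFILE=emp.dmp
FULL=y CONTENT=METADATA_ONLY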

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).


Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
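Conversely, to encrypt the entire dump file set in 11g, the ENCRYPTION parameter is set to ALL. The command below is a sketch based on the same table and password as above; the ENCRYPTION_ALGORITHM setting is optional and the value shown is only one of the supported choices, not something prescribed by the original text.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION=ALL ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION_ALGORITHM=AES128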


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak | December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
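For example, the same dump file could be imported once with only its tables and a second time while skipping statistics. The commands below are a sketch that reuses the directory and dump file names from the examples above; the specific filters were chosen only for illustration.

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp INCLUDE=TABLE
> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp EXCLUDE=STATISTICS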

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export. Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
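For instance, the degree of parallelism of a running job can be changed after attaching to it. The snippet below is a sketch: it assumes a job named HR, as in the export example later in this section, and a new degree of 8 chosen purely for illustration.

> expdp username/password ATTACH=hr
Export> PARALLEL=8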

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory and I/O.
• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.
• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.
• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (a brief attach example follows the list):

• See the status of the job. All of the information needed to monitor the job's execution is available.
• Add more dump files if there is insufficient disk space for an export file.
• Change the default size of the dump files.
• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).
• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.
• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).
• Attach to a job from a remote site (such as from home) to monitor status.
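The attach example promised above is sketched here. It assumes the job name HR used earlier in this section; STATUS and STOP_JOB are standard interactive-mode commands, but the exact session shown is illustrative rather than taken from the article.

> expdp username/password ATTACH=hr
Export> STATUS
Export> STOP_JOB=IMMEDIATE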

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful when you are moving data between databases, such as from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
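A network-mode export might look like the sketch below. The database link name remote_db and the dump file name are assumptions chosen for illustration; the directory object is the one created earlier.

> expdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=remote_db
DUMPFILE=remote_full.dmp FULL=y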

Generating SQLFILEs

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak | December 10, 2009

Clone database using RMAN

Target db: TEST
Clone db: CLONE

In the target database:

1. Take a full backup using RMAN.

SQL> archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all;

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=17 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=1 stamp=674240935
input archive log thread=1 sequence=15 recid=2 stamp=674240997
input archive log thread=1 sequence=16 recid=3 stamp=674242208
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 ORACLE\ORADATA\TEST\TOOLS01.DBF
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=17 recid=4 stamp=674242270
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE
Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

      DBID
----------
1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for a database (see the sketch below for the password file and tnsnames.ora entry).

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'
log_file_name_convert='<target db oradata path>','<clone db oradata path>'

3. Start the listener using the lsnrctl command and then start up the clone db in nomount using the pfile.
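A minimal sketch of step 1 on Windows follows. The host name, paths and password are assumptions chosen only for illustration, not values from the original post.

$ orapwd file=C:\oracle\ora92\database\PWDclone.ora password=sys_pwd

# tnsnames.ora entry for the clone instance
CLONE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA = (SID = clone))
  )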

SQL> conn / as sysdba
Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount
ORACLE instance started.

Total System Global Area  135338868 bytes
Fixed Size                   453492 bytes
Variable Size             109051904 bytes
Database Buffers           25165824 bytes
Redo Buffers                 667648 bytes

SQL> ho lsnrctl status
SQL> ho lsnrctl stop
SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test    (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'    (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running...

SQL> select name from v$database;
select name from v$database
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while. Exit from RMAN and open the database using resetlogs:

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace (a sketch follows the verification output below).

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

      DBID
----------
1972233550
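For step 10, a minimal sketch is shown below; the tablespace name, tempfile path and size are illustrative assumptions only.

SQL> CREATE TEMPORARY TABLESPACE temp01
     TEMPFILE 'C:\oracle\oradata\clone\temp01.dbf' SIZE 100M AUTOEXTEND ON;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp01;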


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak | December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use the Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server, and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases, and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first before working on the production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

     BYTES
----------
  52428800
  52428800
  52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='...\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '...')

- On UNIX:
SQL> create pfile='.../dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '...')

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='...\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='...\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace '...')

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='.../dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='.../dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace '...')

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.

On the PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.

On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump, udump and archived log destination directories for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database:

Set up ORACLE_HOME and ORACLE_SID.

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='...\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='...\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='.../dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='.../dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: specify your Oracle home path to replace '...')

12. Start redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


APS       OLAP Analytic Workspace      VALID      10.1.0.2.0
AMD       OLAP Catalog                 VALID      10.1.0.2.0
XOQ       Oracle OLAP API              VALID      10.1.0.2.0
ORDIM     Oracle interMedia            VALID      10.1.0.2.0
SDO       Spatial                      VALID      10.1.0.2.0
CONTEXT   Oracle Text                  VALID      10.1.0.2.0
WK        Oracle Ultra Search          VALID      10.1.0.2.0

15 rows selected.

DOC>
DOC>
DOC>
DOC> The above query lists the SERVER components in the upgraded
DOC> database, along with their current version and status.
DOC>
DOC> Please review the status and version columns and look for
DOC> any errors in the spool log file. If there are errors in the spool
DOC> file, or any components are not VALID or not the current version,
DOC> consult the Oracle Database Upgrade Guide for troubleshooting
DOC> recommendations.
DOC>
DOC> Next shutdown immediate, restart for normal operation, and then
DOC> run utlrp.sql to recompile any invalid application objects.
DOC>
DOC>
DOC>
DOC>

TIMESTAMP
--------------------------------------------------------------
COMP_TIMESTAMP DBUPG_END  2009-08-22 22:59:09

1 row selected.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area  239075328 bytes
Fixed Size                    788308 bytes
Variable Size              212859052 bytes
Database Buffers            25165824 bytes
Redo Buffers                  262144 bytes
Database mounted.
Database opened.

SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
       776

1 row selected.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PL/SQL procedure successfully completed.

Oracle Database 10.1 Upgrade Status Tool    22-AUG-2009 11:18:36

-> Oracle Database Catalog Views          Normal successful completion
-> Oracle Database Packages and Types     Normal successful completion
-> JServer JAVA Virtual Machine           Normal successful completion
-> Oracle XDK                             Normal successful completion
-> Oracle Database Java Packages          Normal successful completion
-> Oracle XML Database                    Normal successful completion
-> Oracle Workspace Manager               Normal successful completion
-> Oracle Data Mining                     Normal successful completion
-> OLAP Analytic Workspace                Normal successful completion
-> OLAP Catalog                           Normal successful completion
-> Oracle OLAP API                        Normal successful completion
-> Oracle interMedia                      Normal successful completion
-> Spatial                                Normal successful completion
-> Oracle Text                            Normal successful completion
-> Oracle Ultra Search                    Normal successful completion

No problems detected during upgrade

PL/SQL procedure successfully completed.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP
--------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2009-08-22 23:19:07

1 row selected.

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2009-08-22 23:20:13

1 row selected.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
         0

1 row selected.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod
PL/SQL Release 10.1.0.2.0 - Production
CORE    10.1.0.2.0      Production
TNS for 32-bit Windows: Version 10.1.0.2.0 - Production
NLSRTL Version 10.1.0.2.0 - Production

5 rows selected.

Check the database to confirm that everything is working fine.


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host by Deepak | February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink note 732624.1

Hi,

Just wanted to share this topic.

How to duplicate a database without connecting to the target database, using backups taken with RMAN, on an alternate host.

Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as on production>

Create an init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) Startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backuppiece of controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure that the backup pieces are in the same location as they were on production. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged using the command:

RMAN> catalog start with '<path where backuppieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:

run {
set newname for datafile 1 to '<newLocation>/system.dbf';
set newname for datafile 2 to '<newLocation>/undotbs.dbf';
...
restore database;
switch datafile all;
}
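Putting the steps together, a minimal end-to-end sketch might look like the following. The SID, the backup piece names and all paths are assumptions chosen only for illustration.

$ export ORACLE_SID=PROD
$ sqlplus / as sysdba
SQL> startup nomount pfile=/u01/app/oracle/initPROD.ora
SQL> exit
$ rman target /
RMAN> restore controlfile from '/backups/c-1234567890-20100224-00';
RMAN> alter database mount;
RMAN> catalog start with '/backups/';
RMAN> run {
  set newname for datafile 1 to '/u02/oradata/PROD/system.dbf';
  set newname for datafile 2 to '/u02/oradata/PROD/undotbs.dbf';
  restore database;
  switch datafile all;
}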


Features introduced in the various Oracle server releases

Filed under: Features Of Various Releases of Oracle Database by Deepak | February 2, 2010

Features introduced in the various server releases
Submitted by admin on Sun, 2005-10-30 14:02

This document summarizes the differences between Oracle Server releases

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) - September 2005

• Transparent Data Encryption
• Async commits
• CONNECT role can now only connect
• Passwords for DB Links are encrypted
• New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (10.1.0)

• Grid computing - an extension of the clustering feature (Real Application Clusters)
• Manageability improvements (self-tuning features)
• Performance and scalability improvements
• Automated Storage Management (ASM)
• Automatic Workload Repository (AWR)
• Automatic Database Diagnostic Monitor (ADDM)
• Flashback operations available on row, transaction, table or database level
• Ability to UNDROP a table from a recycle bin
• Ability to rename tablespaces
• Ability to transport tablespaces across machine types (e.g. Windows to Unix)
• New 'drop database' statement
• New database scheduler - DBMS_SCHEDULER
• DBMS_FILE_TRANSFER package
• Support for bigfile tablespaces, that is, up to 8 Exabytes in size
• Data Pump - faster data movement with expdp and impdp

Oracle 9i Release 2 (9.2.0)

• Locally Managed SYSTEM tablespaces
• Oracle Streams - new data sharing/replication feature (can potentially replace Oracle Advanced Replication and Standby Databases)
• XML DB (Oracle is now a standards-compliant XML database)
• Data segment compression (compress keys in tables - only when loading data)
• Cluster file system for Windows and Linux (raw devices are no longer required)
• Create logical standby databases with Data Guard
• Java JDK 1.3 used inside the database (JVM)
• Oracle Data Guard enhancements (SQL Apply mode - logical copy of primary database, automatic failover)
• Security improvements - default install accounts locked, VPD on synonyms, AES, Migrate Users to Directory

Oracle 9i Release 1 (9.0.1) - June 2001

• Traditional rollback segments (RBS) are still available, but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "rollback segments" and size them automatically without any DBA involvement.
• Flashback query (dbms_flashback.enable) - one can query data as it looked at some point in the past. This feature allows users to correct wrongly committed transactions without contacting the DBA to do a database restore.
• Use Oracle Ultra Search for searching databases, file systems, etc. The UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.
• Oracle Nameserver is still available, but deprecated in favour of LDAP Naming (using the Oracle Internet Directory Server). A nameserver proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.
• Oracle Parallel Server's (OPS) scalability was improved - now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster; applications don't need to be cluster aware anymore.
• The Oracle Standby DB feature renamed to Oracle Data Guard. New Logical Standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.
• Scrolling cursor support: Oracle9i allows fetching backwards in a result set.
• Dynamic Memory Management - buffer pools and the shared pool can be resized on-the-fly. This eliminates the need to restart the database each time parameter changes are made.
• On-line table and index reorganization.
• VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net). VI provides fast communications between components in a cluster.
• Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.
• Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.
• PL/SQL programs can be natively compiled to binaries.
• Deep data protection - fine-grained security and auditing. Put security on the DB level; SQL access does not mean unrestricted access.
• Resumable backups and statements - suspend a statement instead of rolling back immediately.
• List Partitioning - partitioning on a list of values.
• ETL (eXtract, transformation, load) operations - with external tables and pipelining.
• OLAP - Express functionality included in the DB.
• Data Mining - Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)

• Static HTTP server included (Apache)
• JVM Accelerator to improve performance of Java code
• Java Server Pages (JSP) engine
• MemStat - a new utility for analyzing Java memory footprints
• OIS - Oracle Integration Server introduced
• PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web
• Enterprise Manager enhancements - including new HTML-based reporting and Advanced Replication functionality
• New Database Character Set Migration utility included

Oracle 8i (8.1.6)

• PL/SQL Server Pages (PSPs)
• DBA Studio introduced
• Statspack
• New SQL functions (rank, moving average)
• ALTER FREELISTS command (previously done by DROP/CREATE TABLE)
• Checksums always on for SYSTEM tablespace, allowing many possible corruptions to be fixed before writing to disk
• XML Parser for Java
• New PL/SQL encrypt/decrypt package introduced
• User and Schemas separated
• Numerous performance enhancements

Oracle 8i (8.1.5)

• Fast Start recovery - checkpoint rate auto-adjusted to meet roll forward criteria
• Reorganize indexes/index-only tables while users access data - online index rebuilds
• Log Miner introduced - allows online or archived redo logs to be viewed via SQL
• OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication
• Advanced Queueing improvements (security, performance, OO4O support)
• User security improvements - more centralisation, single enterprise user, users/roles across multiple databases
• Virtual private database
• JAVA stored procedures (Oracle Java VM)
• Oracle iFS
• Resource Management using priorities - resource classes
• Hash and Composite partitioned table types
• SQL*Loader direct load API
• Copy optimizer statistics across databases to ensure the same access paths across different environments
• Standby Database - auto shipping and application of redo logs; read-only queries on the standby database allowed
• Enterprise Manager v2 delivered
• NLS - Euro symbol supported
• Analyze tables in parallel
• Temporary tables supported
• Net8 support for SSL, HTTP, HOP protocols
• Transportable tablespaces between databases
• Locally managed tablespaces - automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability
• Drop Column on table (finally!)
• DBMS_DEBUG PL/SQL package
• DBMS_SQL replaced by the new EXECUTE IMMEDIATE statement
• Progress Monitor to track long-running DML, DDL
• Functional indexes - NLS, case-insensitive, descending

Oracle 8.0 - June 1997

• Object Relational database: Object Types (not just date, character, number as in v7), SQL3 standard
• Call external procedures
• LOBs: more than one per table
• Partitioned tables and indexes: export/import individual partitions, partitions in multiple tablespaces, online/offline backup/recover individual partitions, merge/balance partitions
• Advanced Queuing for message handling
• Many performance improvements to SQL/PLSQL/OCI making more efficient use of CPU/memory; v7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2)
• Parallel DML statements
• Connection Pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users
• Improved "STAR" query optimizer
• Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM in v7)
• Performance improvements in OPS - global V$ views introduced across all instances, transparent failover to a new node
• Data Cartridges introduced on the database (e.g. image, video, context, time, spatial)
• Backup/recovery improvements - tablespace point-in-time recovery, incremental backups, parallel backup/recovery; Recovery Manager introduced
• Security Server introduced for central user administration; user password expiry, password profiles allow custom password schemes; privileged database links (no need for a password to be stored)
• Fast refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel, Replication Manager introduced
• Index Organized tables
• Deferred integrity constraint checking (deferred until end of transaction instead of end of statement)
• SQL*Net replaced by Net8
• Reverse key indexes
• Any VIEW updateable
• New ROWID format

Oracle 73

Partitioned Views Bitmapped Indexes Asynchronous read ahead for table scans Standby Database Deferred transaction recovery on instance startup Updatable Join Views (with restrictions) SQLDBA no longer shipped Index rebuilds db_verify introduced Context Option Spatial Data Option Tablespaces changes ndash Coalesce Temporary Permanent

Trigger compilation debug Unlimited extents on STORAGE clause Some initora parameters modifiable ndash TIMED_STATISTICS HASH Joins Antijoins Histograms Dependencies Oracle Trace Advanced Replication Object Groups PLSQL ndash UTL_FILE

Oracle 72

Resizable autoextend data files Shrink Rollback Segments manually Create table index UNRECOVERABLE Subquery in FROM clause PLSQL wrapper PLSQL Cursor variables Checksums ndash DB_BLOCK_CHECKSUM LOG_BLOCK_CHECKSUM Parallel create table Job Queues ndash DBMS_JOB DBMS_SPACE DBMS Application Info Sorting Improvements ndash SORT_DIRECT_WRITES

Oracle 71

ANSIISO SQL92 Entry Level Advanced Replication ndash Symmetric Data replication Snapshot Refresh Groups Parallel Recovery Dynamic SQL ndash DBMS_SQL Parallel Query Options ndash query index creation data loading Server Manager introduced Read Only tablespaces

Oracle 70 ndash June 1992

Database Integrity Constraints (primary foreign keys check constraints default values) Stored procedures and functions procedure packages Database Triggers View compilation User defined SQL functions Role based security Multiple Redo members ndash mirrored online redo log files Resource Limits ndash Profiles

Much enhanced Auditing Enhanced Distributed database functionality ndash INSERTS UPDATESDELETES 2PC Incomplete database recovery (eg SCN) Cost based optimiser TRUNCATE tables Datatype changes (ie VARCHAR2 CHAR VARCHAR) SQLNet v2 MTS Checkpoint process Data replication ndash Snapshots

Oracle 6.2

Oracle Parallel Server

Oracle 6 – July 1988

Row-level locking. On-line database backups. PL/SQL in the database.

Oracle 5.1

Distributed queries

Oracle 5.0 – 1986

Support for the client-server model – PCs can access the DB on a remote host.

Oracle 4 – 1984

Read consistency

Oracle 3 – 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions). Nonblocking queries (no more read locks). Re-written in the C programming language.

Oracle 2 – 1979

First public release. Basic SQL functionality, queries and joins.

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh – by Deepak – 1 Comment, December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh – before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a SQL file.

1. SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

2. Verify the total number of objects from the above query.
3. Write dynamic queries as below.
4. select 'grant ' || privilege || ' to sh' from session_privs;
5. select 'grant ' || role || ' to sh' from session_roles;
6. Query the default tablespace and its size.
7. select tablespace_name, sum(bytes/1024/1024) from dba_segments where owner='SH' group by tablespace_name;

Export the 'SH' schema

exp username/password file='location/sh_bkp.dmp' log='location/sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE', estimate_percent => 20)

Now connect as the SH user and check the imported data.
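A quick way to double-check the refresh (not part of the original steps, just a handy dictionary query) is to list any invalid objects left in the schema:

SQL> select object_name, object_type from dba_objects where owner='SH' and status='INVALID';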

Schema refresh by dropping or truncating objects

Export the 'SH' schema

Take a full schema export as shown above.

Drop all the objects in the 'SH' schema

To drop all the objects in the schema, connect as the schema owner and spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the spooled scripts and all the objects will be dropped.

Importing the 'SH' schema

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE', estimate_percent => 20)

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';
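As with the drop and truncate scripts, this query is typically spooled and then executed; a minimal sketch in the same style (the spool file name is illustrative):

SQL> set head off
SQL> spool enable_constraints.sql
SQL> SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS WHERE STATUS='DISABLED';
SQL> spool off
SQL> @enable_constraints.sql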

Truncate all the objects in the 'SH' schema

To truncate all the objects in the schema, connect as the schema owner and spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the spooled scripts and all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the referential (foreign key) constraints and disable them. Spool the output of the query and run the resulting script.

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');

Importing the 'SH' schema

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE', estimate_percent => 20)

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> drop user SH cascade;

Importing the SH schema through Data Pump

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the REMAP_SCHEMA option.

Check the imported objects and compile the invalid objects.
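As mentioned above, REMAP_SCHEMA handles imports into a different schema; for example (the target schema name SH_DEV is illustrative, not from the original notes):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=SH:SH_DEV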


JOB SCHEDULING

Filed under: JOB SCHEDULING – by Deepak – Leave a comment, December 15, 2009

CRON JOB SCHEDULING – IN UNIX

To run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d /etc/cron.daily /etc/cron.hourly /etc/cron.monthly /etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day.

The time that the scripts in those system-wide directories run is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner which people use cron is via the crontab command This allows you to view or edit your crontab file which is a per-user file containing entries describing commands to execute and the time(s) to execute them

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l    # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use the name)
6. The command to run

More graphically they would look like this:

*  *  *  *  *  Command to be executed
-  -  -  -  -
|  |  |  |  |
|  |  |  |  +----- Day of week (0 - 7)
|  |  |  +------- Month (1 - 12)
|  |  +--------- Day of month (1 - 31)
|  +----------- Hour (0 - 23)
+------------- Min (0 - 59)

(Each of the first five fields contains only numbers, however they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected, as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null – meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched; if you use the following line you'll run a command at 1 AM, 2 AM, 3 AM and 4 AM:

# Use a range of hours, matching 1, 2, 3 and 4 AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4 AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup – scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registrydatabase
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule it, the job won't run. So edit the scheduled tasks and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g – by Deepak – 1 Comment, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your test systems before working on Production.

2. Verify that the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary and the standby database to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use Real-time apply.
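For reference, real-time apply is normally started on the standby with the managed recovery command below (a standard 10g command, shown here as a reminder rather than as part of the original write-up; it requires standby redo logs):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;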

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby DB STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
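A quick sanity check after the switchover (not part of the original steps, just a common verification) is to confirm the new role on each side:

SQL> select name, database_role, open_mode from v$database;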


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump – by Deepak – Leave a comment, December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in todayrsquos business world present manifold challenges As incidences of data theft increase protecting data privacy continues to be of paramount importance Now a de facto solution in meeting regulatory compliances data encryption is one of a number of security tools in use The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works Please note that this paper does not apply to the Original ExportImport utilities For information regarding the Oracle Data Pump Encrypted Dump File feature that that was released with Oracle Database 11g release 1 and that provides the ability to encrypt all exported data as it is written to the export dump file set refer to the Oracle Data Pump Encrypted Dump File Support whitepaper

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT key word Once a table column is marked with this keyword encryption and decryption are performed automatically without the need for any further user or application intervention The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column When an authorized user inserts new data into such a column TDE column encryption encrypts this data prior to storing it in the database Conversely when the user selects the column from the database TDE column encryption transparently decrypts this data back to its original clear text

format Column data encrypted using TDE remains protected while it resides in the database However the protection offered by TDE does not extend beyond the database and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it using a dump file encryption key derived from a userprovided password before it is written to the export dump file set Column data encrypted using Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set Whenever Oracle Data Pump unloads or loads tables containing encrypted columns it uses the external tables mechanism instead of the direct path mechanism The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");
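As a quick check (not from the white paper, just a handy dictionary query), the encrypted columns defined in the database can be listed from the DBA_ENCRYPTED_COLUMNS view:

SQL> SELECT owner, table_name, column_name, encryption_alg FROM dba_encrypted_columns;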

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 – Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing
options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports as used in the previous

examples If a schema tablespace or full mode export is performed then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter

There is however one exception transportable tablespace export mode does not support

encrypted columns An attempt to perform an export using this mode when the tablespace

contains tables with encrypted columns yields the following error

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 – Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing
options
Starting "DP"."SYS_EXPORT_TABLE_01": dp directory=dpump_dir
dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file
set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
/ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing
options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system
directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error
at 08:55:25

The ORA-29341 error in the previous example is not very informative If the same transportable

tablespace export is executed using Oracle Database 11g release 1 that version does a better job

at pinpointing the problem via the information in the ORA-39929 error

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables then an ORA-26033 exception is raised when you try to import the export dump file set In the example of the DPEMP table the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed For example assume in the following example that the DPEMP table on the target system has been created exactly as it is on the source system except that the

ENCRYPT attribute has not been assigned to the SALARY column The output and resulting error messages would look as follows

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 – Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release
11.1.0.7.0 – Production
With the Partitioning, Data Mining and Real Application Testing
options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system
directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list
is ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which
are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error
at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it

into the connected database instance There are no export dump files involved in a network

mode import and therefore there is no re-encrypting of TDE column data Thus the use of the

ENCRYPTION_PASWORD parameter is prohibited in network mode imports as shown in the

following example

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote
TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 – Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing
options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 -
Production
With the Partitioning, Data Mining and Real Application Testing options
Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp directory=dpump_dir
dumpfile=emp.dmp tables=emp encryption_password=
table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing
table but all dependent metadata will be skipped due to
table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being
skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP".SALARY encryption properties differ for source or
target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

- A full metadata-only import
- A schema-mode import in which the referenced schemas do not include tables with encrypted columns
- A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp) As is always the case when dealing with TDE columns the Oracle Wallet must first be open before creating the external table The following example creates an external table called DPXEMP and populates it using the data in the DPEMP table Notice that datatypes for the columns are not specified This is because they are determined by the column datatypes in the source table in the SELECT subquery

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line then it applies only to TDE encrypted columns (if there are no such columns being exported then the parameter is ignored).

Beginning in Oracle Database 11g release 1 the ability to encrypt the entire export dump file set is introduced and with it several new encrypted-related parameters A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump but instead means that the entire export dump file set will be encrypted To encrypt only TDE columns using Oracle Data Pump 11g it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY So the 10g example previously shown becomes the following in 11g

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
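By contrast, encrypting the whole dump file set in 11g uses the other ENCRYPTION keywords (ALL, DATA_ONLY, METADATA_ONLY). A minimal sketch, not taken from the white paper:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
ENCRYPTION=ALL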


DATAPUMP

Filed under: DATAPUMP, Oracle 10g – by Deepak – Leave a comment, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created you need to grant READ and WRITE permission on the directory to other users For example to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1 you must execute the following command

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

1. > expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to

Data Pump

Data Pump commands have a similar look and feel to the original Export and Import

commands but are different Below are a few examples that demonstrate some of these

differences

1) Example: import of tables from scott's account to jim's account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS

INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well and you can use different INCLUDE and

EXCLUDE options for different operations on the same dump file
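As an illustration (not from the original article; the schema and file names are hypothetical), excluding whole object types on export looks like this:

> expdp username/password SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_noidx.dmp EXCLUDE=INDEX EXCLUDE=STATISTICS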

3) Tuning Parameters

Unlike original Export and Import which used the BUFFER COMMIT COMPRESS

CONSISTENT DIRECT and RECORDLENGTH parameters Data Pump needs no tuning to achieve maximum performance Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner Initialization parameters should be sufficient upon installation

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from the one it was originally exported from. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode Because Data Pump jobs run entirely on the server you can start an export or import job detach from it and later reconnect to the job to monitor its progress Here are some of the things you can do while in this mode

See the status of the job. All of the information needed to monitor the job's execution is available.

Add more dump files if there is insufficient disk space for an export file.

Change the default size of the dump files.

Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

Attach to a job from a remote site (such as from home) to monitor status.
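For instance (an illustrative session, not from the original article; the job name hr matches the JOB_NAME used in the parallelism example above), you can re-attach to a running job and control it interactively:

> expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE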

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
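A minimal network-mode import sketch (assuming a database link named remote_db already exists; the names are illustrative, not from the original article):

> impdp username/password TABLES=scott.emp DIRECTORY=dpump_dir1 NETWORK_LINK=remote_db TABLE_EXISTS_ACTION=APPEND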

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN – by Deepak – Leave a comment

December 10, 2009

Clone database using RMAN

Target db test

Clone db clone

In target database

1. Take a full backup using RMAN.

SQL> archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination coracleora92RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 – Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='target db oradata path','clone db oradata path'

log_file_name_convert='target db oradata path','clone db oradata path'
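For example (an illustrative sketch only; the actual paths depend on where the TEST and CLONE datafiles live on your server):

db_file_name_convert=('E:\oracle\oradata\test','E:\oracle\oradata\clone')
log_file_name_convert=('E:\oracle\oradata\test','E:\oracle\oradata\clone')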

3. Start the listener using the lsnrctl command and then start the clone DB in nomount using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running…

SQL> select name from v$database;

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQL> ho rman

SQL> alter database mount;

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8. It will run for a while. When it completes, exit from RMAN and open the database using resetlogs:

SQL> alter database open resetlogs;

Database altered

9. Check the DBID.

10. Create a temporary tablespace.
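A minimal sketch of this step (the tempfile path and size are illustrative; adjust them to the clone's file layout):

SQL> create temporary tablespace temp1 tempfile 'E:\oracle\oradata\clone\temp01.dbf' size 500M autoextend on;
SQL> alter database default temporary tablespace temp1;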

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


step by step standby database configuration in 10g

Filed under: Dataguard – creation of standby database in 10g – by Deepak – Leave a comment, December 9, 2009

Oracle 10g – Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I. Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5 Set PRIMARY Database Initialization ParametersCreate a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters

1) Create pfile from spfile for the PRIMARY database- On WindowsSQLgtcreate pfile=rsquodatabasepfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgtcreate pfile=rsquodbspfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

2) Edit pfilePRIMARYora to add the new PRIMARY and STANDBY role parameters (Here the file paths are from a windows system For UNIX system specify the path accordingly)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile.
Data Guard must use an SPFILE. Create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='...\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='...\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup
(Note: specify your Oracle home path to replace '...')

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='.../dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='.../dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup
(Note: specify your Oracle home path to replace '...')

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all of the parameter entries are listed here.)

4. On the STANDBY server, create all the required directories for the dump and archived log destinations.
Create the adump, bdump, cdump, and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY server to the STANDBY control file destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora.
On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start
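For reference, a minimal listener.ora sketch for the PRIMARY host is shown below; the host name (primary-host), port (1521), and Oracle home path are hypothetical placeholders, and the STANDBY host gets an equivalent entry pointing at the STANDBY instance:

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
    )
  )
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = PRIMARY)
      (ORACLE_HOME = E:\oracle\product\10.2.0\db_1)
      (SID_NAME = PRIMARY)
    )
  )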

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
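A minimal tnsnames.ora sketch for the two service names is shown below; the host names and port are again hypothetical placeholders:

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )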

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.
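For example (the Oracle home paths are hypothetical placeholders):
- On Windows:
C:\> set ORACLE_HOME=E:\oracle\product\10.2.0\db_1
C:\> set ORACLE_SID=STANDBY
- On UNIX:
$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STANDBY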

11. Start up the STANDBY database in nomount mode and generate an spfile.
- On Windows:
SQL> startup nomount pfile='...\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='...\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='.../dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='.../dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your Oracle home path to replace '...')

12. Start Redo Apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify that the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor database operations in a Data Guard environment.

2. Clean up the archived logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archived logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;
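A minimal sketch of how the weekly STANDBY archivelog maintenance could be scripted for a scheduler; the file names standby_arch_maint.rman and its log are hypothetical:

# standby_arch_maint.rman -- back up the archived logs, then remove the ones just backed up
run {
  backup archivelog all delete input;
}
# invoked, for example, as:
# $ rman target / cmdfile=standby_arch_maint.rman log=standby_arch_maint.log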

3. Password management.
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


DOC>

DOC>

DOC>

TIMESTAMP

--------------------------------------------------

COMP_TIMESTAMP DBUPG_END 2009-08-22 22:59:09

1 row selected

SQL> shut immediate

Database closed

Database dismounted

ORACLE instance shut down

SQL> startup

ORACLE instance started

Total System Global Area 239075328 bytes

Fixed Size 788308 bytes

Variable Size 212859052 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted

Database opened

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------
776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PL/SQL procedure successfully completed.

Oracle Database 10.1 Upgrade Status Tool 22-AUG-2009 11:18:36

--> Oracle Database Catalog Views: Normal successful completion
--> Oracle Database Packages and Types: Normal successful completion
--> JServer JAVA Virtual Machine: Normal successful completion
--> Oracle XDK: Normal successful completion
--> Oracle Database Java Packages: Normal successful completion
--> Oracle XML Database: Normal successful completion
--> Oracle Workspace Manager: Normal successful completion
--> Oracle Data Mining: Normal successful completion
--> OLAP Analytic Workspace: Normal successful completion
--> OLAP Catalog: Normal successful completion
--> Oracle OLAP API: Normal successful completion
--> Oracle interMedia: Normal successful completion
--> Spatial: Normal successful completion
--> Oracle Text: Normal successful completion
--> Oracle Ultra Search: Normal successful completion

No problems detected during upgrade.

PL/SQL procedure successfully completed.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP
--------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN 2009-08-22 23:19:07

1 row selected.

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------
COMP_TIMESTAMP UTLRP_END 2009-08-22 23:20:13

1 row selected.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------
0

1 row selected.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod
PL/SQL Release 10.1.0.2.0 - Production
CORE 10.1.0.2.0 Production
TNS for 32-bit Windows: Version 10.1.0.2.0 - Production
NLSRTL Version 10.1.0.2.0 - Production

5 rows selected.

Check the database to confirm that everything is working fine.


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host, by Deepak, 3 Comments, February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink ID 732624.1

Hi,

Just wanted to share this topic.

How do you duplicate a database without connecting to the target database, using backups taken from RMAN, on an alternate host?
Solution: follow the steps below.
1) Export ORACLE_SID=<SID name as of production>

Create an init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) Startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backup piece of the controlfile which you took on production>';

The controlfile should now be restored.

4) Issue "alter database mount". Make sure that the backup pieces are in the same location as they were on the production db. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';
If there are more backup pieces, they can be cataloged using the command:
RMAN> catalog start with '<path where the backup pieces are stored>';
5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:
run {
set newname for datafile 1 to '<newLocation>/system.dbf';
set newname for datafile 2 to '<newLocation>/undotbs.dbf';
...
restore database;
switch datafile all;
}
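Putting the steps together, a minimal end-to-end sketch on the alternate host might look as follows; the SID (PRODDB), the pfile path, the backup location /backup and the restored datafile paths are all hypothetical placeholders:

$ export ORACLE_SID=PRODDB
$ sqlplus / as sysdba
SQL> startup nomount pfile='/u01/app/oracle/initPRODDB.ora'
SQL> exit
$ rman target /
RMAN> restore controlfile from '/backup/ctl_backup_piece';
RMAN> alter database mount;
RMAN> catalog start with '/backup';
RMAN> run {
  set newname for datafile 1 to '/u01/oradata/PRODDB/system.dbf';
  set newname for datafile 2 to '/u01/oradata/PRODDB/undotbs.dbf';
  restore database;
  switch datafile all;
}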


Features introduced in the various Oracle server releases

Filed under: Features of the various releases of Oracle Database, by Deepak, Leave a comment, February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) - September 2005

Transparent Data Encryption. Asynchronous commits. The CONNECT role can now only connect. Passwords for DB links are encrypted. New asmcmd utility for managing ASM storage.

Oracle 10g Release 1 (10.1.0)

Grid computing - an extension of the clustering feature (Real Application Clusters). Manageability improvements (self-tuning features).

Performance and scalability improvements. Automated Storage Management (ASM). Automatic Workload Repository (AWR). Automatic Database Diagnostic Monitor (ADDM). Flashback operations available at the row, transaction, table or database level. Ability to UNDROP a table from a recycle bin. Ability to rename tablespaces. Ability to transport tablespaces across machine types (e.g. Windows to Unix). New 'drop database' statement. New database scheduler - DBMS_SCHEDULER. DBMS_FILE_TRANSFER package. Support for bigfile tablespaces that are up to 8 exabytes in size. Data Pump - faster data movement with expdp and impdp.

Oracle 9i Release 2 (9.2.0)

Locally managed SYSTEM tablespaces. Oracle Streams - a new data sharing/replication feature (can potentially replace Oracle Advanced Replication and standby databases). XML DB (Oracle is now a standards-compliant XML database). Data segment compression (compress keys in tables - only when loading data). Cluster file system for Windows and Linux (raw devices are no longer required). Create logical standby databases with Data Guard. Java JDK 1.3 used inside the database (JVM). Oracle Data Guard enhancements (SQL Apply mode - a logical copy of the primary database, automatic failover). Security improvements - default install accounts locked, VPD on synonyms, AES, migrate users to a directory.

Oracle 9i Release 1 (9.0.1) - June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own rollback segments and size them automatically without any DBA involvement.

Flashback query (dbms_flashback.enable) - one can query data as it looked at some point in the past. This feature allows users to correct wrongly committed transactions without contacting the DBA to do a database restore.

Use Oracle Ultra Search for searching databases, file systems, etc. The UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.

Oracle Names Server is still available but deprecated in favour of LDAP naming (using the Oracle Internet Directory Server). A names server proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.

Oracle Parallel Server's (OPS) scalability was improved - it is now called Real Application Clusters (RAC). Full Cache Fusion is implemented. Any application can scale in a database cluster; applications don't need to be cluster-aware anymore.

The Oracle Standby DB feature was renamed to Oracle Data Guard. New logical standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.

Scrolling cursor support: Oracle9i allows fetching backwards in a result set.

Dynamic memory management - buffer pools and the shared pool can be resized on the fly. This eliminates the need to restart the database each time parameter changes are made.

Online table and index reorganization. VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net); VI provides fast communications between components in a cluster.

Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.

The cost-based optimizer now also considers memory and CPU, not only disk access cost as before.

PL/SQL programs can be natively compiled to binaries. Deep data protection - fine-grained security and auditing; security is put at the DB level, so SQL access does not mean unrestricted access. Resumable backups and statements - suspend the statement instead of rolling back immediately. List partitioning - partitioning on a list of values. ETL (extract, transform, load) operations - with external tables and pipelining. OLAP - Express functionality included in the DB. Data Mining - Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)

Static HTTP server included (Apache). JVM Accelerator to improve the performance of Java code. Java Server Pages (JSP) engine. MemStat - a new utility for analyzing Java memory footprints. OIS - Oracle Integration Server introduced. PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web. Enterprise Manager enhancements - including new HTML-based reporting and Advanced Replication functionality. New Database Character Set Migration utility included.

Oracle 8i (8.1.6)

PL/SQL Server Pages (PSPs). DBA Studio introduced. Statspack. New SQL functions (rank, moving average). ALTER FREELISTS command (previously done by DROP/CREATE TABLE). Checksums always on for the SYSTEM tablespace, allowing many possible corruptions to be fixed before writing to disk.

XML Parser for Java. New PL/SQL encrypt/decrypt package introduced. Users and schemas separated. Numerous performance enhancements.

Oracle 8i (8.1.5)

Fast-start recovery - checkpoint rate auto-adjusted to meet roll-forward criteria. Reorganize indexes/index-only tables while users access the data - online index rebuilds. LogMiner introduced - allows on-line or archived redo logs to be viewed via SQL. OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication. Advanced Queuing improvements (security, performance, OO4O support). User security improvements - more centralisation, single enterprise user, users/roles across multiple databases. Virtual Private Database. Java stored procedures (Oracle Java VM). Oracle iFS. Resource management using priorities - resource classes. Hash and composite partitioned table types. SQL*Loader direct load API. Copy optimizer statistics across databases to ensure the same access paths across different environments. Standby database - automatic shipping and application of redo logs; read-only queries on the standby database allowed. Enterprise Manager v2 delivered. NLS - Euro symbol supported. Analyze tables in parallel. Temporary tables supported. Net8 support for SSL, HTTP, HOP protocols. Transportable tablespaces between databases. Locally managed tablespaces - automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability.

Drop column on a table (finally!). DBMS_DEBUG PL/SQL package. DBMS_SQL replaced by the new EXECUTE IMMEDIATE statement. Progress monitor to track long-running DML and DDL. Functional indexes - NLS, case-insensitive, descending.

Oracle 8.0 - June 1997

Object-relational database: object types (not just date, character and number as in v7), SQL3 standard. Call external procedures. LOBs - more than one per table.

Partitioned tables and indexes; export/import of individual partitions; partitions in multiple tablespaces; online/offline backup/recovery of individual partitions; merge/balance partitions. Advanced Queuing for message handling. Many performance improvements to SQL/PLSQL/OCI making more efficient use of CPU/memory. V7 limits extended (e.g. 1000 columns/table, 4000-byte VARCHAR2). Parallel DML statements. Connection pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users. Improved "STAR" query optimizer. Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM in v7). Performance improvements in OPS - global V$ views introduced across all instances, transparent failover to a new node. Data cartridges introduced in the database (e.g. image, video, context, time, spatial). Backup/recovery improvements - tablespace point-in-time recovery, incremental backups, parallel backup/recovery; Recovery Manager introduced. Security Server introduced for central user administration. User password expiry, password profiles, allow custom password schemes. Privileged database links (no need for a password to be stored).

Fast refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel, Replication Manager introduced.

Index-organized tables. Deferred integrity constraint checking (deferred until the end of the transaction instead of the end of the statement). SQL*Net replaced by Net8. Reverse key indexes. Any VIEW updateable. New ROWID format.

Oracle 7.3

Partitioned views. Bitmapped indexes. Asynchronous read-ahead for table scans. Standby database. Deferred transaction recovery on instance startup. Updatable join views (with restrictions). SQL*DBA no longer shipped. Index rebuilds. db_verify introduced. Context option. Spatial data option. Tablespace changes - coalesce, temporary, permanent.

Trigger compilation and debug. Unlimited extents on the STORAGE clause. Some init.ora parameters modifiable - e.g. TIMED_STATISTICS. Hash joins, antijoins. Histograms. Dependencies. Oracle Trace. Advanced Replication object groups. PL/SQL - UTL_FILE.

Oracle 7.2

Resizable, autoextend data files. Shrink rollback segments manually. Create table/index UNRECOVERABLE. Subquery in the FROM clause. PL/SQL wrapper. PL/SQL cursor variables. Checksums - DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM. Parallel create table. Job queues - DBMS_JOB. DBMS_SPACE. DBMS Application Info. Sorting improvements - SORT_DIRECT_WRITES.

Oracle 7.1

ANSI/ISO SQL92 Entry Level. Advanced Replication - symmetric data replication. Snapshot refresh groups. Parallel recovery. Dynamic SQL - DBMS_SQL. Parallel query options - query, index creation, data loading. Server Manager introduced. Read-only tablespaces.

Oracle 7.0 - June 1992

Database integrity constraints (primary/foreign keys, check constraints, default values). Stored procedures and functions, procedure packages. Database triggers. View compilation. User-defined SQL functions. Role-based security. Multiple redo members - mirrored online redo log files. Resource limits - profiles.

Much enhanced auditing. Enhanced distributed database functionality - INSERTs, UPDATEs, DELETEs, 2PC. Incomplete database recovery (e.g. to an SCN). Cost-based optimiser. TRUNCATE tables. Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR). SQL*Net v2, MTS. Checkpoint process. Data replication - snapshots.

Oracle 6.2

Oracle Parallel Server.

Oracle 6 - July 1988

Row-level locking. On-line database backups. PL/SQL in the database.

Oracle 5.1

Distributed queries.

Oracle 5.0 - 1986

Support for the client-server model - PCs can access the DB on a remote host.

Oracle 4 - 1984

Read consistency.

Oracle 3 - 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions). Non-blocking queries (no more read locks). Re-written in the C programming language.

Oracle 2 - 1979

First public release. Basic SQL functionality: queries and joins.

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Referesh

Filed under: Schema refresh, by Deepak, 1 Comment, December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege || ' to sh' from session_privs;
5. select 'grant ' || role || ' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes/1024/1024) from dba_segments where owner='SH' group by tablespace_name;

Export the 'SH' schema:

exp username/password file='location\sh_bkp.dmp' log='location\sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate a quota on that tablespace (a sketch is shown below).
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.
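A minimal sketch of step 1, assuming the default tablespace is USERS and using a hypothetical password; the actual roles and privileges come from the spooled scripts in step 2:

SQL> create user SH identified by sh_password
     default tablespace USERS
     temporary tablespace TEMP
     quota unlimited on USERS;
SQL> grant create session to SH;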

Importing the 'SH' schema:

imp username/password file='location\sh_bkp.dmp' log='location\sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema:

Take the full schema export as shown above.

Drop all the objects in the 'SH' schema.

To drop all the objects in the schema:

Connect as the schema owner.

Spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the spooled scripts; all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file='location\sh_bkp.dmp' log='location\sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable the constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema.

To truncate all the objects in the schema:

Connect as the schema owner.

Spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the spooled scripts; all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the reference (foreign key) constraints and disable them. Spool the output of the query below and run the script (a disable-statement generator is sketched after the query).

select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');
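A minimal sketch of a disable-statement generator built on the same query; the spool file name is hypothetical, and the generated statements can be turned back into ENABLE statements after the truncate:

SQL> set head off
SQL> spool disable_ref_constraints.sql
SQL> select 'alter table '||table_name||' disable constraint '||constraint_name||';'
     from all_constraints
     where constraint_type='R'
     and r_constraint_name in (select constraint_name from all_constraints
                               where table_name='TABLE_NAME');
SQL> spool off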

Importing the 'SH' schema:

imp username/password file='location\sh_bkp.dmp' log='location\sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user:

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option (see the example below).
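For example, to load the SH objects into a different (hypothetical) SH_TEST schema:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=SH:SH_TEST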

Check the imported objects and compile any invalid objects.


JOB SCHEDULING

Filed under: JOB SCHEDULING, by Deepak, Leave a comment, December 15, 2009

CRON JOB SCHEDULING IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d/
/etc/cron.daily/
/etc/cron.hourly/
/etc/cron.monthly/
/etc/cron.weekly/

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) at which to execute them.

To display your file, you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use the name)
6. The command to run

More graphically, they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0-7)
|     |     |     +------- Month (1-12)
|     |     +--------- Day of month (1-31)
|     +----------- Hour (0-23)
+------------- Min (0-59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples.

# Run the `something` command every hour, on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers, you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registrydatabase
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a Scheduled Task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule them, the job won't run. So edit the scheduled task and enter the new password.
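As an alternative to the wizard, the same schedule can be created from the command line with schtasks; a minimal sketch, assuming the batch file sits at D:\scripts\cold_bkp.bat and a daily 02:00 run (both hypothetical placeholders; the exact /st time format varies by Windows version):

schtasks /create /tn "cold_bkp" /tr "D:\scripts\cold_bkp.bat" /sc daily /st 02:00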


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g, by Deepak, 1 Comment, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your testing systems before working on Production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on both the primary database and the standby database to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby db STAN to the primary role.
Open another prompt and connect to SQL*Plus:
SQL> connect STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN has now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
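To confirm that the role change took effect, a quick check that can be run on each database is:

SQL> select name, database_role, open_mode from v$database;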


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump, by Deepak, Leave a comment, December 14, 2009

Encryption with Oracle Data Pump

- from an Oracle white paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature, which enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature that was released with Oracle Database 11g release 1, and that provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format. Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set.

To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows:

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU-intensive operations. Furthermore, additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet.

Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (
       empid NUMBER(6),
       empname VARCHAR2(100),
       salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote
TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp directory=dpump_dir
dumpfile=emp.dmp tables=emp encryption_password=
table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing
table but all dependent metadata will be skipped due to
table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being
skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP".SALARY encryption properties differ for source or
target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security network data encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

A full metadata-only import.
A schema-mode import in which the referenced schemas do not include tables with encrypted columns.
A table-mode import in which the referenced tables do not include encrypted columns.

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as that produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that the datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1 The SQL engine selects the data for the table DPEMP from the database If any columns in the table are marked as encrypted as the salary column is for DPEMP then TDE decrypts the column data as part of the select operation

2 The SQL engine then inserts the data which is in clear text format into the DPXEMP table If any columns in the external table are marked as encrypted as one of its columns is then TDE encrypts this column data as part of the insert operation

3 Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file The data in an external table can be written only once when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed However the data in the external table can be selected any number of times using a simple SQL SELECT statement The steps involved in selecting data with encrypted columns from an external table are as follows

1 The SQL engine initiates a select operation Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is called to read the data from the external export file

2 The data is passed back to the SQL engine If any columns in the external table are marked as encrypted as one of its columns is then TDE decrypts the data as part of the select operation The use of the encryption password in the IDENTIFIED BY clause is optional unless you plan to move the dump file to another database In that case the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file Encryption Parameter Change in 11g Release 1

As previously discussed in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump and the only encryption-related parameter available was ENCRYPTION_PASSW ORD So by default if the ENCRYPTION_PASSWORD is present on the command line then it applies only to TDE encrypted columns (if there are no such columns being exported then the parameter is ignored)

SQL> SELECT * FROM DPXEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
  TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
  ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g, by Deepak. December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

httpwwworaclecomtechnologyobeobe10gdbstoragedatapumpdatapumphtm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating-system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES, and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
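For instance, from the same full dump file one import run could bring in only procedures while another skips indexes entirely (a sketch; the dump file name follows the full-export example above):

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp INCLUDE=PROCEDURE

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp EXCLUDE=INDEX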

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export.

Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
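As a sketch, assuming an export job started with JOB_NAME=hr (as in the example further below), you could attach to the running job from another session and raise its degree of parallelism:

> expdp username/password ATTACH=hr

Export> PARALLEL=8

Export> CONTINUE_CLIENT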

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
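A minimal sketch, assuming a database link named source_db has already been created to point at the remote instance:

> impdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=source_db TABLES=hr.employees

Here the rows are pulled across the link and loaded directly; no dump file is written on the source.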

Generating SQLFILEs

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN, by Deepak. December 10, 2009

Clone database using RMAN

Target db: test

Clone db: clone

In target database:

1. Take a full backup using RMAN.

SQL> archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 – Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are:

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for a database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
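For example, if the target datafiles live under one directory tree and the clone's under another, the two parameters might look like this (paths are illustrative, not from the original post):

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'

log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'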

3. Start the listener using the lsnrctl command, and then start up the clone db in nomount using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'; (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

The scripts will now run for a while.

SQL> select name from v$database;

select name from v$database

ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;

alter database mount

ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs:

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace.
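A minimal sketch for step 10; the tablespace name, tempfile path and size here are illustrative, not from the original post:

SQL> create temporary tablespace temp1 tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100M;

SQL> alter database default temporary tablespace temp1;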

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


Step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g, by Deepak. December 9, 2009

Oracle 10g – Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases, and that the Oracle home paths are identical.

3. Test the STANDBY database creation in a test environment first before working on the production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create the pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='<Oracle home path>\database\pfilePRIMARY.ora' from spfile;
- On UNIX:
SQL> create pfile='<Oracle home path>/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your own Oracle home path.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create the spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE. Create the SPFILE and restart the database:
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<Oracle home path>\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='<Oracle home path>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<Oracle home path>/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='<Oracle home path>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;
(Note: specify your own Oracle home path.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for dump and archived log destinations. Create the directories adump, bdump, cdump, udump, and the archived log destination for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
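For reference, the resulting tnsnames.ora entries would look roughly like the following; the host names and port here are placeholders, not values from the original post:

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = STANDBY))
  )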

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<Oracle home path>\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='<Oracle home path>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='<Oracle home path>/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='<Oracle home path>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your own Oracle home path.)

12. Start Redo apply.
1) On the STANDBY database, start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


776

1 row selected

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql

PLSQL procedure successfully completed

Oracle Database 10.1 Upgrade Status Tool 22-AUG-2009 11:18:36

--> Oracle Database Catalog Views: Normal successful completion
--> Oracle Database Packages and Types: Normal successful completion
--> JServer JAVA Virtual Machine: Normal successful completion
--> Oracle XDK: Normal successful completion
--> Oracle Database Java Packages: Normal successful completion
--> Oracle XML Database: Normal successful completion
--> Oracle Workspace Manager: Normal successful completion
--> Oracle Data Mining: Normal successful completion
--> OLAP Analytic Workspace: Normal successful completion
--> OLAP Catalog: Normal successful completion
--> Oracle OLAP API: Normal successful completion
--> Oracle interMedia: Normal successful completion
--> Spatial: Normal successful completion
--> Oracle Text: Normal successful completion
--> Oracle Ultra Search: Normal successful completion

No problems detected during upgrade

PLSQL procedure successfully completed

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP
--------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN 2009-08-22 23:19:07

1 row selected

PLSQL procedure successfully completed

TIMESTAMP
--------------------------------------------------
COMP_TIMESTAMP UTLRP_END 2009-08-22 23:20:13

1 row selected

PLSQL procedure successfully completed

PLSQL procedure successfully completed

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------
0

1 row selected

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 – Prod
PL/SQL Release 10.1.0.2.0 – Production
CORE 10.1.0.2.0 Production
TNS for 32-bit Windows: Version 10.1.0.2.0 – Production
NLSRTL Version 10.1.0.2.0 – Production

5 rows selected.

Check the database to ensure that everything is working fine.


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host, by Deepak. February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database – from Metalink ID 7326241

Hi,

Just wanted to share this topic: how to duplicate a database without connecting to the target database, using backups taken from RMAN, on an alternate host.

Solution: follow the steps below.

1) Export ORACLE_SID=<SID name as of production>

Create an init.ora file and give db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backuppiece of controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure the backup pieces are in the same location as they were on the production db. If you don't have the same location, make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged using:

RMAN> catalog start with '<path where backuppieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:

run {
  set newname for datafile 1 to '<newLocation>/system.dbf';
  set newname for datafile 2 to '<newLocation>/undotbs.dbf';
  ...
  restore database;
  switch datafile all;
}


Features introduced in the various Oracle server releases

Filed under: Features Of Various release of Oracle Database, by Deepak. February 2, 2010

Features introduced in the various server releases (submitted by admin on Sun, 2005-10-30 14:02)

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) – September 2005

• Transparent Data Encryption
• Async commits
• The CONNECT role can now only connect
• Passwords for DB links are encrypted
• New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (10.1.0)

• Grid computing – an extension of the clustering feature (Real Application Clusters)
• Manageability improvements (self-tuning features)
• Performance and scalability improvements
• Automated Storage Management (ASM)
• Automatic Workload Repository (AWR)
• Automatic Database Diagnostic Monitor (ADDM)
• Flashback operations available at row, transaction, table or database level
• Ability to UNDROP a table from a recycle bin
• Ability to rename tablespaces
• Ability to transport tablespaces across machine types (e.g. Windows to Unix)
• New 'drop database' statement
• New database scheduler – DBMS_SCHEDULER
• DBMS_FILE_TRANSFER package
• Support for bigfile tablespaces of up to 8 Exabytes in size
• Data Pump – faster data movement with expdp and impdp

Oracle 9i Release 2 (9.2.0)

• Locally managed SYSTEM tablespaces
• Oracle Streams – new data sharing/replication feature (can potentially replace Oracle Advanced Replication and Standby Databases)
• XML DB (Oracle is now a standards-compliant XML database)
• Data segment compression (compress keys in tables – only when loading data)
• Cluster file system for Windows and Linux (raw devices are no longer required)
• Create logical standby databases with Data Guard
• Java JDK 1.3 used inside the database (JVM)
• Oracle Data Guard enhancements (SQL Apply mode – logical copy of primary database, automatic failover)
• Security improvements – default install accounts locked, VPD on synonyms, AES, Migrate Users to Directory

Oracle 9i Release 1 (9.0.1) – June 2001

• Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own rollback segments and size them automatically without any DBA involvement.
• Flashback query (dbms_flashback.enable) – one can query data as it looked at some point in the past. This feature allows users to correct wrongly committed transactions without contacting the DBA to do a database restore.
• Use Oracle Ultra Search for searching databases, file systems, etc. The UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.
• Oracle Nameserver is still available but deprecated in favour of LDAP naming (using the Oracle Internet Directory Server). A nameserver proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.
• Oracle Parallel Server's (OPS) scalability was improved – now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster; applications don't need to be cluster-aware anymore.
• The Oracle Standby DB feature renamed to Oracle Data Guard. New logical standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.
• Scrolling cursor support – Oracle9i allows fetching backwards in a result set.
• Dynamic memory management – buffer pools and the shared pool can be resized on-the-fly. This eliminates the need to restart the database each time parameter changes were made.
• On-line table and index reorganization.
• VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net). VI provides fast communications between components in a cluster.
• Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.
• Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.
• PL/SQL programs can be natively compiled to binaries.
• Deep data protection – fine-grained security and auditing. Security is put at the DB level; SQL access does not mean unrestricted access.
• Resumable backups and statements – suspend a statement instead of rolling back immediately.
• List partitioning – partitioning on a list of values.
• ETL (eXtract, transformation, load) operations – with external tables and pipelining.
• OLAP – Express functionality included in the DB.
• Data Mining – Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)

• Static HTTP server included (Apache)
• JVM Accelerator to improve performance of Java code
• Java Server Pages (JSP) engine
• MemStat – a new utility for analyzing Java memory footprints
• OIS – Oracle Integration Server introduced
• PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web
• Enterprise Manager enhancements – including new HTML-based reporting and Advanced Replication functionality
• New Database Character Set Migration utility included

Oracle 8i (8.1.6)

• PL/SQL Server Pages (PSPs)
• DBA Studio introduced
• Statspack
• New SQL functions (rank, moving average)
• ALTER FREELISTS command (previously done by DROP/CREATE TABLE)
• Checksums always on for SYSTEM tablespace, allowing many possible corruptions to be fixed before writing to disk
• XML Parser for Java
• New PL/SQL encrypt/decrypt package introduced
• User and schemas separated
• Numerous performance enhancements

Oracle 8i (8.1.5)

• Fast Start recovery – checkpoint rate auto-adjusted to meet roll-forward criteria
• Reorganize indexes/index-only tables while users are accessing data – online index rebuilds
• Log Miner introduced – allows online or archived redo logs to be viewed via SQL
• OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication
• Advanced Queueing improvements (security, performance, OO4O support)
• User security improvements – more centralisation, single enterprise user, users/roles across multiple databases
• Virtual Private Database
• JAVA stored procedures (Oracle Java VM)
• Oracle iFS
• Resource management using priorities – resource classes
• Hash and composite partitioned table types
• SQL*Loader direct load API
• Copy optimizer statistics across databases to ensure the same access paths across different environments
• Standby database – auto shipping and application of redo logs; read-only queries on the standby database allowed
• Enterprise Manager v2 delivered
• NLS – Euro symbol supported
• Analyze tables in parallel
• Temporary tables supported
• Net8 support for SSL, HTTP, HOP protocols
• Transportable tablespaces between databases
• Locally managed tablespaces – automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability
• Drop column on table (finally!)
• DBMS_DEBUG PL/SQL package; DBMS_SQL replaced by the new EXECUTE IMMEDIATE statement
• Progress Monitor to track long-running DML, DDL
• Functional indexes – NLS, case-insensitive, descending

Oracle 8.0 – June 1997

• Object-relational database
• Object Types (not just date, character, number as in v7), SQL3 standard
• Call external procedures
• LOBs – more than one per table
• Partitioned tables and indexes; export/import individual partitions; partitions in multiple tablespaces; online/offline backup/recover individual partitions; merge/balance partitions
• Advanced Queuing for message handling
• Many performance improvements to SQL/PLSQL/OCI, making more efficient use of CPU/memory; V7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2)
• Parallel DML statements
• Connection pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users
• Improved "STAR" query optimizer
• Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating-system DLM in v7)
• Performance improvements in OPS – global V$ views introduced across all instances, transparent failover to a new node
• Data cartridges introduced on the database (e.g. image, video, context, time, spatial)
• Backup/recovery improvements – tablespace point-in-time recovery, incremental backups, parallel backup/recovery; Recovery Manager introduced
• Security Server introduced for central user administration; user password expiry, password profiles allow custom password schemes; privileged database links (no need for a password to be stored)
• Fast refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel, Replication Manager introduced
• Index-organized tables
• Deferred integrity constraint checking (deferred until end of transaction instead of end of statement)
• SQL*Net replaced by Net8
• Reverse key indexes
• Any VIEW updateable
• New ROWID format

Oracle 7.3

• Partitioned views
• Bitmapped indexes
• Asynchronous read-ahead for table scans
• Standby database
• Deferred transaction recovery on instance startup
• Updatable join views (with restrictions)
• SQL*DBA no longer shipped
• Index rebuilds
• db_verify introduced
• Context option
• Spatial data option
• Tablespace changes – coalesce, temporary, permanent
• Trigger compilation, debug
• Unlimited extents on the STORAGE clause
• Some init.ora parameters modifiable – TIMED_STATISTICS
• HASH joins, antijoins
• Histograms
• Dependencies
• Oracle Trace
• Advanced Replication object groups
• PL/SQL – UTL_FILE

Oracle 7.2

• Resizable, autoextend data files
• Shrink rollback segments manually
• Create table, index UNRECOVERABLE
• Subquery in FROM clause
• PL/SQL wrapper
• PL/SQL cursor variables
• Checksums – DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM
• Parallel create table
• Job queues – DBMS_JOB
• DBMS_SPACE
• DBMS Application Info
• Sorting improvements – SORT_DIRECT_WRITES

Oracle 7.1

• ANSI/ISO SQL92 Entry Level
• Advanced Replication – symmetric data replication
• Snapshot refresh groups
• Parallel recovery
• Dynamic SQL – DBMS_SQL
• Parallel Query options – query, index creation, data loading
• Server Manager introduced
• Read-only tablespaces

Oracle 7.0 – June 1992

• Database integrity constraints (primary, foreign keys, check constraints, default values)
• Stored procedures and functions, procedure packages
• Database triggers
• View compilation
• User-defined SQL functions
• Role-based security
• Multiple redo members – mirrored online redo log files
• Resource limits – profiles
• Much enhanced auditing
• Enhanced distributed database functionality – INSERTs, UPDATEs, DELETEs, 2PC
• Incomplete database recovery (e.g. to an SCN)
• Cost-based optimiser
• TRUNCATE tables
• Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR)
• SQL*Net v2, MTS
• Checkpoint process
• Data replication – snapshots

Oracle 6.2

• Oracle Parallel Server

Oracle 6 – July 1988

• Row-level locking
• On-line database backups
• PL/SQL in the database

Oracle 5.1

• Distributed queries

Oracle 5.0 – 1986

• Support for the client-server model – PCs can access the DB on a remote host

Oracle 4 – 1984

• Read consistency

Oracle 3 – 1981

• Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)
• Nonblocking queries (no more read locks)
• Re-written in the C programming language

Oracle 2 – 1979

• First public release
• Basic SQL functionality: queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh, by Deepak. December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh – before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;
2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege || ' to sh' from session_privs;
5. select 'grant ' || role || ' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH' group by tablespace_name;

Export the 'SH' schema:

exp username/password file='<location>/sh_bkp.dmp' log='<location>/sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file='<location>/sh_bkp.dmp' log='<location>/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema

Take the schema full export as shown above.

Drop all the objects in the 'SH' schema

To drop all the objects in the schema, connect as the schema owner and spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file='<location>/sh_bkp.dmp' log='<location>/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema

To truncate all the objects in the schema, connect as the schema owner and spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the reference key constraints and disable them. Spool the output of the query and run the script:

select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');

Importing the 'SH' schema:

imp username/password file='<location>/sh_bkp.dmp' log='<location>/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user:

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing to a different schema, use the remap_schema option.
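For example (the target schema name sh_copy is illustrative, not from the original post):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=sh:sh_copy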

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING, by Deepak. December 15, 2009

CRON JOB SCHEDULING – IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.
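As an illustration only (exact defaults vary by distribution), /etc/crontab uses the same five time fields described below but adds a user-name field before the command:

# m  h  dom mon dow user  command
25   6  *   *   *   root  run-parts /etc/cron.daily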

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file, you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l    # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use the name)
6. The command to run

More graphically, they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0-7)
|     |     |     +----------- Month (1-12)
|     |     +----------------- Day of month (1-31)
|     +----------------------- Hour (0-23)
+----------------------------- Min (0-59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers, you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup – scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp\coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registrydatabase
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run. So edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under Switchover primary to standby in 10g by Deepak mdash 1 Comment December 15 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I Before Switchover

1 As I always recommend test the Switchover first on your testing systems before working on Production

2 Verify the primary database instance is open and the standby database instance is mounted

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following command on both the Primary and Standby databases to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use Real-time apply.

II Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
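As a quick sanity check (not part of the original write-up), you can confirm the role change on both instances before handing the databases back to users:

SQL> select name, database_role, switchover_status from v$database;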


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak – Leave a comment, December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the Original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature that was released with Oracle Database 11g release 1, and that provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows:

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set.

To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows:

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU intensive operations. Furthermore, additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2, the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");
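Just to illustrate the transparency (an assumed follow-on, not shown in the white paper), once the wallet is open the encrypted column is used like any other column:

SQL> INSERT INTO DP.EMP VALUES (100, 'Scott', 5000);
SQL> SELECT empname, salary FROM DP.EMP;  -- salary is decrypted transparently on SELECT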

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued:

Although the ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 – Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 – Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP" 6.25 KB 3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
/ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error:

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
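For reference, opening the wallet on the target would look something like the sketch below; this command is not part of the white paper excerpt, and on some 10gR2 patch levels the keyword is AUTHENTICATED BY rather than IDENTIFIED BY:

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";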

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 – Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 – Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.
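For completeness, here is a sketch of the same network import with the encryption password removed (adapted from the earlier example; not shown in the white paper):

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND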

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns
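As an illustration (an assumed command line, not one given in the white paper), a metadata-only import of the earlier dump file would not need the wallet or the encryption password:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp CONTENT=METADATA_ONLY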

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified. This is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement.

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition, on both the source and target database, in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak – Leave a comment, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
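For instance (an illustrative command, not from the original article), a schema export that filters out grants and statistics could look like this:

> expdp username/password SCHEMAS=scott DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp EXCLUDE=GRANT EXCLUDE=STATISTICS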

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.
• Add more dump files if there is insufficient disk space for an export file.
• Change the default size of the dump files.
• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).
• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.
• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).
• Attach to a job from a remote site (such as from home) to monitor status.

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, Network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
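As a small illustration (the database link name and schema below are assumptions, not part of the original text), a network export pulling data from a remote instance might look like:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=remote_hr.dmp SCHEMAS=hr NETWORK_LINK=remote_db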

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak – Leave a comment, December 10, 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1. Take a full backup using RMAN.

SQL> archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 – Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF

input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF

input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF

input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF

input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF

input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF

input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF

input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF

input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF

input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters (a concrete sketch follows the generic form below):

db_file_name_convert='target db oradata path','clone db oradata path'

log_file_name_convert='target db oradata path','clone db oradata path'
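For example (illustrative Windows paths, assuming the target's datafiles live under C:\oracle\oradata\test and the clone's under C:\oracle\oradata\clone):

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'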

3. Start up the listener using the lsnrctl command, and then start up the clone DB in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running…

SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using RESETLOGS.

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace.

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak – Leave a comment, December 9, 2009

Oracle 10g – Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example, the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2. Install the Oracle database software without the starter database on the STANDBY server, and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY Redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY Redo log groups.
My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable Archiving on PRIMARY.
If your PRIMARY database is not already in Archive Log mode, enable the archive log mode:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

5. Set PRIMARY Database Initialization Parameters.
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create pfile from spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='<Oracle home>\database\pfilePRIMARY.ora' from spfile;
(Note: replace '<Oracle home>' with your Oracle home path.)

- On UNIX:
SQL> create pfile='<Oracle home>/dbs/pfilePRIMARY.ora' from spfile;
(Note: replace '<Oracle home>' with your Oracle home path.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create spfile from pfile, and restart the PRIMARY database using the new spfile.
Data Guard must use SPFILE. Create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<Oracle home>\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='<Oracle home>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;
(Note: replace '<Oracle home>' with your Oracle home path.)

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<Oracle home>/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='<Oracle home>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;
(Note: replace '<Oracle home>' with your Oracle home path.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY Server.
On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY Server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over

3) Create directories (multiplexing) for the online logs, for example on Windows E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over

2. Create a Control File for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> alter database open;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for dump and archived log destinations: create the adump, bdump, cdump and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. And then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<Oracle home>\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='<Oracle home>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='<Oracle home>/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='<Oracle home>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: replace '<Oracle home>' with your Oracle home path.)

12. Start Redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled a weekly hot whole-database backup against my PRIMARY database, which also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target /@STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management.
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly, otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


TIMESTAMP
--------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN 2009-08-22 23:19:07

1 row selected.

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END 2009-08-22 23:20:13

1 row selected.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

SQL> select count(*) from dba_objects where status='INVALID';

COUNT(*)
----------
0

1 row selected.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 – Prod
PL/SQL Release 10.1.0.2.0 – Production
CORE 10.1.0.2.0 Production
TNS for 32-bit Windows: Version 10.1.0.2.0 – Production
NLSRTL Version 10.1.0.2.0 – Production

5 rows selected.

Check the database to verify that everything is working fine.


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host by Deepak – 3 Comments, February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database – from Metalink ID 732624.1

hi

Just wanted to share this topic

How to duplicate a database without connecting to the target database, using backups taken with RMAN, on an alternate host.
Solution: Follow the steps below.
1) Export ORACLE_SID=<SID name as on production>

Create the init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:

RMAN> restore controlfile from '<backuppiece of controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure that the backup pieces are in the same location as they were on the production DB. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command.

RMAN> catalog backuppiece '<piece name and path>';
If there are more backup pieces, they can be cataloged using the command:
RMAN> catalog start with '<path where backuppieces are stored>';
5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:
run {
set newname for datafile 1 to 'newLocation/system.dbf';
set newname for datafile 2 to 'newLocation/undotbs.dbf';
...
restore database;
switch datafile all;
}


Features introduced in the various Oracle server releases

Filed under: Features Of Various release of Oracle Database by Deepak – Leave a comment, February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) – September 2005

• Transparent Data Encryption
• Async commits
• CONNECT role can now only connect
• Passwords for DB Links are encrypted
• New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (10.1.0)

• Grid computing – an extension of the clustering feature (Real Application Clusters)
• Manageability improvements (self-tuning features)
• Performance and scalability improvements
• Automated Storage Management (ASM)
• Automatic Workload Repository (AWR)
• Automatic Database Diagnostic Monitor (ADDM)
• Flashback operations available on row, transaction, table or database level
• Ability to UNDROP a table from a recycle bin
• Ability to rename tablespaces
• Ability to transport tablespaces across machine types (e.g. Windows to Unix)
• New 'drop database' statement
• New database scheduler – DBMS_SCHEDULER
• DBMS_FILE_TRANSFER package
• Support for bigfile tablespaces, that is, up to 8 Exabytes in size
• Data Pump – faster data movement with expdp and impdp

Oracle 9i Release 2 (9.2.0)

• Locally Managed SYSTEM tablespaces
• Oracle Streams – new data sharing/replication feature (can potentially replace Oracle Advanced Replication and Standby Databases)
• XML DB (Oracle is now a standards compliant XML database)
• Data segment compression (compress keys in tables – only when loading data)
• Cluster file system for Windows and Linux (raw devices are no longer required)
• Create logical standby databases with Data Guard
• Java JDK 1.3 used inside the database (JVM)
• Oracle Data Guard enhancements (SQL Apply mode – logical copy of primary database, automatic failover)
• Security improvements – default install accounts locked, VPD on synonyms, AES, Migrate Users to Directory

Oracle 9i Release 1 (9.0.1) - June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle creates its own "rollback segments" and sizes them automatically without any DBA involvement.
Flashback query (dbms_flashback.enable) - one can query data as it looked at some point in the past. This feature allows users to correct wrongly committed transactions without contacting the DBA to do a database restore.
Use Oracle Ultra Search for searching databases, file systems, etc. The Ultra Search crawler fetches data and hands it to Oracle Text to be indexed.
Oracle Nameserver is still available but deprecated in favour of LDAP Naming (using the Oracle Internet Directory Server). A nameserver proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.
Oracle Parallel Server's (OPS) scalability was improved - now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster; applications do not need to be cluster aware anymore.
The Oracle Standby DB feature renamed to Oracle Data Guard. New logical standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.
Scrolling cursor support - Oracle9i allows fetching backwards in a result set.
Dynamic Memory Management - buffer pools and the shared pool can be resized on the fly. This eliminates the need to restart the database each time parameter changes are made.
On-line table and index reorganization.
VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net). VI provides fast communications between components in a cluster.
Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.
Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.
PL/SQL programs can be natively compiled to binaries.
Deep data protection - fine-grained security and auditing. Security is enforced at the DB level; SQL access does not mean unrestricted access.
Resumable backups and statements - suspend a statement instead of rolling back immediately.
List Partitioning - partitioning on a list of values.
ETL (Extract, Transformation, Load) operations - with external tables and pipelining.
OLAP - Express functionality included in the DB.
Data Mining - Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)

Static HTTP server included (Apache)
JVM Accelerator to improve performance of Java code
Java Server Pages (JSP) engine
MemStat - a new utility for analyzing Java memory footprints
OIS - Oracle Integration Server introduced
PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web
Enterprise Manager enhancements - including new HTML-based reporting and Advanced Replication functionality included
New Database Character Set Migration utility included

Oracle 8i (8.1.6)

PL/SQL Server Pages (PSPs)
DBA Studio introduced
Statspack
New SQL functions (rank, moving average)
ALTER FREELISTS command (previously done by DROP/CREATE TABLE)
Checksums always on for SYSTEM tablespace, allowing many possible corruptions to be fixed before writing to disk
XML Parser for Java
New PL/SQL encrypt/decrypt package introduced
Users and schemas separated
Numerous performance enhancements

Oracle 8i (8.1.5)

Fast Start recovery - checkpoint rate auto-adjusted to meet roll-forward criteria
Reorganize indexes/index-only tables while users are accessing data - online index rebuilds
Log Miner introduced - allows on-line or archived redo logs to be viewed via SQL
OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication
Advanced Queueing improvements (security, performance, OO4O support)
User security improvements - more centralisation, single enterprise user, users/roles across multiple databases
Virtual private database
Java stored procedures (Oracle Java VM)
Oracle iFS
Resource management using priorities - resource classes
Hash and composite partitioned table types
SQL*Loader direct load API
Copy optimizer statistics across databases to ensure the same access paths across different environments
Standby database - auto shipping and application of redo logs; read-only queries on the standby database allowed
Enterprise Manager v2 delivered
NLS - Euro symbol supported
Analyze tables in parallel
Temporary tables supported
Net8 support for SSL, HTTP, HOP protocols
Transportable tablespaces between databases
Locally managed tablespaces - automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability
Drop column on table (finally!)
DBMS_DEBUG PL/SQL package
DBMS_SQL replaced by the new EXECUTE IMMEDIATE statement
Progress monitor to track long-running DML and DDL
Functional indexes - NLS, case insensitive, descending

Oracle 8.0 - June 1997

Object-relational database: object types (not just date, character, number as in v7), SQL3 standard
Call external procedures
LOBs - more than one per table
Partitioned tables and indexes: export/import individual partitions, partitions in multiple tablespaces, online/offline backup/recover individual partitions, merge/balance partitions
Advanced Queuing for message handling
Many performance improvements to SQL/PL/SQL/OCI, making more efficient use of CPU/memory; v7 limits extended (e.g. 1000 columns/table, 4000-byte VARCHAR2)
Parallel DML statements
Connection pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users
Improved "STAR" query optimizer
Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM in v7)
Performance improvements in OPS - global V$ views introduced across all instances, transparent failover to a new node
Data cartridges introduced in the database (e.g. image, video, context, time, spatial)
Backup/recovery improvements - tablespace point-in-time recovery, incremental backups, parallel backup/recovery; Recovery Manager introduced
Security Server introduced for central user administration; user password expiry, password profiles allow custom password schemes, privileged database links (no need for a password to be stored)
Fast refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel, Replication Manager introduced
Index-organized tables
Deferred integrity constraint checking (deferred until end of transaction instead of end of statement)
SQL*Net replaced by Net8
Reverse key indexes
Any VIEW updateable
New ROWID format

Oracle 7.3

Partitioned views
Bitmapped indexes
Asynchronous read-ahead for table scans
Standby database
Deferred transaction recovery on instance startup
Updatable join views (with restrictions)
SQL*DBA no longer shipped
Index rebuilds
db_verify introduced
Context option
Spatial data option
Tablespace changes - coalesce, temporary, permanent
Trigger compilation and debug
Unlimited extents on the STORAGE clause
Some init.ora parameters modifiable online - e.g. TIMED_STATISTICS
Hash joins, antijoins
Histograms
Dependencies
Oracle Trace
Advanced Replication object groups
PL/SQL - UTL_FILE

Oracle 7.2

Resizable, autoextend data files
Shrink rollback segments manually
Create table/index UNRECOVERABLE
Subquery in FROM clause
PL/SQL wrapper
PL/SQL cursor variables
Checksums - DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM
Parallel create table
Job queues - DBMS_JOB
DBMS_SPACE
DBMS Application Info
Sorting improvements - SORT_DIRECT_WRITES

Oracle 7.1

ANSI/ISO SQL92 Entry Level
Advanced Replication - symmetric data replication
Snapshot refresh groups
Parallel recovery
Dynamic SQL - DBMS_SQL
Parallel query options - query, index creation, data loading
Server Manager introduced
Read-only tablespaces

Oracle 7.0 - June 1992

Database integrity constraints (primary and foreign keys, check constraints, default values)
Stored procedures and functions, procedure packages
Database triggers
View compilation
User-defined SQL functions
Role-based security
Multiple redo members - mirrored online redo log files
Resource limits - profiles
Much enhanced auditing
Enhanced distributed database functionality - INSERTs, UPDATEs, DELETEs, 2PC
Incomplete database recovery (e.g. to an SCN)
Cost based optimiser
TRUNCATE tables
Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR)
SQL*Net v2, MTS
Checkpoint process
Data replication - snapshots

Oracle 6.2

Oracle Parallel Server

Oracle 6 - July 1988

Row-level locking
On-line database backups
PL/SQL in the database

Oracle 5.1

Distributed queries

Oracle 5.0 - 1986

Support for the client-server model - PCs can access the DB on a remote host

Oracle 4 - 1984

Read consistency

Oracle 3 - 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)
Non-blocking queries (no more read locks)
Re-written in the C programming language

Oracle 2 - 1979

First public release; basic SQL functionality, queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh, by Deepak - December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file.

1. SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;
2. Verify the total number of objects from the above query.
3. Write dynamic queries as below to generate the grant statements (a spool sketch follows this list):
4. select 'grant ' || privilege || ' to sh;' from session_privs;
5. select 'grant ' || role || ' to sh;' from session_roles;
6. Query the default tablespace and its size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH' group by tablespace_name;
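A minimal sketch of spooling those dynamic queries into a re-runnable grants script (the file name and the SQL*Plus settings are assumptions; session_privs and session_roles must be queried while connected as SH):

SQL> set pages 0 feedback off
SQL> spool sh_grants.sql
SQL> select 'grant ' || privilege || ' to sh;' from session_privs;
SQL> select 'grant ' || role || ' to sh;' from session_roles;
SQL> spool off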

Export the 'SH' schema:

exp username/password file=/location/sh_bkp.dmp log=/location/sh_exp.log owner=SH direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema:

Take the schema full export as shown above.

Drop all the objects in lsquoSHrsquo schema

To drop the all the objects in the Schema

Connect the schema

Spool the output

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts; all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

To enable the constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';'
FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in lsquoSHrsquo schema

To truncate the all the objects in the Schema

Connect the schema

Spool the output

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts; all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the referencing (foreign key) constraints and disable them. Spool the output and run the generated script (a sketch that generates the DISABLE statements follows the query).

SELECT constraint_name, constraint_type, table_name FROM all_constraints
WHERE constraint_type='R'
AND r_constraint_name IN (SELECT constraint_name FROM all_constraints
WHERE table_name='TABLE_NAME');
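To turn that listing into something runnable, one possible sketch generates the DISABLE statements directly; spool it and run it like the other generated scripts above (TABLE_NAME is the same placeholder as in the query above):

select 'alter table '||table_name||' disable constraint '||constraint_name||';'
from all_constraints
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
                          where table_name='TABLE_NAME');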

Importing the 'SH' schema:

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the REMAP_SCHEMA option, as sketched below.
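For example, a minimal sketch of importing the dump into a different schema (the target schema name SH_TEST is an assumption used only for illustration):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=SH:SH_TEST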

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING, by Deepak - December 15, 2009

CRON JOB SCHEDULING - IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d /etc/cron.daily /etc/cron.hourly /etc/cron.monthly /etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner: any script which is executable and placed inside them will run at the frequency its name suggests.

For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day.

The time at which the scripts in those system-wide directories run is not something an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) at which to execute them.

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand Each line is a collection of six fields separated by spaces

The fields are

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sunday, or use the name)
6. The command to run

More graphically they would look like this:

*     *     *     *     *        Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0 - 7)
|     |     |     +----------- Month (1 - 12)
|     |     +----------------- Day of month (1 - 31)
|     +----------------------- Hour (0 - 23)
+----------------------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary) When you save the file and quit your editor it will be installed into the system unless it is found to contain errors

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, you should redirect the output as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup ndash scheduling in windows environment

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on "Add a scheduled task".
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run; edit the scheduled task and enter the new password.
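Where the graphical wizard is not convenient, the same job can usually be created from the command prompt with schtasks; a minimal sketch, assuming the batch file is kept in D:\scripts and a daily 02:00 run (task name, path, time and credentials are all assumptions):

schtasks /create /tn "cold_bkp" /tr "D:\scripts\cold_bkp.bat" /sc daily /st 02:00 /ru <os_user> /rp <os_password>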


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g, by Deepak - December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I. Before the switchover

1. As always, test the switchover first on your test systems before working on production.

2 Verify the primary database instance is open and the standby database instance is mounted

3 Verify there are no active users connected to the databases

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary and standby databases to find out:

SQL> select sequence#, applied from v$archived_log;

Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received use Real-time apply

II Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:

SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:

SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:

- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can simply open the new primary database STAN:

SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:

SQL> ALTER SYSTEM SWITCH LOGFILE;
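As a quick sanity check (not part of the original steps), the new roles can be confirmed on both instances:

SQL> select name, database_role, switchover_status from v$database;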


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump, by Deepak - December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in todayrsquos business world present manifold challenges As incidences of data theft increase protecting data privacy continues to be of paramount importance Now a de facto solution in meeting regulatory compliances data encryption is one of a number of security tools in use The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works Please note that this paper does not apply to the Original ExportImport utilities For information regarding the Oracle Data Pump Encrypted Dump File feature that that was released with Oracle Database 11g release 1 and that provides the ability to encrypt all exported data as it is written to the export dump file set refer to the Oracle Data Pump Encrypted Dump File Support whitepaper

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT key word Once a table column is marked with this keyword encryption and decryption are performed automatically without the need for any further user or application intervention The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column When an authorized user inserts new data into such a column TDE column encryption encrypts this data prior to storing it in the database Conversely when the user selects the column from the database TDE column encryption transparently decrypts this data back to its original clear text

format Column data encrypted using TDE remains protected while it resides in the database However the protection offered by TDE does not extend beyond the database and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it using a dump file encryption key derived from a userprovided password before it is written to the export dump file set Column data encrypted using Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set Whenever Oracle Data Pump unloads or loads tables containing encrypted columns it uses the external tables mechanism instead of the direct path mechanism The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns it is first necessary to create an Oracle Encryption Wallet which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key In the following example the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the walletNext create a table with an encrypted column The password used below in the IDENTIFIED

BY clause is optional and TDE uses it to derive the tables column encryption key If the

IDENTIFIED BY clause is omitted then TDE creates the tables column encryption key based on random data

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP
     (empid NUMBER(6),
      empname VARCHAR2(100),
      salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");
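If the wallet is later closed (for example after an instance restart), it has to be reopened before the encrypted column can be queried or exported; a minimal sketch, reusing the wallet password from the example above:

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";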

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table In the following example the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump files encryption key Oracle Data Pump re-encrypts the column data in the dump file using this dump file key When re-encrypting encrypted column data Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128)Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTERSYSTEM and CREATE TABLE statements

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export Release 102040 ndash Production on Monday 09 July 2009

82123

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39001 invalid argument value

ORA-39180 unable to encrypt ENCRYPTION_PASSWORD

ORA-28365 wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports as used in the previous

examples If a schema tablespace or full mode export is performed then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter

There is however one exception transportable tablespace export mode does not support

encrypted columns An attempt to perform an export using this mode when the tablespace

contains tables with encrypted columns yields the following error

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export Release 102040 ndash Production on Wednesday 09 July 2009

84843

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime dp directory=dpump_dir

dumpfile=emp tables=emp

Estimate in progress using BLOCKS methodhellip

Processing object type TABLE_EXPORTTABLETABLE_DATA

Total estimation using BLOCKS method 16 KB

Processing object type TABLE_EXPORTTABLETABLE

exported ldquoDPrdquordquoEMPrdquo 625 KB 3 rows

ORA-39173 Encrypted data has been stored unencrypted in dump file

set

Master table ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime successfully loadedunloaded

Dump file set for DPSYS_EXPORT_TABLE_01 is

adejkaloger_lx9oracleworkempdmp

Job ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime completed with 1 error(s) at 084857

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export Release 102040 ndash Production on Thursday 09 July 2009

85507

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime system

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123 Data Pump transportable tablespace job aborted

ORA-29341 The transportable set is not self-contained

Job ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime stopped due to fatal error

at 085525

The ORA-29341 error in the previous example is not very informative If the same transportable

tablespace export is executed using Oracle Database 11g release 1 that version does a better job

at pinpointing the problem via the information in the ORA-39929 error

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables then an ORA-26033 exception is raised when you try to import the export dump file set In the example of the DPEMP table the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed For example assume in the following example that the DPEMP table on the target system has been created exactly as it is on the source system except that the

ENCRYPT attribute has not been assigned to the SALARY column The output and resulting error messages would look as follows

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export Release 111070 ndash Production on Thursday 09 July 2009

90900

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 11g Enterprise Edition Release

111070 ndash Production

With the Partitioning Data Mining and Real Application Testing

Options Starting ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime system

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123 Data Pump transportable tablespace job aborted

ORA-39187 The transportable set is not self-contained violation list

is ORA-39929 Table DPEMP in tablespace DP has encrypted columns which

are not supported

Job ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime stopped due to fatal error

at 090921

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it

into the connected database instance There are no export dump files involved in a network

mode import and therefore there is no re-encrypting of TDE column data Thus the use of the

ENCRYPTION_PASWORD parameter is prohibited in network mode imports as shown in the

following example

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import Release 102040 ndash Production on Friday 09 July 2009

110057

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39005 inconsistent arguments

ORA-39115 ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import Release 102040 ndash Production on Thursday 09 July 2009

105540

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release 102040 -

Production

With the Partitioning Data Mining and Real Application Testing options

Master table ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime successfully loadedunloaded

Starting ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime dp directory=dpump_dir

dumpfile=empdmp tables=emp encryption_password=

table_exists_action=append

Processing object type TABLE_EXPORTTABLETABLE

ORA-39152 Table ldquoDPrdquordquoEMPrdquo exists Data will be appended to existing

table but all dependent metadata will be skipped due to

table_exists_action of append

Processing object type TABLE_EXPORTTABLETABLE_DATA

ORA-31693 Table data object ldquoDPrdquordquoEMPrdquo failed to loadunload and is being

skipped due to error

ORA-02354 error in exportingimporting data

ORA-26033 column ldquoEMPrdquoSALARY encryption properties differ for source or

target table

Job ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime completed with 2 error(s) at 105548

Oracle White PaperEncryption with Oracle Data Pump

By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes

encrypted column data the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed The following are cases in which the encryption password and Oracle Wallet are not needed

A full metadata-only import.
A schema-mode import in which the referenced schemas do not include tables with encrypted columns.
A table-mode import in which the referenced tables do not include encrypted columns.

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp) As is always the case when dealing with TDE columns the Oracle Wallet must first be open before creating the external table The following example creates an external table called DPXEMP and populates it using the data in the DPEMP table Notice that datatypes for the columns are not specified This is because they are determined by the column datatypes in the source table in the SELECT subquery

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1 The SQL engine selects the data for the table DPEMP from the database If any columns in the table are marked as encrypted as the salary column is for DPEMP then TDE decrypts the column data as part of the select operation

2 The SQL engine then inserts the data which is in clear text format into the DPXEMP table If any columns in the external table are marked as encrypted as one of its columns is then TDE encrypts this column data as part of the insert operation

3 Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file The data in an external table can be written only once when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed However the data in the external table can be selected any number of times using a simple SQL SELECT statement The steps involved in selecting data with encrypted columns from an external table are as follows

1 The SQL engine initiates a select operation Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is called to read the data from the external export file

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump and the only encryption-related parameter available was ENCRYPTION_PASSW ORD So by default if the ENCRYPTION_PASSWORD is present on the command line then it applies only to TDE encrypted columns (if there are no such columns being exported then the parameter is ignored)

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1 the ability to encrypt the entire export dump file set is introduced and with it several new encrypted-related parameters A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump but instead means that the entire export dump file set will be encrypted To encrypt only TDE columns using Oracle Data Pump 11g it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY So the 10g example previously shown becomes the following in 11g

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g, by Deepak - December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

httpwwworaclecomtechnologyobeobe10gdbstoragedatapumpdatapumphtm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides regular operating system command-line mode there is now a very powerful interactive Command-line mode which allows the user to monitor and control Data Pump Export and Import operations Changing from Original ExportImport to Oracle Data Pump Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created you need to grant READ and WRITE permission on the directory to other users For example to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1 you must execute the following command

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to

Data Pump

Data Pump commands have a similar look and feel to the original Export and Import

commands but are different Below are a few examples that demonstrate some of these

differences

1) Example import of tables from scottrsquos account to jimrsquos account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job You cannot mix the two parameters in one job

Both parameters work with Data Pump Import as well and you can use different INCLUDE and

EXCLUDE options for different operations on the same dump file
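For instance, a minimal sketch of a schema-mode export that keeps everything except indexes and statistics (the schema and file names are assumptions):

> expdp username/password SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_no_idx.dmp EXCLUDE=INDEX EXCLUDE=STATISTICS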

3) Tuning Parameters

Unlike original Export and Import which used the BUFFER COMMIT COMPRESS

CONSISTENT DIRECT and RECORDLENGTH parameters Data Pump needs no tuning to achieve maximum performance Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner Initialization parameters should be sufficient upon installation

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with the older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pumprsquos interactive command-line mode You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa)

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it is best to specify the parameter within a parameter file.

Example

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (a sketch of attaching to a job follows this list):

See the status of the job. All of the information needed to monitor the job's execution is available.
Add more dump files if there is insufficient disk space for an export file.
Change the default size of the dump files.
Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).
Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.
Increase or decrease the number of active worker processes for the job (Enterprise Edition only).
Attach to a job from a remote site (such as from home) to monitor status.
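A minimal sketch of re-attaching to a running export job and using the interactive commands (the job name hr matches the PARALLEL example above; the commands are issued one at a time at the Export> prompt, not as a single script):

> expdp username/password ATTACH=hr

Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE
Export> START_JOB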

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, for example from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export also gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
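A minimal sketch of a network-mode import over a database link (the link name source_db and the object names are assumptions; note that no dump file is written):

> impdp username/password TABLES=hr.employees DIRECTORY=dpump_dir1 NETWORK_LINK=source_db TABLE_EXISTS_ACTION=APPEND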

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN, by Deepak - December 10, 2009

Clone database using RMAN

Target db: test
Clone db: clone

On the target database:

1. Take a full backup using RMAN.

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination coracleora92RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQLgt ho rman

Recovery Manager Release 92010 ndash Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

      DBID
----------
1972233550

In the clone database:

1. Create the service and password file, add entries to the tnsnames.ora and listener.ora files, and create all the folders needed for the database.

2. Edit the pfile and add the following parameters (a minimal example pfile is sketched below):

Db_file_name_convert='<target db oradata path>','<clone db oradata path>'
Log_file_name_convert='<target db oradata path>','<clone db oradata path>'

3. Start the listener using the lsnrctl command, then start the clone database in NOMOUNT using the pfile.
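A minimal sketch of the clone pfile, assuming the clone is named CLONE and its files live under C:\oracle\oradata\clone (the names and paths are illustrative; the remaining parameters are copied from the target pfile):

db_name=clone
control_files='C:\oracle\oradata\clone\control01.ctl'
db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'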

SQL> conn / as sysdba
Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started.

Total System Global Area  135338868 bytes
Fixed Size                   453492 bytes
Variable Size             109051904 bytes
Database Buffers           25165824 bytes
Redo Buffers                 667648 bytes

SQL> ho lsnrctl status
SQL> ho lsnrctl stop
SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test   (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'   (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running...

SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8. The duplicate will run for a while; once it completes, exit from RMAN and open the database with RESETLOGS.

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace (an example statement is sketched after the queries below).

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

      DBID
----------
1972233550
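A minimal sketch of step 10, assuming a tempfile location under the clone's oradata directory and a tablespace named TEMP (size and path are illustrative only):

SQL> create temporary tablespace temp
     tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 200m autoextend on;
SQL> alter database default temporary tablespace temp;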


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak - Leave a comment - December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips for the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use a Flash Recovery Area and OMF.

I. Before you get started:

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server, and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3. Test the STANDBY database creation in a test environment first before working on the production database.

II. On the PRIMARY Database Side:

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

     BYTES
----------
  52428800
  52428800
  52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='<Oracle home path>\database\pfilePRIMARY.ora' from spfile;
- On UNIX:
SQL> create pfile='<Oracle home path>/dbs/pfilePRIMARY.ora' from spfile;
(Note: replace <Oracle home path> with your actual Oracle home path.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile, and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<Oracle home path>\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='<Oracle home path>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<Oracle home path>/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='<Oracle home path>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;
(Note: replace <Oracle home path> with your actual Oracle home path.)

III. On the STANDBY Database Site:

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all of the parameter entries are listed here.)

4. On the STANDBY server, create all the required directories for the dump and archived log destinations: create the adump, bdump, cdump, udump and archived log destination directories for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY (example entries are sketched below). Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
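A minimal sketch of the two tnsnames.ora entries, assuming hypothetical hostnames primhost and stbyhost and the default listener port 1521 (adjust to your own environment):

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primhost)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stbyhost)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )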

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID (an example for each platform is sketched below).
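A minimal sketch, with an assumed Oracle home path (use your actual installation path):

- On Windows:
C:\> set ORACLE_SID=STANDBY
C:\> set ORACLE_HOME=E:\oracle\product\10.2.0\db_1

- On UNIX:
$ export ORACLE_SID=STANDBY
$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1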

11. Start up (nomount) the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<Oracle home path>\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='<Oracle home path>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='<Oracle home path>/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='<Oracle home path>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: replace <Oracle home path> with your actual Oracle home path.)

12. Start Redo Apply.
1) On the STANDBY database, start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify that the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify that the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance:

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management.
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update or recreate the password file for the STANDBY database.


NLSRTL Version 10.1.0.2.0 - Production

5 rows selected.

Check the database to confirm that everything is working fine.


Duplicate Database With RMAN Without Connecting To Target Database

Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host by Deepak - 3 Comments - February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink ID 732624.1

Hi,

Just wanted to share this topic.

How to duplicate a database without connecting to the target database, using backups taken with RMAN, on an alternate host.

Solution - follow the steps below:

1) Export ORACLE_SID=<SID name as of production>.
Create an init.ora file and set db_name=<dbname of production> and control_files=<location where you want the controlfile to be restored>.

2) startup nomount pfile=<path of init.ora>

3) Connect to RMAN and issue the command:
RMAN> restore controlfile from '<backuppiece of controlfile which you took on production>';

The controlfile should be restored.

4) Issue "alter database mount". Make sure that the backup pieces are in the same location where they were on the production db. If you don't have the same location, then make RMAN aware of the changed location using the "catalog" command:

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged using the command:
RMAN> catalog start with '<path where backuppieces are stored>';

5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below (the final recover/open steps are sketched after this run block):

run {
  set newname for datafile 1 to '<newLocation>/system.dbf';
  set newname for datafile 2 to '<newLocation>/undotbs.dbf';
  ...
  restore database;
  switch datafile all;
}
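A minimal sketch of what typically follows once the restore completes, assuming the needed archived logs are available in the cataloged backups (these final steps are implied rather than spelled out in the note above):

RMAN> recover database;
SQL> alter database open resetlogs;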


Features introduced in the various Oracle server releases

Filed under: Features Of Various release of Oracle Database by Deepak - Leave a comment - February 2, 2010

Features introduced in the various server releases. Submitted by admin on Sun, 2005-10-30 14:02.

This document summarizes the differences between Oracle Server releases.

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) - September 2005

- Transparent Data Encryption
- Async commits
- CONNECT role can now only connect
- Passwords for DB links are encrypted
- New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (10.1.0)

- Grid computing - an extension of the clustering feature (Real Application Clusters)
- Manageability improvements (self-tuning features)
- Performance and scalability improvements
- Automated Storage Management (ASM)
- Automatic Workload Repository (AWR)
- Automatic Database Diagnostic Monitor (ADDM)
- Flashback operations available at the row, transaction, table or database level
- Ability to UNDROP a table from a recycle bin
- Ability to rename tablespaces
- Ability to transport tablespaces across machine types (e.g. Windows to Unix)
- New 'drop database' statement
- New database scheduler - DBMS_SCHEDULER
- DBMS_FILE_TRANSFER package
- Support for bigfile tablespaces of up to 8 Exabytes in size
- Data Pump - faster data movement with expdp and impdp

Oracle 9i Release 2 (9.2.0)

- Locally managed SYSTEM tablespaces
- Oracle Streams - new data sharing/replication feature (can potentially replace Oracle Advanced Replication and Standby Databases)
- XML DB (Oracle is now a standards-compliant XML database)
- Data segment compression (compress keys in tables - only when loading data)
- Cluster file system for Windows and Linux (raw devices are no longer required)
- Create logical standby databases with Data Guard
- Java JDK 1.3 used inside the database (JVM)
- Oracle Data Guard enhancements (SQL Apply mode - logical copy of the primary database, automatic failover)
- Security improvements - default install accounts locked, VPD on synonyms, AES, migrate users to directory

Oracle 9i Release 1 (9.0.1) - June 2001

- Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "rollback segments" and size them automatically without any DBA involvement.
- Flashback query (dbms_flashback.enable) - one can query data as it looked at some point in the past. This feature allows users to correct wrongly committed transactions without contacting the DBA to do a database restore.
- Use Oracle Ultra Search for searching databases, file systems, etc. The UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.
- Oracle Nameserver is still available but deprecated in favour of LDAP Naming (using the Oracle Internet Directory Server). A nameserver proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.
- Oracle Parallel Server's (OPS) scalability was improved - now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster. Applications don't need to be cluster-aware anymore.
- The Oracle Standby DB feature was renamed to Oracle Data Guard. New Logical Standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.
- Scrolling cursor support. Oracle9i allows fetching backwards in a result set.
- Dynamic memory management - buffer pools and the shared pool can be resized on-the-fly. This eliminates the need to restart the database each time parameter changes are made.
- On-line table and index reorganization.
- VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net). VI provides fast communications between components in a cluster.
- Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.
- Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.
- PL/SQL programs can be natively compiled to binaries.
- Deep data protection - fine-grained security and auditing. Put security on the DB level; SQL access does not mean unrestricted access.
- Resumable backups and statements - suspend statements instead of rolling back immediately.
- List partitioning - partitioning on a list of values.
- ETL (eXtract, transformation, load) operations - with external tables and pipelining.
- OLAP - Express functionality included in the DB.
- Data Mining - Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)

- Static HTTP server included (Apache)
- JVM Accelerator to improve performance of Java code
- Java Server Pages (JSP) engine
- MemStat - a new utility for analyzing Java memory footprints
- OIS - Oracle Integration Server introduced
- PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web
- Enterprise Manager enhancements - including new HTML-based reporting and Advanced Replication functionality included
- New Database Character Set Migration utility included

Oracle 8i (8.1.6)

- PL/SQL Server Pages (PSPs)
- DBA Studio introduced
- Statspack
- New SQL functions (rank, moving average)
- ALTER FREELISTS command (previously done by DROP/CREATE TABLE)
- Checksums always on for the SYSTEM tablespace, allowing many possible corruptions to be fixed before writing to disk
- XML Parser for Java
- New PL/SQL encrypt/decrypt package introduced
- Users and schemas separated
- Numerous performance enhancements

Oracle 8i (8.1.5)

- Fast Start recovery - checkpoint rate auto-adjusted to meet roll-forward criteria
- Reorganize indexes/index-only tables while users access data - online index rebuilds
- Log Miner introduced - allows online or archived redo logs to be viewed via SQL
- OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication
- Advanced Queueing improvements (security, performance, OO4O support)
- User security improvements - more centralisation, single enterprise user, users/roles across multiple databases
- Virtual private database
- Java stored procedures (Oracle Java VM)
- Oracle iFS
- Resource management using priorities - resource classes
- Hash and composite partitioned table types
- SQL*Loader direct load API
- Copy optimizer statistics across databases to ensure the same access paths across different environments
- Standby database - auto shipping and application of redo logs; read-only queries on the standby database allowed
- Enterprise Manager v2 delivered
- NLS - Euro symbol supported
- Analyze tables in parallel
- Temporary tables supported
- Net8 support for SSL, HTTP, HOP protocols
- Transportable tablespaces between databases
- Locally managed tablespaces - automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability
- Drop column on table (finally!)
- DBMS_DEBUG PL/SQL package
- DBMS_SQL replaced by the new EXECUTE IMMEDIATE statement
- Progress Monitor to track long-running DML and DDL
- Functional indexes - NLS, case insensitive, descending

Oracle 8.0 - June 1997

- Object Relational database: Object Types (not just date, character, number as in v7), SQL3 standard, call external procedures, more than one LOB per table
- Partitioned tables and indexes: export/import individual partitions, partitions in multiple tablespaces, online/offline backup/recover individual partitions, merge/balance partitions
- Advanced Queuing for message handling
- Many performance improvements to SQL/PL/SQL/OCI making more efficient use of CPU/memory; v7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2)
- Parallel DML statements
- Connection Pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users
- Improved "STAR" query optimizer
- Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM in v7)
- Performance improvements in OPS - global V$ views introduced across all instances, transparent failover to a new node
- Data Cartridges introduced in the database (e.g. image, video, context, time, spatial)
- Backup/recovery improvements - tablespace point-in-time recovery, incremental backups, parallel backup/recovery; Recovery Manager introduced
- Security Server introduced for central user administration; user password expiry, password profiles allow custom password schemes; privileged database links (no need for a password to be stored)
- Fast refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel, Replication Manager introduced
- Index-organized tables
- Deferred integrity constraint checking (deferred until end of transaction instead of end of statement)
- SQL*Net replaced by Net8
- Reverse key indexes
- Any VIEW updateable
- New ROWID format

Oracle 7.3

- Partitioned views
- Bitmapped indexes
- Asynchronous read-ahead for table scans
- Standby database
- Deferred transaction recovery on instance startup
- Updatable join views (with restrictions)
- SQL*DBA no longer shipped
- Index rebuilds
- db_verify introduced
- Context Option
- Spatial Data Option
- Tablespace changes - coalesce, temporary/permanent
- Trigger compilation and debug
- Unlimited extents on the STORAGE clause
- Some init.ora parameters modifiable - e.g. TIMED_STATISTICS
- Hash joins, antijoins
- Histograms
- Dependencies
- Oracle Trace
- Advanced Replication object groups
- PL/SQL - UTL_FILE

Oracle 7.2

- Resizable, autoextend data files
- Shrink rollback segments manually
- CREATE TABLE / INDEX ... UNRECOVERABLE
- Subquery in the FROM clause
- PL/SQL wrapper
- PL/SQL cursor variables
- Checksums - DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM
- Parallel create table
- Job queues - DBMS_JOB
- DBMS_SPACE
- DBMS Application Info
- Sorting improvements - SORT_DIRECT_WRITES

Oracle 7.1

- ANSI/ISO SQL92 Entry Level
- Advanced Replication - symmetric data replication
- Snapshot refresh groups
- Parallel recovery
- Dynamic SQL - DBMS_SQL
- Parallel Query options - query, index creation, data loading
- Server Manager introduced
- Read-only tablespaces

Oracle 7.0 - June 1992

- Database integrity constraints (primary/foreign keys, check constraints, default values)
- Stored procedures and functions, procedure packages
- Database triggers
- View compilation
- User-defined SQL functions
- Role-based security
- Multiple redo members - mirrored online redo log files
- Resource limits - profiles
- Much enhanced auditing
- Enhanced distributed database functionality - INSERTs, UPDATEs, DELETEs, 2PC
- Incomplete database recovery (e.g. to an SCN)
- Cost based optimiser
- TRUNCATE tables
- Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR)
- SQL*Net v2, MTS
- Checkpoint process
- Data replication - snapshots

Oracle 6.2

- Oracle Parallel Server

Oracle 6 - July 1988

- Row-level locking
- On-line database backups
- PL/SQL in the database

Oracle 5.1

- Distributed queries

Oracle 5.0 - 1986

- Support for the client-server model - PCs can access the DB on a remote host

Oracle 4 - 1984

- Read consistency

Oracle 3 - 1981

- Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)
- Nonblocking queries (no more read locks)
- Re-written in the C programming language

Oracle 2 - 1979

- First public release
- Basic SQL functionality: queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh by Deepak - 1 Comment - December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh - before exporting:

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file.

1. SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;
2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege || ' to sh;' from session_privs;
5. select 'grant ' || role || ' to sh;' from session_roles;
6. Query the default tablespace and its size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH' group by tablespace_name;

Export the 'SH' schema:

exp username/password file='location\sh_bkp.dmp' log='location\sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file='location\sh_bkp.dmp' log='location\sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema:

Take a full schema export as shown above.

Drop all the objects in the 'SH' schema.

To drop all the objects in the schema, connect as the schema owner and spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file='location\sh_bkp.dmp' log='location\sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema.

To truncate all the objects in the schema, connect as the schema owner and spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the referencing (foreign key) constraints and disable them (a sketch of a statement generator follows). Spool the output of the query and run the resulting script.

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');
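A minimal sketch of a generator for the DISABLE statements, assuming you want to disable every foreign key that references the table being truncated (TABLE_NAME is a placeholder for your table):

select 'alter table '||owner||'.'||table_name||' disable constraint '||constraint_name||';'
from all_constraints
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints where table_name='TABLE_NAME');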

Importing the 'SH' schema:

imp username/password file='location\sh_bkp.dmp' log='location\sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user:

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the REMAP_SCHEMA option (a sketch follows).
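A minimal sketch of the remapped import, assuming a hypothetical target schema SH_COPY (not part of the original post):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=sh:sh_copy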

Check the imported objects and compile any invalid objects.


JOB SCHEDULING

Filed under: JOB SCHEDULING by Deepak - Leave a comment - December 15, 2009

CRON JOB SCHEDULING - IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time at which the scripts in those system-wide directories run is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file, you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sunday, or use the name)
6. The command to run

More graphically, they would look like this:

*    *    *    *    *    Command to be executed
-    -    -    -    -
|    |    |    |    |
|    |    |    |    +----- Day of week (0 - 7)
|    |    |    +---------- Month (1 - 12)
|    |    +--------------- Day of month (1 - 31)
|    +-------------------- Hour (0 - 23)
+------------------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor on your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers, you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1 AM, 2 AM, 3 AM and 4 AM:

# Use a range of hours, matching 1, 2, 3 and 4 AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4 AM
* 1,2,3,4 * * * /bin/some-hourly
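One practical tie-in to the rest of this blog, with an assumed script path and time that are not part of the original post:

# Hypothetical: run a nightly export script as the oracle user at 23:30
30 23 * * * /home/oracle/scripts/nightly_exp.sh >/tmp/nightly_exp.log 2>&1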

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment.

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp\coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on "Add a scheduled task".
2. Click Next and browse to your cold_bkp.bat file.
3. Give the task a name and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run, so edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak - 1 Comment - December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM
Standby = STAN

I. Before Switchover:

1. As I always recommend, test the switchover first on your test systems before working on production.

2. Verify that the primary database instance is open and the standby database instance is mounted.

3. Verify that there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary database and the standby database to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use Real-time apply.

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can simply open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN has now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
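A quick check that is not part of the original steps, but is handy right after step 5 if you want to confirm the new roles from each instance:

SQL> select name, database_role, open_mode from v$database;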


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak - Leave a comment - December 14, 2009

Encryption with Oracle Data Pump
- from an Oracle white paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature, which enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format. Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature therefore remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows:

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows:

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU-intensive operations. Furthermore, additional disk I/O is incurred due to the space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2, the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (
       empid   NUMBER(6),
       empname VARCHAR2(100),
       salary  NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in a later example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"                          6.25 KB       3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job of pinpointing the problem via the information in the ORA-39929 error:

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require the same master key as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

- A full metadata-only import
- A schema-mode import in which the referenced schemas do not include tables with encrypted columns
- A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified. This is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
  empid,
  empname,
  salary ENCRYPT IDENTIFIED BY "column_pwd")
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dpump_dir
  LOCATION ('xemp.dmp')
)
REJECT LIMIT UNLIMITED
AS SELECT * FROM DP.EMP;
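Since the wallet must already be open for the CREATE TABLE above to succeed, a minimal sketch of opening it first (the wallet password shown is illustrative):

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";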

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

SQL> SELECT * FROM DP.XEMP;

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).


Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak - Leave a comment, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

httpwwworaclecomtechnologyobeobe10gdbstoragedatapumpdatapumphtm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS

INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
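For example, a hypothetical export that filters out all index definitions could look like the following (the schema and dump file names are illustrative):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott_noidx.dmp SCHEMAS=scott EXCLUDE=INDEX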

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
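As a minimal sketch (the job name assumes the JOB_NAME=hr example shown further below), the degree of parallelism could be changed from interactive mode like this:

> expdp username/password ATTACH=hr
Export> PARALLEL=8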

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
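For illustration, a hedged sketch of such a session (the job name again assumes the earlier JOB_NAME=hr example) that checks status, stops the job, and later restarts it:

> expdp username/password ATTACH=hr
Export> STATUS
Export> STOP_JOB=IMMEDIATE
> expdp username/password ATTACH=hr
Export> START_JOB
Export> CONTINUE_CLIENT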

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, Network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
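A minimal sketch of a network export that pulls data from a remote instance over a database link (the link name and dump file name are illustrative):

> expdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=remote_db FULL=y DUMPFILE=network_full.dmp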

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak - Leave a comment, December 10, 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1. Take full backup using RMAN

SQL> archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            c:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=17 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=1 stamp=674240935
input archive log thread=1 sequence=15 recid=2 stamp=674240997
input archive log thread=1 sequence=16 recid=3 stamp=674242208
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 ORACLE\ORADATA\TEST\TOOLS01.DBF
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=17 recid=4 stamp=674242270
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE
Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='target db oradata path','clone db oradata path'

log_file_name_convert='target db oradata path','clone db oradata path'
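For instance, assuming the target datafiles live under C:\oracle\oradata\test and the clone's will live under C:\oracle\oradata\clone (paths are illustrative), the entries might look like:

db_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')
log_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')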

3. Start the listener using the lsnrctl command and then start up the clone db in nomount using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect rman.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running...

SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs.

SQL> alter database open resetlogs;

Database altered.

9. Check for the dbid.

10. Create a temporary tablespace (see the sketch below).
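A minimal sketch of step 10 (the tablespace name, file path and size are illustrative):

SQL> CREATE TEMPORARY TABLESPACE temp1 TEMPFILE 'C:\oracle\oradata\clone\temp01.dbf' SIZE 100M;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp1;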

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak - Leave a comment, December 9, 2009

Oracle 10g - Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example, the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$cd %ORACLE_HOME%\database
$orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$cd $ORACLE_HOME/dbs
$orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable Archiving on PRIMARY. If your PRIMARY database is not already in Archive Log mode, enable the archive log mode:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY Database Initialization Parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path in front of '\database'.)

- On UNIX:
SQL> create pfile='/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path in front of '/dbs'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE. Create the SPFILE and restart the database:
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path in front of '\database'.)

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path in front of '/dbs'.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, to the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4. On the STANDBY server, create all required directories for the dump and archived log destinations. Create the directories adump, bdump, cdump, udump, and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to the STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$oradim -NEW -SID STANDBY -STARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$lsnrctl stop
$lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$lsnrctl stop
$lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: specify your Oracle home path in front of '\database' or '/dbs'.)

12. Start Redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


RMAN> catalog backuppiece <piece name and path>
If there are more backuppieces, then they can be cataloged using the command:
RMAN> catalog start with <path where backuppieces are stored>
5) After cataloging the backuppieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:
run {
set newname for datafile 1 to 'newLocation/system.dbf';
set newname for datafile 2 to 'newLocation/undotbs.dbf';
...
restore database;
switch datafile all;
}


Features introduced in the various Oracle server releases

Filed under: Features Of Various release of Oracle Database by Deepak - Leave a comment, February 2, 2010

Features introduced in the various server releases
Submitted by admin on Sun, 2005-10-30 14:02

This document summarizes the differences between Oracle Server releases

Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) - September 2005

Transparent Data Encryption. Async commits. The CONNECT role can now only connect. Passwords for DB Links are encrypted. New asmcmd utility for managing ASM storage.

Oracle 10g Release 1 (10.1.0)

Grid computing - an extension of the clustering feature (Real Application Clusters). Manageability improvements (self-tuning features).

Performance and scalability improvements. Automated Storage Management (ASM). Automatic Workload Repository (AWR). Automatic Database Diagnostic Monitor (ADDM). Flashback operations available on row, transaction, table or database level. Ability to UNDROP a table from a recycle bin. Ability to rename tablespaces. Ability to transport tablespaces across machine types (e.g. Windows to Unix). New 'drop database' statement. New database scheduler - DBMS_SCHEDULER. DBMS_FILE_TRANSFER package. Support for bigfile tablespaces that are up to 8 Exabytes in size. Data Pump - faster data movement with expdp and impdp.

Oracle 9i Release 2 (9.2.0)

Locally Managed SYSTEM tablespaces. Oracle Streams - new data sharing/replication feature (can potentially replace Oracle Advanced Replication and Standby Databases). XML DB (Oracle is now a standards-compliant XML database). Data segment compression (compress keys in tables - only when loading data). Cluster file system for Windows and Linux (raw devices are no longer required). Create logical standby databases with Data Guard. Java JDK 1.3 used inside the database (JVM). Oracle Data Guard enhancements (SQL Apply mode - logical copy of primary database, automatic failover). Security improvements - default install accounts locked, VPD on synonyms, AES, Migrate Users to Directory.

Oracle 9i Release 1 (9.0.1) - June 2001

Traditional rollback segments (RBS) are still available, but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own "Rollback Segments" and size them automatically without any DBA involvement.

Flashback query (dbms_flashback.enable) - one can query data as it looked at some point in the past. This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore.

Use Oracle Ultra Search for searching databases file systems etc The UltraSearch crawler fetch data and hand it to Oracle Text to be indexed

Oracle Nameserver is still available but deprecate in favour of LDAP Naming (using the Oracle Internet Directory Server) A nameserver proxy is provided for backwards compatibility as pre-8i client cannot resolve names from an LDAP server

Oracle Parallel Server's (OPS) scalability was improved - now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster. Applications don't need to be cluster aware anymore.

The Oracle Standby DB feature renamed to Oracle Data Guard New Logical Standby databases replay SQL on standby site allowing the database to be used for normal read write operations The Data Guard Broker allows single step fail-over when disaster strikes

Scrolling cursor support Oracle9i allows fetching backwards in a result set Dynamic Memory Management ndash Buffer Pools and shared pool can be resized on-the-fly

This eliminates the need to restart the database each time parameter changes were made On-line table and index reorganization VI (Virtual Interface) protocol support an alternative to TCPIP available for use with

Oracle Net (SQLNet) VI provides fast communications between components in a cluster

Build in XML Developers Kit (XDK) New data types for XML (XMLType) URIrsquos etc XML integrated with AQ

Cost Based Optimizer now also consider memory and CPU not only disk access cost as before

PLSQL programs can be natively compiled to binaries Deep data protection ndash fine grained security and auditing Put security on DB level SQL

access do not mean unrestricted access Resumable backups and statements ndash suspend statement instead of rolling back

immediately List Partitioning ndash partitioning on a list of values ETL (eXtract transformation load) Operations ndash with external tables and pipelining OLAP ndash Express functionality included in the DB Data Mining ndash Oracle Darwinrsquos features included in the DB

Oracle 8i (817)

Static HTTP server included (Apache) JVM Accelerator to improve performance of Java code Java Server Pages (JSP) engine MemStat ndash A new utility for analyzing Java Memory footprints OIS ndash Oracle Integration Server introduced PLSQL Gateway introduced for deploying PLSQL based solutions on the Web Enterprise Manager Enhancements ndash including new HTML based reporting and

Advanced Replication functionality included New Database Character Set Migration utility included

Oracle 8i (816)

PLSQL Server Pages (PSPrsquos) DBA Studio Introduced Statspack New SQL Functions (rank moving average) ALTER FREELISTS command (previously done by DROPCREATE TABLE) Checksums always on for SYSTEM tablespace allowing many possible corruptions to be

fixed before writing to disk

XML Parser for Java New PLSQL encryptdecrypt package introduced User and Schemas separated Numerous Performance Enhancements

Oracle 8i (815)

Fast Start recovery ndash Checkpoint rate auto-adjusted to meet roll forward criteria Reorganize indexesindex only tables which users accessing data ndash Online index rebuilds Log Miner introduced ndash Allows on-line or archived redo logs to be viewed via SQL OPS Cache Fusion introduced avoiding disk IO during cross-node communication Advanced Queueing improvements (security performance OO4O support User Security Improvements ndash more centralisation single enterprise user usersroles

across multiple databases Virtual private database JAVA stored procedures (Oracle Java VM) Oracle iFS Resource Management using priorities ndash resource classes Hash and Composite partitioned table types SQLLoader direct load API Copy optimizer statistics across databases to ensure same access paths across different

environments Standby Database ndash Auto shipping and application of redo logs Read Only queries on

standby database allowed Enterprise Manager v2 delivered NLS ndash Euro Symbol supported Analyze tables in parallel Temporary tables supported Net8 support for SSL HTTP HOP protocols Transportable tablespaces between databases Locally managed tablespaces ndash automatic sizing of extents elimination of tablespace

fragmentation tablespace information managed in tablespace (ie moved from data dictionary) improving tablespace reliability

Drop Column on table (Finally ) DBMS_DEBUG PLSQL package DBMS_SQL replaced by new EXECUTE

IMMEDIATE statement Progress Monitor to track long running DML DDL Functional Indexes ndash NLS case insensitive descending

Oracle 80 ndash June 1997

Object Relational database Object Types (not just date character number as in v7 SQL3 standard Call external procedures LOB gt1 per table

Partitioned Tables and Indexes exportimport individual partitions partitions in multiple tablespaces Onlineoffline backuprecover individual partitions mergebalance partitions Advanced Queuing for message handling Many performance improvements to SQLPLSQLOCI making more efficient use of

CPUMemory V7 limits extended (eg 1000 columnstable 4000 bytes VARCHAR2) Parallel DML statements Connection Pooling ( uses the physical connection for idle users and transparently re-

establishes the connection when needed) to support more concurrent users Improved ldquoSTARrdquo Query optimizer Integrated Distributed Lock Manager in Oracle PS (as opposed to Operating system DLM

in v7) Performance improvements in OPS ndash global V$ views introduced across all instances

transparent failover to a new node Data Cartridges introduced on database (eg image video context time spatial) BackupRecovery improvements ndash Tablespace point in time recovery incremental

backups parallel backuprecovery Recovery manager introduced Security Server introduced for central user administration User password expiry

password profiles allow custom password scheme Privileged database links (no need for password to be stored)

Fast Refresh for complex snapshots parallel replication PLSQL replication code moved in to Oracle kernel Replication manager introduced

Index Organized tables Deferred integrity constraint checking (deferred until end of transaction instead of end of

statement) SQLNet replaced by Net8 Reverse Key indexes Any VIEW updateable New ROWID format

Oracle 73

Partitioned Views Bitmapped Indexes Asynchronous read ahead for table scans Standby Database Deferred transaction recovery on instance startup Updatable Join Views (with restrictions) SQLDBA no longer shipped Index rebuilds db_verify introduced Context Option Spatial Data Option Tablespaces changes ndash Coalesce Temporary Permanent

Trigger compilation debug Unlimited extents on STORAGE clause Some initora parameters modifiable ndash TIMED_STATISTICS HASH Joins Antijoins Histograms Dependencies Oracle Trace Advanced Replication Object Groups PLSQL ndash UTL_FILE

Oracle 72

Resizable autoextend data files Shrink Rollback Segments manually Create table index UNRECOVERABLE Subquery in FROM clause PLSQL wrapper PLSQL Cursor variables Checksums ndash DB_BLOCK_CHECKSUM LOG_BLOCK_CHECKSUM Parallel create table Job Queues ndash DBMS_JOB DBMS_SPACE DBMS Application Info Sorting Improvements ndash SORT_DIRECT_WRITES

Oracle 71

ANSIISO SQL92 Entry Level Advanced Replication ndash Symmetric Data replication Snapshot Refresh Groups Parallel Recovery Dynamic SQL ndash DBMS_SQL Parallel Query Options ndash query index creation data loading Server Manager introduced Read Only tablespaces

Oracle 70 ndash June 1992

Database Integrity Constraints (primary foreign keys check constraints default values) Stored procedures and functions procedure packages Database Triggers View compilation User defined SQL functions Role based security Multiple Redo members ndash mirrored online redo log files Resource Limits ndash Profiles

Much enhanced Auditing Enhanced Distributed database functionality ndash INSERTS UPDATESDELETES 2PC Incomplete database recovery (eg SCN) Cost based optimiser TRUNCATE tables Datatype changes (ie VARCHAR2 CHAR VARCHAR) SQLNet v2 MTS Checkpoint process Data replication ndash Snapshots

Oracle 62

Oracle Parallel Server

Oracle 6 ndash July 1988

Row-level locking On-line database backups PLSQL in the database

Oracle 51

Distributed queries

Oracle 50 ndash 1986

Supporting for the Client-Server model ndash PCrsquos can access the DB on remote host

Oracle 4 ndash 1984

Read consistency

Oracle 3 ndash 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)

Nonblocking queries (no more read locks) Re-written in the C Programming Language

Oracle 2 ndash 1979

First public release Basic SQL functionality queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Referesh

Filed under: Schema refresh by Deepak - 1 Comment, December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh SH schema

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a SQL file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege || ' to sh' from session_privs;
5. select 'grant ' || role || ' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes/1024/1024) from dba_segments where owner='SH'
group by tablespace_name;

Export the 'sh' schema:

exp username/password file='location/sh_bkp.dmp' log='location/sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema

1. Create the SH schema with the default tablespace and allocate quota on that tablespace (see the sketch below).
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.
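A minimal sketch of step 1 (the password, tablespace names and quota are illustrative):

SQL> CREATE USER sh IDENTIFIED BY sh_pwd
     DEFAULT TABLESPACE example
     TEMPORARY TABLESPACE temp
     QUOTA UNLIMITED ON example;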

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log'
fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

Schema refresh by dropping objects and truncating objects

Export the 'sh' schema:

Take the schema full export as shown above.

Drop all the objects in the 'SH' schema.

To drop all the objects in the schema:

Connect the schema

Spool the output

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge' from user_tables;
SQL> spool off
SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log'
fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema.

To truncate all the objects in the schema:

Connect the schema

Spool the output

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name from user_tables;
SQL> spool off
SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the below query to find the reference (foreign key) constraints and disable them. Spool the output of the below query and run the script.

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log'
fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

Schema refresh in oracle 10g

Here we can use Datapump

Exporting the SH schema through Datapump

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> Drop user SH cascade;

Importing the SH schema through datapump

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option, as in the sketch below.
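A hypothetical example (the target schema name sh_test is illustrative):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=sh:sh_test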

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING by Deepak - Leave a comment, December 15, 2009

CRON JOB SCHEDULING - IN UNIX

To run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are setup when the package is installed via the creation of some special directories

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one which is special these directories allow scheduling of system-wide jobs in a coarse manner Any script which is executable and placed inside them will run at the frequency which its name suggests

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand Each line is a collection of six fields separated by spaces

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0-7)
|     |     |     +----------- Month (1-12)
|     |     +----------------- Day of month (1-31)
|     +----------------------- Hour (0-23)
+----------------------------- Min (0-59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary) When you save the file and quit your editor it will be installed into the system unless it is found to contain errors

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null - meaning you won't see it.

Now we'll finish with some more examples:

Run the `something` command every hour on the hour:
0 * * * * /sbin/something

Run the `nightly` command at ten minutes past midnight every day:
10 0 * * * /bin/nightly

Run the `monday` command every Monday at 2 AM:
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers, you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registrydatabase

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on "Add a scheduled task".
2. Click next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click next and finish the scheduling.

Note

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule them, the job won't run, so edit the scheduled tasks and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak - 1 Comment, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I Before Switchover

1 As I always recommend test the Switchover first on your testing systems before working on Production

2 Verify the primary database instance is open and the standby database instance is mounted

3 Verify there are no active users connected to the databases

4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following command on the Primary database and the Standby database to find out:
SQL> select sequence#, applied from v$archived_log;
Perform SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received use Real-time apply

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect sys@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect sys@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
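As a quick sanity check after the switchover (a standard data dictionary query, offered here as an additional suggestion rather than a step from the original post), the role of each database can be verified on both sides:

SQL> SELECT NAME, DATABASE_ROLE, SWITCHOVER_STATUS FROM V$DATABASE;

STAN should now report a DATABASE_ROLE of PRIMARY and PRIM should report PHYSICAL STANDBY.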


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump, by Deepak - Leave a comment - December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature that was released with Oracle Database 11g release 1, and that provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.
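As an aside (this statement is not part of the white paper; the table and column names are made up for illustration), an existing column can also be marked for TDE column encryption after the fact:

SQL> ALTER TABLE hr.employees MODIFY (salary ENCRYPT);

The keyword can likewise be removed again with MODIFY (salary DECRYPT).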

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows:

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set.

To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows:

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"  6.25 KB  3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
/ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp

TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir dumpfile=dp.dmp

TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote

TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password= table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.
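For reference, a minimal server-side sqlnet.ora sketch for Oracle Advanced Security network encryption is shown below; the specific algorithm choices are illustrative assumptions and are not taken from the white paper.

SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_CHECKSUM_SERVER = required
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA1)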

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data.

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Data Pump 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).


Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g, by Deepak - Leave a comment - December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=()

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
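As a hedged illustration of this filtering (the schema, file and directory names below are made up for the example, not taken from the original post), an export might skip indexes and statistics while a later import of the same dump file keeps only the tables:

> expdp username/password SCHEMAS=scott DIRECTORY=dpump_dir1 DUMPFILE=scott_schema.dmp EXCLUDE=INDEX,STATISTICS

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott_schema.dmp INCLUDE=TABLE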

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export.

Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (a short example follows the list):

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
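A hedged sketch of what that looks like in practice (the job name and credentials here are illustrative assumptions):

$ expdp username/password ATTACH=hr

Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE

$ expdp username/password ATTACH=hr
Export> START_JOB
Export> CONTINUE_CLIENT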

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, such as from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
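A minimal network-mode export sketch (the database link name source_db and the file name are assumptions for illustration):

> expdp username/password FULL=y DIRECTORY=dpump_dir1 DUMPFILE=remote_full.dmp NETWORK_LINK=source_db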

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN, by Deepak - Leave a comment - December 10, 2009

Clone database using Rman

Target db: test

Clone db: clone

In the target database:

1. Take a full backup using RMAN.

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination            c:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters (an illustrative example follows):

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
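As a concrete sketch only (the drive and directory names are assumptions, not values from the original post), the two parameters could look like:

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'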

3. Start the listener using the lsnrctl command and then start up the clone DB in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (target DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (clone DB name)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running...

SQL> select name from v$database;
select name from v$database
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; exit from RMAN and open the database using RESETLOGS.

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace.

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g, by Deepak - Leave a comment - December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use a Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server, and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first before working on the production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database.
- On Windows:
SQL> create pfile='<Oracle home path>\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '<Oracle home path>'.)

- On UNIX:
SQL> create pfile='<Oracle home path>/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '<Oracle home path>'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE. Create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='<Oracle home path>\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='<Oracle home path>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='<Oracle home path>/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='<Oracle home path>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace '<Oracle home path>'.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> alter database open;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for dump and archived log destinations. Create directories adump, bdump, cdump, udump and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows, copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.

1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services (a sample tnsnames.ora sketch follows):
$ tnsping PRIMARY
$ tnsping STANDBY
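For reference, a minimal tnsnames.ora sketch for the two service names might look like the following; the host names and port are assumptions for illustration only.

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = STANDBY))
  )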

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up (nomount) the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<Oracle home path>\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='<Oracle home path>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='<Oracle home path>/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='<Oracle home path>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: specify your Oracle home path to replace '<Oracle home path>'.)

12. Start redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.

Features introduced in the various Oracle server releases

Oracle Database 10g Release 1 (10.1.0)

Performance and scalability improvements
Automated Storage Management (ASM)
Automatic Workload Repository (AWR)
Automatic Database Diagnostic Monitor (ADDM)
Flashback operations available on row, transaction, table or database level
Ability to UNDROP a table from a recycle bin
Ability to rename tablespaces
Ability to transport tablespaces across machine types (e.g. Windows to Unix)
New 'drop database' statement
New database scheduler - DBMS_SCHEDULER
DBMS_FILE_TRANSFER package
Support for bigfile tablespaces that are up to 8 Exabytes in size
Data Pump - faster data movement with expdp and impdp

Oracle 9i Release 2 (9.2.0)

Locally Managed SYSTEM tablespaces Oracle Streams ndash new data sharingreplication feature (can potentially replace Oracle

Advance Replication and Standby Databases) XML DB (Oracle is now a standards compliant XML database) Data segment compression (compress keys in tables ndash only when loading data) Cluster file system for Windows and Linux (raw devices are no longer required) Create logical standby databases with Data Guard Java JDK 13 used inside the database (JVM) Oracle Data Guard Enhancements (SQL Apply mode ndash logical copy of primary database

automatic failover Security Improvements ndash Default Install Accounts locked VPD on synonyms AES

Migrate Users to Directory

Oracle 9i Release 1 (9.0.1) - June 2001

Traditional rollback segments (RBS) are still available but can be replaced with automated System Managed Undo (SMU) Using SMU Oracle will create itrsquos own ldquoRollback Segmentsrdquo and size them automatically without any DBA involvement

Flashback query (dbms_flashbackenable) ndash one can query data as it looked at some point in the past This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore

Use Oracle Ultra Search for searching databases file systems etc The UltraSearch crawler fetch data and hand it to Oracle Text to be indexed

Oracle Nameserver is still available but deprecate in favour of LDAP Naming (using the Oracle Internet Directory Server) A nameserver proxy is provided for backwards compatibility as pre-8i client cannot resolve names from an LDAP server

Oracle Parallel Serverrsquos (OPS) scalability was improved ndash now called Real Application Clusters (RAC) Full Cache Fusion implemented Any application can scale in a database cluster Applications doesnrsquot need to be cluster aware anymore

The Oracle Standby DB feature renamed to Oracle Data Guard New Logical Standby databases replay SQL on standby site allowing the database to be used for normal read write operations The Data Guard Broker allows single step fail-over when disaster strikes

Scrolling cursor support Oracle9i allows fetching backwards in a result set Dynamic Memory Management ndash Buffer Pools and shared pool can be resized on-the-fly

This eliminates the need to restart the database each time parameter changes were made On-line table and index reorganization VI (Virtual Interface) protocol support an alternative to TCPIP available for use with

Oracle Net (SQLNet) VI provides fast communications between components in a cluster

Build in XML Developers Kit (XDK) New data types for XML (XMLType) URIrsquos etc XML integrated with AQ

Cost Based Optimizer now also consider memory and CPU not only disk access cost as before

PLSQL programs can be natively compiled to binaries Deep data protection ndash fine grained security and auditing Put security on DB level SQL

access do not mean unrestricted access Resumable backups and statements ndash suspend statement instead of rolling back

immediately List Partitioning ndash partitioning on a list of values ETL (eXtract transformation load) Operations ndash with external tables and pipelining OLAP ndash Express functionality included in the DB Data Mining ndash Oracle Darwinrsquos features included in the DB

Oracle 8i (8.1.7)

Static HTTP server included (Apache) JVM Accelerator to improve performance of Java code Java Server Pages (JSP) engine MemStat ndash A new utility for analyzing Java Memory footprints OIS ndash Oracle Integration Server introduced PLSQL Gateway introduced for deploying PLSQL based solutions on the Web Enterprise Manager Enhancements ndash including new HTML based reporting and

Advanced Replication functionality included New Database Character Set Migration utility included

Oracle 8i (8.1.6)

PLSQL Server Pages (PSPrsquos) DBA Studio Introduced Statspack New SQL Functions (rank moving average) ALTER FREELISTS command (previously done by DROPCREATE TABLE) Checksums always on for SYSTEM tablespace allowing many possible corruptions to be

fixed before writing to disk

XML Parser for Java New PLSQL encryptdecrypt package introduced User and Schemas separated Numerous Performance Enhancements

Oracle 8i (8.1.5)

Fast Start recovery ndash Checkpoint rate auto-adjusted to meet roll forward criteria Reorganize indexesindex only tables which users accessing data ndash Online index rebuilds Log Miner introduced ndash Allows on-line or archived redo logs to be viewed via SQL OPS Cache Fusion introduced avoiding disk IO during cross-node communication Advanced Queueing improvements (security performance OO4O support User Security Improvements ndash more centralisation single enterprise user usersroles

across multiple databases Virtual private database JAVA stored procedures (Oracle Java VM) Oracle iFS Resource Management using priorities ndash resource classes Hash and Composite partitioned table types SQLLoader direct load API Copy optimizer statistics across databases to ensure same access paths across different

environments Standby Database ndash Auto shipping and application of redo logs Read Only queries on

standby database allowed Enterprise Manager v2 delivered NLS ndash Euro Symbol supported Analyze tables in parallel Temporary tables supported Net8 support for SSL HTTP HOP protocols Transportable tablespaces between databases Locally managed tablespaces ndash automatic sizing of extents elimination of tablespace

fragmentation tablespace information managed in tablespace (ie moved from data dictionary) improving tablespace reliability

Drop Column on table (Finally ) DBMS_DEBUG PLSQL package DBMS_SQL replaced by new EXECUTE

IMMEDIATE statement Progress Monitor to track long running DML DDL Functional Indexes ndash NLS case insensitive descending

Oracle 8.0 – June 1997

Object Relational database. Object Types (not just date, character, number as in v7; SQL3 standard). Call external procedures. More than one LOB per table. Partitioned tables and indexes; export/import individual partitions; partitions in multiple tablespaces; online/offline backup/recover individual partitions; merge/balance partitions. Advanced Queuing for message handling. Many performance improvements to SQL/PL/SQL/OCI making more efficient use of CPU/memory. V7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2). Parallel DML statements. Connection Pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users. Improved "STAR" query optimizer. Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM in v7). Performance improvements in OPS – global V$ views introduced across all instances, transparent failover to a new node. Data Cartridges introduced on the database (e.g. image, video, context, time, spatial). Backup/recovery improvements – tablespace point-in-time recovery, incremental backups, parallel backup/recovery; Recovery Manager introduced. Security Server introduced for central user administration. User password expiry, password profiles allow custom password schemes. Privileged database links (no need for a password to be stored). Fast refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel, Replication Manager introduced. Index Organized tables. Deferred integrity constraint checking (deferred until end of transaction instead of end of statement). SQL*Net replaced by Net8. Reverse key indexes. Any VIEW updateable. New ROWID format.

Oracle 7.3

Partitioned Views. Bitmapped Indexes. Asynchronous read ahead for table scans. Standby Database. Deferred transaction recovery on instance startup. Updatable Join Views (with restrictions). SQLDBA no longer shipped. Index rebuilds. db_verify introduced. Context Option. Spatial Data Option. Tablespace changes – coalesce, temporary, permanent. Trigger compilation, debug. Unlimited extents on the STORAGE clause. Some init.ora parameters modifiable – TIMED_STATISTICS. Hash joins, antijoins. Histograms. Dependencies. Oracle Trace. Advanced Replication object groups. PL/SQL – UTL_FILE.

Oracle 7.2

Resizable, autoextend data files. Shrink rollback segments manually. Create table/index UNRECOVERABLE. Subquery in FROM clause. PL/SQL wrapper. PL/SQL cursor variables. Checksums – DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM. Parallel create table. Job Queues – DBMS_JOB. DBMS_SPACE. DBMS Application Info. Sorting improvements – SORT_DIRECT_WRITES.

Oracle 7.1

ANSI/ISO SQL92 Entry Level. Advanced Replication – symmetric data replication. Snapshot refresh groups. Parallel recovery. Dynamic SQL – DBMS_SQL. Parallel Query options – query, index creation, data loading. Server Manager introduced. Read-only tablespaces.

Oracle 7.0 – June 1992

Database integrity constraints (primary/foreign keys, check constraints, default values). Stored procedures and functions, procedure packages. Database triggers. View compilation. User-defined SQL functions. Role-based security. Multiple redo members – mirrored online redo log files. Resource limits – profiles. Much enhanced auditing. Enhanced distributed database functionality – INSERTs, UPDATEs, DELETEs, 2PC. Incomplete database recovery (e.g. to an SCN). Cost based optimiser. TRUNCATE tables. Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR). SQL*Net v2, MTS. Checkpoint process. Data replication – snapshots.

Oracle 6.2

Oracle Parallel Server

Oracle 6 – July 1988

Row-level locking. On-line database backups. PL/SQL in the database.

Oracle 5.1

Distributed queries

Oracle 5.0 – 1986

Support for the Client-Server model – PCs can access the DB on a remote host.

Oracle 4 – 1984

Read consistency

Oracle 3 – 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions).

Non-blocking queries (no more read locks). Re-written in the C programming language.

Oracle 2 – 1979

First public release. Basic SQL functionality: queries and joins.

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh by Deepak – 1 Comment, December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh – before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a SQL file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;
2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant '||privilege||' to sh' from session_privs;
5. select 'grant '||role||' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH' group by tablespace_name;
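A minimal sketch of the spooling step described above (the spool file name sh_privs.sql is just an example), run while connected as the SH user:

SQL> set head off feedback off
SQL> spool sh_privs.sql
SQL> select 'grant '||privilege||' to sh;' from session_privs;
SQL> select 'grant '||role||' to sh;' from session_roles;
SQL> spool off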

Export the 'SH' schema:

exp username/password file='/location/sh_bkp.dmp' log='/location/sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file='/location/sh_bkp.dmp' log='/location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.
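One quick check after the import (not in the original steps, but a common sanity test) is to compare the object count with the source and look for invalid objects in the refreshed schema:

SQL> select count(*) from dba_objects where owner='SH';
SQL> select object_name, object_type from dba_objects where owner='SH' and status='INVALID';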

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema

Take the schema full export as shown above.

Drop all the objects in the 'SH' schema

To drop all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the spooled scripts; all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file='/location/sh_bkp.dmp' log='/location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable constraints use the query below

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema

To truncate all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the spooled scripts; all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the reference (foreign key) constraints and disable them. Spool the output of the query and run the resulting script.

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');

Importing the 'SH' schema:

imp username/password file='/location/sh_bkp.dmp' log='/location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the REMAP_SCHEMA option, as in the example below.
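A minimal sketch of such an import (the target schema name sh_new is hypothetical):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=sh:sh_new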

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING by Deepak – Leave a comment, December 15, 2009

CRON JOB SCHEDULING – IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d /etc/cron.daily /etc/cron.hourly /etc/cron.monthly /etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use the name)
6. The command to run

More graphically they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0-7)
|     |     |     +----------- Month (1-12)
|     |     +----------------- Day of month (1-31)
|     +----------------------- Hour (0-23)
+----------------------------- Min (0-59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null – meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly
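The same syntax is handy for database jobs; a hypothetical entry (the script path is an assumption, not from the original text) that runs an export shell script every night at 11 PM and discards its output:

0 23 * * * /home/oracle/scripts/daily_exp.sh >/dev/null 2>&1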

JOB SCHEDULING IN WINDOWS

Cold backup – scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registrydatabase

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule them the job won't run, so edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak – 1 Comment, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your test systems before working on Production.

2. Verify that the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following command on the Primary and Standby databases to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use Real-time apply.

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
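As a sanity check (not part of the original write-up), the new roles can be verified on each side with a simple query against v$database:

SQL> select name, database_role, switchover_status from v$database;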


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak – Leave a comment, December 14, 2009

Encryption with Oracle Data Pump

– from an Oracle white paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns, for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");
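Note that the wallet opened above stays open only for the life of the instance; after a database restart it has to be opened again before the encrypted column can be queried or exported. A minimal sketch, reusing the same wallet password as above:

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";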

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 – Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 – Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir
dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"  6.25 KB  3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
/ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********
directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error
at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir dumpfile=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 – Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release
11.1.0.7.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********
directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error
at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote
TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 – Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 -
Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir
dumpfile=emp.dmp tables=emp encryption_password=********
table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing
table but all dependent metadata will be skipped due to
table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being
skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or
target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
  empid,
  empname,
  salary ENCRYPT IDENTIFIED BY "column_pwd")
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dpump_dir
  LOCATION ('xemp.dmp')
)
REJECT LIMIT UNLIMITED
AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition, on both the source and target database, in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1 the ability to encrypt the entire export dump file set is introduced and with it several new encrypted-related parameters A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump but instead means that the entire export dump file set will be encrypted To encrypt only TDE columns using Oracle Data Pump 11g it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY So the 10g example previously shown becomes the following in 11g

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak – Leave a comment, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

1. > expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file, as in the sketch below.
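As an illustration (these particular filters are an assumed scenario, not from the original text), an export could keep only procedure and function definitions, while a later import of the full dump could skip grants:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=meta.dmp CONTENT=METADATA_ONLY INCLUDE=PROCEDURE INCLUDE=FUNCTION

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp EXCLUDE=GRANT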

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from the one it was originally exported from. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (a sketch of such a session follows the list):

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
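A rough sketch of such a session, assuming the export job named hr from the earlier JOB_NAME=hr example is still running: attach to it, check its status, stop it, and later restart it.

> expdp username/password ATTACH=hr
Export> STATUS
Export> STOP_JOB=IMMEDIATE

(later, reattach and resume)

> expdp username/password ATTACH=hr
Export> START_JOB
Export> CONTINUE_CLIENT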

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
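A minimal sketch of a network-mode export (the database link name remote_db and the schema are assumptions for illustration):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=remote_scott.dmp NETWORK_LINK=remote_db SCHEMAS=scott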

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak – Leave a comment, December 10, 2009

Clone database using RMAN

Target db: test

Clone db: clone

In the target database:

1. Take a full backup using RMAN

SQL> archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 – Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all;

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=17 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=1 stamp=674240935
input archive log thread=1 sequence=15 recid=2 stamp=674240997
input archive log thread=1 sequence=16 recid=3 stamp=674242208
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=17 recid=4 stamp=674242270
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE
Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='target db oradata path','clone db oradata path'

log_file_name_convert='target db oradata path','clone db oradata path'
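For instance, if the target datafiles lived under C:\oracle\oradata\test and the clone files were to go under C:\oracle\oradata\clone (paths assumed purely for illustration), the two parameters would read:

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'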

3. Start the listener using the lsnrctl command and then start up the clone DB in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (the TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'; (CLONE is the clone DB name)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

The scripts will be running…

SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs:

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace (a sketch follows the queries below).

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550
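A minimal sketch of step 10 (the tablespace name, file path, and size are assumptions; adjust them to the clone's layout):

SQL> create temporary tablespace temp01 tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 200M autoextend on;
SQL> alter database default temporary tablespace temp01;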


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak – Leave a comment, December 9, 2009

Oracle 10g – Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use the Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first before working on the Production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3 Configure a STANDBY Redo log1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files To find out the size of your online redo log filesSQLgt select bytes from v$log

BYTESmdashmdashmdash-524288005242880052428800

2) Use the following command to determine your current log file groupsSQLgt select group member from v$logfile

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, so I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set the PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database.
- On Windows:
SQL> create pfile='*\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '*'.)

- On UNIX:
SQL> create pfile='*/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '*'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='*\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='*\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup
(Note: specify your Oracle home path to replace '*'.)

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='*/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='*/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup
(Note: specify your Oracle home path to replace '*'.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over

3) Create directories (multiplexing) for the online logs, for example on Windows E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over

2. Create a control file for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all of the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations. Create the adump, bdump, cdump and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY server to the STANDBY control file destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory, then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names (a sample tnsnames.ora sketch follows below).

1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
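As a rough illustration only (the host names and port below are assumptions, not values from the original post), the tnsnames.ora entries on each server might look like this:

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )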

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.
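For example (the Oracle home paths below are assumptions):

- On UNIX:
$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STANDBY

- On Windows:
C:\> set ORACLE_HOME=E:\oracle\product\10.2.0\db_1
C:\> set ORACLE_SID=STANDBY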

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='*\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='*\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='*/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='*/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your Oracle home path to replace '*'.)

12. Start redo apply.
1) On the STANDBY database, start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify that the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify that the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY (a sample RMAN command is sketched below).
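As a rough sketch only (not the exact script used by the author), such a backup could be run from RMAN like this:

$ rman target /
RMAN> backup database plus archivelog delete input;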

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly, otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.

Features introduced in the various Oracle server releases

Oracle 9i

- The Oracle Standby DB feature is renamed to Oracle Data Guard. New Logical Standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.
- Scrollable cursor support: Oracle9i allows fetching backwards in a result set.
- Dynamic Memory Management: buffer pools and the shared pool can be resized on-the-fly. This eliminates the need to restart the database each time parameter changes were made.
- On-line table and index reorganization.
- VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net). VI provides fast communications between components in a cluster.
- Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.
- The Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.
- PL/SQL programs can be natively compiled to binaries.
- Deep data protection: fine grained security and auditing. Put security at the DB level; SQL access does not mean unrestricted access.
- Resumable backups and statements: suspend a statement instead of rolling back immediately.
- List Partitioning: partitioning on a list of values.
- ETL (eXtract, transformation, load) operations with external tables and pipelining.
- OLAP: Express functionality included in the DB.
- Data Mining: Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)

- Static HTTP server included (Apache).
- JVM Accelerator to improve performance of Java code.
- Java Server Pages (JSP) engine.
- MemStat: a new utility for analyzing Java memory footprints.
- OIS: Oracle Integration Server introduced.
- PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web.
- Enterprise Manager enhancements, including new HTML based reporting and Advanced Replication functionality.
- New Database Character Set Migration utility included.

Oracle 8i (8.1.6)

- PL/SQL Server Pages (PSPs).
- DBA Studio introduced.
- Statspack.
- New SQL functions (rank, moving average).
- ALTER FREELISTS command (previously done by DROP/CREATE TABLE).
- Checksums always on for the SYSTEM tablespace, allowing many possible corruptions to be fixed before writing to disk.
- XML Parser for Java.
- New PL/SQL encrypt/decrypt package introduced.
- Users and Schemas separated.
- Numerous performance enhancements.

Oracle 8i (8.1.5)

- Fast Start recovery: checkpoint rate auto-adjusted to meet roll forward criteria.
- Reorganize indexes/index-only tables while users are accessing data: online index rebuilds.
- Log Miner introduced: allows on-line or archived redo logs to be viewed via SQL.
- OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication.
- Advanced Queueing improvements (security, performance, OO4O support).
- User security improvements: more centralisation, single enterprise user, users/roles across multiple databases.
- Virtual private database.
- Java stored procedures (Oracle Java VM).
- Oracle iFS.
- Resource Management using priorities: resource classes.
- Hash and Composite partitioned table types.
- SQL*Loader direct load API.
- Copy optimizer statistics across databases to ensure the same access paths across different environments.
- Standby Database: auto shipping and application of redo logs; read-only queries on the standby database allowed.
- Enterprise Manager v2 delivered.
- NLS: Euro symbol supported.
- Analyze tables in parallel.
- Temporary tables supported.
- Net8 support for SSL, HTTP, HOP protocols.
- Transportable tablespaces between databases.
- Locally managed tablespaces: automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability.
- Drop Column on a table (finally!).
- DBMS_DEBUG PL/SQL package.
- DBMS_SQL replaced by the new EXECUTE IMMEDIATE statement.
- Progress Monitor to track long running DML and DDL.
- Functional Indexes: NLS, case insensitive, descending.

Oracle 8.0 - June 1997

- Object Relational database.
- Object Types (not just date, character, number as in v7), SQL3 standard.
- Call external procedures.
- LOBs: >1 per table.
- Partitioned tables and indexes; export/import individual partitions; partitions in multiple tablespaces; online/offline, backup/recover individual partitions; merge/balance partitions.
- Advanced Queuing for message handling.
- Many performance improvements to SQL/PLSQL/OCI making more efficient use of CPU/memory. V7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2).
- Parallel DML statements.
- Connection Pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users.
- Improved "STAR" query optimizer.
- Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM in v7).
- Performance improvements in OPS: global V$ views introduced across all instances, transparent failover to a new node.
- Data Cartridges introduced on the database (e.g. image, video, context, time, spatial).
- Backup/recovery improvements: tablespace point-in-time recovery, incremental backups, parallel backup/recovery. Recovery Manager introduced.
- Security Server introduced for central user administration. User password expiry, password profiles, allow custom password scheme. Privileged database links (no need for a password to be stored).
- Fast refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel. Replication Manager introduced.
- Index Organized tables.
- Deferred integrity constraint checking (deferred until end of transaction instead of end of statement).
- SQL*Net replaced by Net8.
- Reverse key indexes.
- Any VIEW updateable.
- New ROWID format.

Oracle 7.3

- Partitioned views.
- Bitmapped indexes.
- Asynchronous read ahead for table scans.
- Standby Database.
- Deferred transaction recovery on instance startup.
- Updatable join views (with restrictions).
- SQL*DBA no longer shipped.
- Index rebuilds.
- db_verify introduced.
- Context Option.
- Spatial Data Option.
- Tablespace changes: coalesce, temporary/permanent.
- Trigger compilation, debug.
- Unlimited extents on the STORAGE clause.
- Some init.ora parameters modifiable, e.g. TIMED_STATISTICS.
- HASH joins, antijoins.
- Histograms.
- Dependencies.
- Oracle Trace.
- Advanced Replication Object Groups.
- PL/SQL: UTL_FILE.

Oracle 7.2

- Resizable, autoextend data files.
- Shrink rollback segments manually.
- Create table, index UNRECOVERABLE.
- Subquery in FROM clause.
- PL/SQL wrapper.
- PL/SQL cursor variables.
- Checksums: DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM.
- Parallel create table.
- Job queues: DBMS_JOB.
- DBMS_SPACE.
- DBMS Application Info.
- Sorting improvements: SORT_DIRECT_WRITES.

Oracle 7.1

- ANSI/ISO SQL92 Entry Level.
- Advanced Replication: symmetric data replication.
- Snapshot refresh groups.
- Parallel recovery.
- Dynamic SQL: DBMS_SQL.
- Parallel Query options: query, index creation, data loading.
- Server Manager introduced.
- Read-only tablespaces.

Oracle 7.0 - June 1992

- Database integrity constraints (primary and foreign keys, check constraints, default values).
- Stored procedures and functions, procedure packages.
- Database triggers.
- View compilation.
- User defined SQL functions.
- Role based security.
- Multiple redo members: mirrored online redo log files.
- Resource limits: profiles.
- Much enhanced auditing.
- Enhanced distributed database functionality: INSERTS, UPDATES, DELETES, 2PC.
- Incomplete database recovery (e.g. to an SCN).
- Cost based optimiser.
- TRUNCATE tables.
- Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR).
- SQL*Net v2, MTS.
- Checkpoint process.
- Data replication: snapshots.

Oracle 6.2

- Oracle Parallel Server.

Oracle 6 - July 1988

- Row-level locking.
- On-line database backups.
- PL/SQL in the database.

Oracle 5.1

- Distributed queries.

Oracle 5.0 - 1986

- Support for the Client-Server model: PCs can access the DB on a remote host.

Oracle 4 - 1984

- Read consistency.

Oracle 3 - 1981

- Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions).
- Nonblocking queries (no more read locks).
- Re-written in the C programming language.

Oracle 2 - 1979

- First public release.
- Basic SQL functionality, queries and joins.

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh by Deepak, December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file (a minimal spool sketch is shown after the numbered list below).

1. SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;
2. Verify the total number of objects from the above query.
3. Write dynamic queries as below.
4. select 'grant ' || privilege || ' to sh' from session_privs;
5. select 'grant ' || role || ' to sh' from session_roles;
6. Query the default tablespace and its size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH' group by tablespace_name;
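A minimal sketch of spooling those grants to a script, assuming you are connected as the user being refreshed (the file name is arbitrary); note that a trailing ';' is concatenated so the spooled file can be run directly:

SQL> set head off feedback off
SQL> spool sh_grants.sql
SQL> select 'grant ' || privilege || ' to sh;' from session_privs;
SQL> select 'grant ' || role || ' to sh;' from session_roles;
SQL> spool off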

Export the 'SH' schema:

exp username/password file=/location/sh_bkp.dmp log=/location/sh_exp.log owner=SH direct=y

Steps to drop and recreate the schema

Drop the SH schema

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema

Take the full schema export as shown above.

Drop all the objects in the 'SH' schema

To drop all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema

To truncate all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the referencing (foreign key) constraints and disable them. Spool the output of the query and run the script (a sketch of generating the DISABLE statements follows the query).

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');
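A minimal sketch of generating and spooling the DISABLE statements (the spool file name is arbitrary, and 'TABLE_NAME' is the placeholder used in the query above):

SQL> set head off feedback off
SQL> spool disable_fk.sql
SQL> select 'alter table '||table_name||' disable constraint '||constraint_name||';'
     from all_constraints
     where constraint_type='R'
     and r_constraint_name in (select constraint_name from all_constraints where table_name='TABLE_NAME');
SQL> spool off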

Importing the 'SH' schema:

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH')

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20)

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace and verify the space in the tablespace, then drop the user:

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option.

Check the imported objects and compile any invalid objects.


JOB SCHEDULING

Filed under: JOB SCHEDULING by Deepak, December 15, 2009

CRON JOB SCHEDULING - IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one which is special these directories allow scheduling of system-wide jobs in a coarse manner Any script which is executable and placed inside them will run at the frequency which its name suggests

For example if you place a script inside etccrondaily it will be executed once per day every day

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file, you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically, they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0-7)
|     |     |     +------- Month (1 - 12)
|     |     +--------- Day of month (1 - 31)
|     +----------- Hour (0 - 23)
+------------- Min (0 - 59)

(Each of the first five fields contains only numbers, however they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers, you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched; if you use the following line, you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp\coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a Scheduled Task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule, the job won't run, so edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your test systems before working on Production.

2. Verify that the primary database instance is open and the standby database instance is mounted.

3. Verify that there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the PRIMARY database was applied on the STANDBY database. Issue the following command on both the PRIMARY and STANDBY databases to find out:
SQL> select sequence#, applied from v$archived_log;
Perform SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby DB STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN has now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak, December 14, 2009

Encryption with Oracle Data Pump

- from an Oracle white paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows:

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows:

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU intensive operations. Furthermore, additional disk I/O is incurred due to the space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2, the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.
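As a quick aside (not part of the whitepaper's example), you can check whether the wallet is open and re-open it as follows; the wallet password here is the one set in the earlier ALTER SYSTEM example:

SQL> select wrl_type, status from v$encryption_wallet;
SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";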

Although the ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows.

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

- A full metadata-only import
- A schema-mode import in which the referenced schemas do not include tables with encrypted columns
- A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified. This is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account

Original Import:
> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:
> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file (a small sketch follows below).
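As a small illustrative sketch (the schema name and dump file names below are assumptions, not from the original text), one export that keeps only tables and indexes and another that filters out triggers might look like this:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_tabs.dmp SCHEMAS=hr INCLUDE=TABLE INCLUDE=INDEX

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_notrig.dmp SCHEMAS=hr EXCLUDE=TRIGGER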

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job which is much more efficient that the client-side execution of original Export and Import Now Data Pump operations can take advantage of the serverrsquos parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
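A minimal hedged sketch of doing this (the job name hr_export is illustrative): attach to the running job in interactive mode and issue the PARALLEL command:

> expdp username/password ATTACH=hr_export
Export> PARALLEL=8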

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from the one it was originally exported from. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (see the sketch after this list):

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
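A brief hedged sketch of such a session (the job name and prompts are illustrative): attach to a running export job, check its status, stop it, and later reattach and resume it:

> expdp username/password ATTACH=sys_export_schema_01
Export> STATUS
Export> STOP_JOB=IMMEDIATE

(Later, when resources are available, reattach and restart the job.)

> expdp username/password ATTACH=sys_export_schema_01
Export> START_JOB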

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful when you are moving data between databases, such as from data marts to a data warehouse, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
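A minimal hedged sketch (the database link name remote_db is hypothetical): assuming a database link to the source instance already exists, a network-mode export can be run as:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=net_exp.dmp NETWORK_LINK=remote_db TABLES=scott.emp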

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak – December 10, 2009

Clone database using RMAN

Target db: test

Clone db: clone

In target database

1. Take a full backup using RMAN.

SQL> archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 – Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are:

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete, elapsed time: 00:00:03

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete, elapsed time: 00:00:56

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete, elapsed time: 00:00:02

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.
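A minimal hedged sketch of this step on Windows (the SID, password, and paths are illustrative assumptions, not from the original text):

C:\> oradim -NEW -SID clone -STARTMODE manual
C:\> orapwd file=C:\oracle\ora92\database\PWDclone.ora password=sys

Then add a tnsnames.ora entry and a static listener.ora entry for the clone instance so RMAN can connect to it while it is still in NOMOUNT state.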

2. Edit the pfile and add the following parameters:

db_file_name_convert=('<target db oradata path>','<clone db oradata path>')

log_file_name_convert=('<target db oradata path>','<clone db oradata path>')
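As a concrete, hedged illustration (the drive letter and directory names are assumptions based on the paths used elsewhere in this walkthrough):

db_file_name_convert=('E:\oracle\oradata\test','E:\oracle\oradata\clone')
log_file_name_convert=('E:\oracle\oradata\test','E:\oracle\oradata\clone')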

3. Start the listener using the lsnrctl command, then start up the clone database in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (the target DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'; (the clone DB name)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running…

SQL> select name from v$database;

select name from v$database

ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;

alter database mount

ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while. Then exit from RMAN and open the database using RESETLOGS:

SQL> alter database open resetlogs;

Database altered

9. Check the DBID.

10. Create a temporary tablespace.

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


Step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak – December 9, 2009

Oracle 10g – Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips for the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection, and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I. Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create the pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;
- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path for <ORACLE_HOME>.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create the spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
(Restart the PRIMARY database using the newly created SPFILE.)
SQL> shutdown immediate
SQL> startup

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
(Restart the PRIMARY database using the newly created SPFILE.)
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path for <ORACLE_HOME>.)

III. On the STANDBY Database Side

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump, udump, and archived log destination directories for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY server to the STANDBY control file destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database:

Set up ORACLE_HOME and ORACLE_SID.

11. Start up (nomount) the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
(Restart the STANDBY database using the newly created SPFILE.)
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
(Restart the STANDBY database using the newly created SPFILE.)
SQL> shutdown immediate
SQL> startup mount
(Note: specify your Oracle home path for <ORACLE_HOME>.)

12. Start redo apply.
1) On the STANDBY database, start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.

Features introduced in the various Oracle server releases

XML Parser for Java. New PL/SQL encrypt/decrypt package introduced. Users and schemas separated. Numerous performance enhancements.

Oracle 8i (8.1.5)

Fast Start recovery – checkpoint rate auto-adjusted to meet roll-forward criteria. Reorganize indexes/index-only tables while users are accessing data – online index rebuilds. LogMiner introduced – allows online or archived redo logs to be viewed via SQL. OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication. Advanced Queuing improvements (security, performance, OO4O support). User security improvements – more centralisation, single enterprise user, users/roles across multiple databases. Virtual private database. Java stored procedures (Oracle Java VM). Oracle iFS. Resource management using priorities – resource classes. Hash and composite partitioned table types. SQL*Loader direct load API. Copy optimizer statistics across databases to ensure the same access paths across different environments. Standby database – auto shipping and application of redo logs; read-only queries on the standby database allowed. Enterprise Manager v2 delivered. NLS – Euro symbol supported. Analyze tables in parallel. Temporary tables supported. Net8 support for SSL, HTTP, HOP protocols. Transportable tablespaces between databases. Locally managed tablespaces – automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability. Drop column on table (finally!). DBMS_DEBUG PL/SQL package. DBMS_SQL replaced by the new EXECUTE IMMEDIATE statement. Progress monitor to track long-running DML/DDL. Functional indexes – NLS case-insensitive, descending.

Oracle 8.0 – June 1997

Object relational database. Object types (not just date, character, number as in v7; SQL3 standard). Call external procedures. LOBs – more than one per table. Partitioned tables and indexes; export/import individual partitions; partitions in multiple tablespaces; online/offline backup/recover of individual partitions; merge/balance partitions. Advanced Queuing for message handling. Many performance improvements to SQL/PLSQL/OCI making more efficient use of CPU/memory. V7 limits extended (e.g. 1000 columns per table, 4000-byte VARCHAR2). Parallel DML statements. Connection pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users. Improved "STAR" query optimizer. Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM in v7). Performance improvements in OPS – global V$ views introduced across all instances, transparent failover to a new node. Data cartridges introduced on the database (e.g. image, video, context, time, spatial). Backup/recovery improvements – tablespace point-in-time recovery, incremental backups, parallel backup/recovery; Recovery Manager introduced. Security Server introduced for central user administration; user password expiry, password profiles allow a custom password scheme; privileged database links (no need for the password to be stored). Fast refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel, Replication Manager introduced. Index-organized tables. Deferred integrity constraint checking (deferred until end of transaction instead of end of statement). SQL*Net replaced by Net8. Reverse key indexes. Any VIEW updateable. New ROWID format.

Oracle 7.3

Partitioned views. Bitmapped indexes. Asynchronous read-ahead for table scans. Standby database. Deferred transaction recovery on instance startup. Updatable join views (with restrictions). SQL*DBA no longer shipped. Index rebuilds. db_verify introduced. Context option. Spatial data option. Tablespace changes – coalesce, temporary, permanent. Trigger compilation and debug. Unlimited extents on the STORAGE clause. Some init.ora parameters modifiable – e.g. TIMED_STATISTICS. Hash joins, antijoins. Histograms. Dependencies. Oracle Trace. Advanced replication object groups. PL/SQL – UTL_FILE.

Oracle 7.2

Resizable, autoextend data files. Shrink rollback segments manually. Create table/index UNRECOVERABLE. Subquery in FROM clause. PL/SQL wrapper. PL/SQL cursor variables. Checksums – DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM. Parallel create table. Job queues – DBMS_JOB. DBMS_SPACE. DBMS Application Info. Sorting improvements – SORT_DIRECT_WRITES.

Oracle 7.1

ANSI/ISO SQL92 entry level. Advanced replication – symmetric data replication. Snapshot refresh groups. Parallel recovery. Dynamic SQL – DBMS_SQL. Parallel query options – query, index creation, data loading. Server Manager introduced. Read-only tablespaces.

Oracle 7.0 – June 1992

Database integrity constraints (primary/foreign keys, check constraints, default values). Stored procedures and functions, procedure packages. Database triggers. View compilation. User-defined SQL functions. Role-based security. Multiple redo members – mirrored online redo log files. Resource limits – profiles. Much enhanced auditing. Enhanced distributed database functionality – INSERTs, UPDATEs, DELETEs, two-phase commit (2PC). Incomplete database recovery (e.g. to an SCN). Cost-based optimizer. TRUNCATE tables. Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR). SQL*Net v2, MTS. Checkpoint process. Data replication – snapshots.

Oracle 6.2

Oracle Parallel Server

Oracle 6 – July 1988

Row-level locking. On-line database backups. PL/SQL in the database.

Oracle 5.1

Distributed queries

Oracle 5.0 – 1986

Support for the client-server model – PCs can access the DB on a remote host.

Oracle 4 – 1984

Read consistency

Oracle 3 – 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)

Non-blocking queries (no more read locks). Re-written in the C programming language.

Oracle 2 – 1979

First public release. Basic SQL functionality: queries and joins.

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh by Deepak – December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh – before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a SQL file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege || ' to sh' from session_privs;
5. select 'grant ' || role || ' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH' group by tablespace_name;

Export the SH schema:

exp username/password file=<location>\sh_bkp.dmp log=<location>\sh_exp.log owner=SH direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles, and privileges.
4. Then start importing.

Importing the SH schema:

imp username/password file=<location>\sh_bkp.dmp log=<location>\sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects and truncating objects

Export the SH schema

Take a full export of the schema as shown above.

Drop all the objects in the SH schema

To drop all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off

SQL> spool drop_tables.sql

SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;

SQL> spool off

SQL> set head off

SQL> spool drop_other_objects.sql

SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;

SQL> spool off

Now run the scripts and all the objects will be dropped.

Importing the SH schema:

imp username/password file=<location>\sh_bkp.dmp log=<location>\sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the SH schema

To truncate all the objects in the schema:

Connect to the schema.

Spool the output:

SQL> set head off

SQL> spool truncate_tables.sql

SQL> select 'truncate table '||table_name||';' from user_tables;

SQL> spool off

SQL> set head off

SQL> spool truncate_other_objects.sql

SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;

SQL> spool off

Now run the scripts and all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the reference (foreign key) constraints and disable them. Spool the output of the query below and run the script.

select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');
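A small hedged sketch of the follow-on step (the spool file name is illustrative; the table and constraint names come from the query above): generate and spool the ALTER TABLE statements that disable those foreign keys, then run the spooled script:

SQL> spool disable_fk.sql
SQL> select 'alter table '||table_name||' disable constraint '||constraint_name||';'
     from all_constraints
     where constraint_type='R'
     and r_constraint_name in (select constraint_name from all_constraints where table_name='TABLE_NAME');
SQL> spool off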

Importing the SH schema:

imp username/password file=<location>\sh_bkp.dmp log=<location>\sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the SH user:

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the REMAP_SCHEMA option, as in the sketch below.
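A minimal hedged illustration (the target schema name SH_TEST is an example): to load the SH export into a different schema you could run:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=sh:sh_test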

Check the imported objects and compile any invalid objects.


JOB SCHEDULING

Filed under: JOB SCHEDULING by Deepak – December 15, 2009

CRON JOB SCHEDULING – IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day, as in the small example below.
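A minimal hedged sketch (the script name and its contents are illustrative): an executable file dropped into /etc/cron.daily might look like this:

#!/bin/sh
# /etc/cron.daily/cleanup-tmp -- remove files older than 7 days from /tmp
find /tmp -type f -mtime +7 -delete

Remember to make it executable (chmod +x), otherwise cron will skip it.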

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file, you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use the name)
6. The command to run

More graphically, they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0 - 7)
|     |     |     +----------- Month (1 - 12)
|     |     +----------------- Day of month (1 - 31)
|     +----------------------- Hour (0 - 23)
+----------------------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email. If you wish to stop this, you should cause it to be redirected, as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup – scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule them, the job won't run, so edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak – December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM

Standby = STAN

I. Before Switchover

1 As I always recommend test the Switchover first on your testing systems before working on Production

2 Verify the primary database instance is open and the standby database instance is mounted

3 Verify there are no active users connected to the databases

4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following command on the Primary and Standby databases to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply (see the note below).
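As a reminder, this is the same command shown in the standby-creation post above; real-time apply is started on the standby with:

SQL> alter database recover managed standby database using current logfile disconnect;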

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak – December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and which provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2, the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in a later example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 – Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table-mode exports, as used in the previous examples. If a schema-, tablespace-, or full-mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 – Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp directory=dpump_dir dumpfile=emp tables=emp

Estimate in progress using BLOCKS method...

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 16 KB

Processing object type TABLE_EXPORT/TABLE/TABLE

. . exported "DP"."EMP"    6.25 KB    3 rows

ORA-39173: Encrypted data has been stored unencrypted in dump file set

Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded

Dump file set for DP.SYS_EXPORT_TABLE_01 is:
/ade/jkaloger_lx9/oracle/work/emp.dmp

Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained

Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 – Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported

Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 – Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded

Starting "DP"."SYS_IMPORT_TABLE_01": dp directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password= table_exists_action=append

Processing object type TABLE_EXPORT/TABLE/TABLE

ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table

Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption
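As a rough illustration only (the parameter values below are not from the whitepaper and must be adapted to your site), network data encryption is normally switched on through sqlnet.ora settings such as:

# sqlnet.ora on the server side
SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)

# sqlnet.ora on the client side
SQLNET.ENCRYPTION_CLIENT = requested
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)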

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
  empid,
  empname,
  salary ENCRYPT IDENTIFIED BY "column_pwd")
  ORGANIZATION EXTERNAL
  (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY dpump_dir
    LOCATION ('xemp.dmp')
  )
  REJECT LIMIT UNLIMITED
  AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD parameter is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
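For comparison, and as a hedged sketch only (not taken from the whitepaper), encrypting the entire dump file set in 11g rather than just the TDE columns would use the ENCRYPTION=ALL keyword, for example:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION=ALL ENCRYPTION_PASSWORD=dump_pwd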


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak – December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to

Data Pump

Data Pump commands have a similar look and feel to the original Export and Import

commands but are different Below are a few examples that demonstrate some of these

differences

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job You cannot mix the two parameters in one job

Both parameters work with Data Pump Import as well and you can use different INCLUDE and

EXCLUDE options for different operations on the same dump file
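As an illustration (the object and schema names here are made up, not from the original article), a pair of filtered operations against the same dump file might look like this; note that filters containing quotes are often easier to place in a parameter file:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr INCLUDE=TABLE:"IN ('EMPLOYEES','DEPARTMENTS')"

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp EXCLUDE=INDEX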

3) Tuning Parameters

Unlike original Export and Import which used the BUFFER COMMIT COMPRESS

CONSISTENT DIRECT and RECORDLENGTH parameters Data Pump needs no tuning to achieve maximum performance Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner Initialization parameters should be sufficient upon installation

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can also take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode, as sketched below. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
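A hedged sketch of such an on-the-fly change (the job name hr matches the JOB_NAME used in the export example further below; the new degree is arbitrary):

> expdp username/password ATTACH=hr
Export> PARALLEL=8
Export> CONTINUE_CLIENT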

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory and I/O.
• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.
• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.
• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode Because Data Pump jobs run entirely on the server you can start an export or import job detach from it and later reconnect to the job to monitor its progress Here are some of the things you can do while in this mode

• See the status of the job. All of the information needed to monitor the job's execution is available.
• Add more dump files if there is insufficient disk space for an export file.
• Change the default size of the dump files.
• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).
• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.
• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).
• Attach to a job from a remote site (such as from home) to monitor status.
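For example (the job name below is whatever Data Pump reported when the job started; SYS_IMPORT_FULL_01 is simply a common default), you can re-attach to a running or stopped job and control it from the interactive prompt:

> impdp username/password ATTACH=SYS_IMPORT_FULL_01
Import> STATUS
Import> STOP_JOB=IMMEDIATE
(later)
> impdp username/password ATTACH=SYS_IMPORT_FULL_01
Import> START_JOB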

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, for example from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
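A minimal sketch of a network mode import, assuming a database link named remote_db already exists on the target instance (all names here are illustrative):

SQL> CREATE DATABASE LINK remote_db CONNECT TO scott IDENTIFIED BY tiger USING 'remote_tns';

> impdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=remote_db SCHEMAS=scott REMAP_SCHEMA=scott:jim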

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak – December 10, 2009

Clone database using Rman

Target db: TEST
Clone db: CLONE

In the target database:

1. Take a full backup using RMAN:

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all;

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001 ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 ORACLE\ORADATA\TEST\TOOLS01.DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters (a concrete example is sketched just below):

db_file_name_convert='target db oradata path','clone db oradata path'
log_file_name_convert='target db oradata path','clone db oradata path'
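For instance, if the TEST datafiles live under E:\oracle\oradata\test and the clone files will live under E:\oracle\oradata\clone (these paths are purely illustrative; use your own locations), the two parameters would look like this:

db_file_name_convert=('E:\oracle\oradata\test','E:\oracle\oradata\clone')
log_file_name_convert=('E:\oracle\oradata\test','E:\oracle\oradata\clone')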

3. Start the listener using the lsnrctl command, and then start up the clone DB in NOMOUNT using the pfile:

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running…

SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; when it finishes, exit from RMAN and open the database using resetlogs:

SQL> alter database open resetlogs;

Database altered

9. Check the DBID.

10. Create a temporary tablespace.

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


step by step standby database configuration in 10g

Filed under: Dataguard – creation of standby database in 10g by Deepak – December 9, 2009

Oracle 10g – Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use a Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY.
If your PRIMARY database is not already in archive log mode, enable archive log mode:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY Database Initialization Parameters.
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='...\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path in place of '...')

- On UNIX:
SQL> create pfile='.../dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path in place of '...')

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE, so create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='...\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='...\database\pfilePRIMARY.ora';
– Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path in place of '...')

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='.../dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='.../dbs/pfilePRIMARY.ora';
– Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path in place of '...')

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over

2. Create a control file for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4. On the STANDBY server, create all required directories for the dump and archived log destinations.
Create the adump, bdump, cdump and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora.
On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='...\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='...\database\pfileSTANDBY.ora';
– Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='.../dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='.../dbs/pfileSTANDBY.ora';
– Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup mount
(Note: specify your Oracle home path in place of '...')

12. Start Redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY; a rough sketch of such a job is shown below.
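A minimal sketch of that kind of weekly job (the channel, format and retention settings are site-specific and not taken from the original article):

$ rman target /
RMAN> backup database plus archivelog delete input;
RMAN> delete noprompt obsolete;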

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management.
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2 step 2 to update/recreate the password file for the STANDBY database.


• Partitioned Tables and Indexes
• export/import individual partitions
• partitions in multiple tablespaces
• Online/offline backup/recover individual partitions
• merge/balance partitions
• Advanced Queuing for message handling
• Many performance improvements to SQL/PLSQL/OCI making more efficient use of CPU/Memory
• V7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2)
• Parallel DML statements
• Connection Pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users
• Improved "STAR" Query optimizer
• Integrated Distributed Lock Manager in Oracle PS (as opposed to Operating system DLM in v7)
• Performance improvements in OPS – global V$ views introduced across all instances, transparent failover to a new node
• Data Cartridges introduced on database (e.g. image, video, context, time, spatial)
• Backup/Recovery improvements – Tablespace point in time recovery, incremental backups, parallel backup/recovery
• Recovery manager introduced
• Security Server introduced for central user administration
• User password expiry, password profiles, allow custom password scheme
• Privileged database links (no need for password to be stored)
• Fast Refresh for complex snapshots, parallel replication
• PL/SQL replication code moved in to Oracle kernel
• Replication manager introduced
• Index Organized tables
• Deferred integrity constraint checking (deferred until end of transaction instead of end of statement)
• SQL*Net replaced by Net8
• Reverse Key indexes
• Any VIEW updateable
• New ROWID format

Oracle 7.3

• Partitioned Views
• Bitmapped Indexes
• Asynchronous read ahead for table scans
• Standby Database
• Deferred transaction recovery on instance startup
• Updatable Join Views (with restrictions)
• SQL*DBA no longer shipped
• Index rebuilds
• db_verify introduced
• Context Option
• Spatial Data Option
• Tablespace changes – coalesce, temporary, permanent
• Trigger compilation, debug
• Unlimited extents on STORAGE clause
• Some init.ora parameters modifiable – TIMED_STATISTICS
• HASH Joins, Antijoins
• Histograms
• Dependencies
• Oracle Trace
• Advanced Replication Object Groups
• PL/SQL – UTL_FILE

Oracle 7.2

• Resizable, autoextend data files
• Shrink Rollback Segments manually
• Create table, index UNRECOVERABLE
• Subquery in FROM clause
• PL/SQL wrapper
• PL/SQL Cursor variables
• Checksums – DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM
• Parallel create table
• Job Queues – DBMS_JOB
• DBMS_SPACE
• DBMS Application Info
• Sorting Improvements – SORT_DIRECT_WRITES

Oracle 7.1

• ANSI/ISO SQL92 Entry Level
• Advanced Replication – Symmetric Data replication
• Snapshot Refresh Groups
• Parallel Recovery
• Dynamic SQL – DBMS_SQL
• Parallel Query Options – query, index creation, data loading
• Server Manager introduced
• Read Only tablespaces

Oracle 7.0 – June 1992

• Database Integrity Constraints (primary, foreign keys, check constraints, default values)
• Stored procedures and functions, procedure packages
• Database Triggers
• View compilation
• User defined SQL functions
• Role based security
• Multiple Redo members – mirrored online redo log files
• Resource Limits – Profiles
• Much enhanced Auditing
• Enhanced Distributed database functionality – INSERTS, UPDATES, DELETES, 2PC
• Incomplete database recovery (e.g. SCN)
• Cost based optimiser
• TRUNCATE tables
• Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR)
• SQL*Net v2, MTS
• Checkpoint process
• Data replication – Snapshots

Oracle 6.2

• Oracle Parallel Server

Oracle 6 – July 1988

• Row-level locking
• On-line database backups
• PL/SQL in the database

Oracle 5.1

• Distributed queries

Oracle 5.0 – 1986

• Support for the Client-Server model – PCs can access the DB on a remote host

Oracle 4 – 1984

• Read consistency

Oracle 3 – 1981

• Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)
• Nonblocking queries (no more read locks)
• Re-written in the C Programming Language

Oracle 2 – 1979

• First public release
• Basic SQL functionality, queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Referesh

Filed under: Schema refresh by Deepak – December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh – before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a SQL file (a worked example follows this list).

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;
2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant '||privilege||' to SH' from session_privs;
5. select 'grant '||role||' to SH' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes/1024/1024) from dba_segments where owner='SH' group by tablespace_name;
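A hedged example of capturing those grants into a spool file before the export (run it while connected as the schema owner; the file name is arbitrary):

SQL> set head off pages 0 feed off
SQL> spool sh_grants.sql
SQL> select 'grant '||privilege||' to SH;' from session_privs;
SQL> select 'grant '||role||' to SH;' from session_roles;
SQL> spool off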

Export the 'SH' schema:

exp username/password file='location/sh_bkp.dmp' log='location/sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

Schema refresh by dropping objects and truncating objects

Export the lsquoshrsquo schema

Take the schema full export as show above

Drop all the objects in lsquoSHrsquo schema

To drop the all the objects in the Schema

Connect the schema

Spool the output

SQL> set head off

SQL> spool drop_tables.sql

SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;

SQL> spool off

SQL> set head off

SQL> spool drop_other_objects.sql

SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;

SQL> spool off

Now run the script all the objects will be dropped

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in lsquoSHrsquo schema

To truncate the all the objects in the Schema

Connect the schema

Spool the output

SQL> set head off

SQL> spool truncate_tables.sql

SQL> select 'truncate table '||table_name||';' from user_tables;

SQL> spool off

SQL> set head off

SQL> spool truncate_other_objects.sql

SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;

SQL> spool off

Now run the script all the objects will be truncated

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the referential (foreign key) constraints and disable them. Spool the output of the query and run the script (see the sketch after the query).

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');
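One hedged way to turn that lookup into an executable script (the spool file name is arbitrary, and 'TABLE_NAME' remains a placeholder for the table you are truncating):

SQL> set head off pages 0 feed off
SQL> spool disable_fks.sql
SQL> select 'alter table '||table_name||' disable constraint '||constraint_name||';'
     from all_constraints
     where constraint_type='R'
     and r_constraint_name in (select constraint_name from all_constraints
                               where table_name='TABLE_NAME');
SQL> spool off
SQL> @disable_fks.sql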

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect the SH user and check for the import data

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the REMAP_SCHEMA option (an example follows).
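For instance, to load the same dump into a schema called SH_TEST (the target schema name here is only an example):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh remap_schema=sh:sh_test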

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING by Deepak – December 15, 2009

CRON JOB SCHEDULING – IN UNIX

To run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner which people use cron is via the crontab command This allows you to view or edit your crontab file which is a per-user file containing entries describing commands to execute and the time(s) to execute them

To display your file you run the following command

crontab -l

root can view any users crontab file by adding ldquo-u usernameldquo for example

crontab -u skx -l List skxs crontab file

The format of these files is fairly simple to understand Each line is a collection of six fields separated by spaces

The fields are

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically, they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0-7)
|     |     |     +----------- Month (1-12)
|     |     +----------------- Day of month (1-31)
|     +----------------------- Hour (0-23)
+----------------------------- Min (0-59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command /bin/ls every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null – meaning you won't see it.

Now we'll finish with some more examples.

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched; if you use the following line you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup – scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on "Add a scheduled task".
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run. So edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak – December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I Before Switchover

1 As I always recommend test the Switchover first on your testing systems before working on Production

2 Verify the primary database instance is open and the standby database instance is mounted

3 Verify there are no active users connected to the databases

4. Make sure the last redo data transmitted from the PRIMARY database was applied on the STANDBY database. Issue the following command on both the PRIMARY and STANDBY databases to find out:
SQL> select sequence#, applied from v$archived_log;
Perform SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received use Real-time apply

II Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN has now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak – December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in todayrsquos business world present manifold challenges As incidences of data theft increase protecting data privacy continues to be of paramount importance Now a de facto solution in meeting regulatory compliances data encryption is one of a number of security tools in use The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works Please note that this paper does not apply to the Original ExportImport utilities For information regarding the Oracle Data Pump Encrypted Dump File feature that that was released with Oracle Database 11g release 1 and that provides the ability to encrypt all exported data as it is written to the export dump file set refer to the Oracle Data Pump Encrypted Dump File Support whitepaper

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT key word Once a table column is marked with this keyword encryption and decryption are performed automatically without the need for any further user or application intervention The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column When an authorized user inserts new data into such a column TDE column encryption encrypts this data prior to storing it in the database Conversely when the user selects the column from the database TDE column encryption transparently decrypts this data back to its original clear text

format Column data encrypted using TDE remains protected while it resides in the database However the protection offered by TDE does not extend beyond the database and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it using a dump file encryption key derived from a userprovided password before it is written to the export dump file set Column data encrypted using Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set Whenever Oracle Data Pump unloads or loads tables containing encrypted columns it uses the external tables mechanism instead of the direct path mechanism The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> CREATE TABLE DP.EMP
     (empid   NUMBER(6),
      empname VARCHAR2(100),
      salary  NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");
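
As a quick sanity check (my own addition, not part of the original white paper), the data dictionary can be queried to confirm which columns are TDE-encrypted; this assumes the DP.EMP table created above:

SQL> SELECT table_name, column_name, encryption_alg
     FROM   dba_encrypted_columns
     WHERE  owner = 'DP';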

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 – Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Although the ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 – Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error.

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error:

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 – Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 – Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns
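
For instance (an illustrative command of my own, not from the white paper), a metadata-only import of the dump file created earlier needs neither the wallet nor the password, since no encrypted column data is touched:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp CONTENT=METADATA_ONLY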

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD parameter is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
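
To illustrate the 11g default behavior described above (an illustrative sketch of my own, not from the white paper), encrypting the entire dump file set could look like the following; ENCRYPTION, ENCRYPTION_MODE, and ENCRYPTION_PASSWORD are standard 11g Data Pump parameters, while the dump file name is a placeholder:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp_enc.dmp TABLES=emp ENCRYPTION=ALL ENCRYPTION_MODE=PASSWORD ENCRYPTION_PASSWORD=dump_pwd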


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak – December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
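
As a quick check (my own addition, not part of the original post), the existing directory objects and their paths can be listed from the data dictionary:

SQL> SELECT directory_name, directory_path FROM dba_directories;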

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES, and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
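
As an illustration (my own example, not from the original post), the following export of the HR schema skips statistics and grants; the dump file name is arbitrary:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_no_stats.dmp SCHEMAS=hr EXCLUDE=STATISTICS EXCLUDE=GRANT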

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export.

Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (see the sketch after this list):

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
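
A minimal sketch of this workflow (my own illustration; the job name hr comes from the parallelism example above):

$ expdp username/password ATTACH=hr

Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE

$ expdp username/password ATTACH=hr

Export> START_JOB
Export> CONTINUE_CLIENT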

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
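
For example (an illustrative command of my own; source_db is a placeholder database link name):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_remote.dmp SCHEMAS=hr NETWORK_LINK=source_db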

Generating SQLFILEs

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak – December 10, 2009

Clone database using RMAN

Target db: test

Clone db: clone

In the target database:

1. Take a full backup using RMAN.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 – Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all;

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=17 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=1 stamp=674240935
input archive log thread=1 sequence=15 recid=2 stamp=674240997
input archive log thread=1 sequence=16 recid=3 stamp=674242208
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=17 recid=4 stamp=674242270
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE
Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

      DBID
----------
1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters (a concrete sketch follows below):

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
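
For example (an illustrative sketch with placeholder Windows paths; adjust to your own layout):

db_file_name_convert='E:\oracle\oradata\test','E:\oracle\oradata\clone'
log_file_name_convert='E:\oracle\oradata\test','E:\oracle\oradata\clone'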

3. Start the listener using the lsnrctl command, and then start up the clone DB in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status
SQL> ho lsnrctl stop
SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (the TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (the CLONE DB name)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running…

SQL> select name from v$database;
select name from v$database
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while. Then exit from RMAN and open the database using RESETLOGS.

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace (a sketch follows the queries below).

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

      DBID
----------
1972233550
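
A minimal sketch for step 10 above (my own illustration; the tablespace name, file path, and size are placeholders to adjust to the clone's layout):

SQL> CREATE TEMPORARY TABLESPACE temp1
     TEMPFILE 'E:\oracle\oradata\clone\temp01.dbf' SIZE 500M AUTOEXTEND ON;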


step by step standby database configuration in 10g

Filed under: Dataguard – creation of standby database in 10g by Deepak – December 9, 2009

Oracle 10g – Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection, and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use a Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server, and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases, and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first before working on the production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
– On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

– On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
– On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;
– On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;
(Note: replace <ORACLE_HOME> with your Oracle home path.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE. Create the SPFILE and restart the database:
– On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
– Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup;

– On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
– Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup;
(Note: replace <ORACLE_HOME> with your Oracle home path.)

III. On the STANDBY Database Side

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations. Create the directories adump, bdump, cdump, udump, and the archived log destination for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora. On Windows, copy it to the database folder; on UNIX, copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up (nomount) the STANDBY database and generate an spfile.
– On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
– Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;

– On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
– Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;
(Note: replace <ORACLE_HOME> with your Oracle home path.)

12. Start Redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


Features introduced in the various server releases

Oracle 7.3

Trigger compilation, debug; unlimited extents on STORAGE clause; some init.ora parameters modifiable – TIMED_STATISTICS; hash joins, antijoins; histograms; dependencies; Oracle Trace; Advanced Replication Object Groups; PL/SQL – UTL_FILE

Oracle 7.2

Resizable, autoextend data files; shrink rollback segments manually; create table / index UNRECOVERABLE; subquery in FROM clause; PL/SQL wrapper; PL/SQL cursor variables; checksums – DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM; parallel create table; job queues – DBMS_JOB; DBMS_SPACE; DBMS Application Info; sorting improvements – SORT_DIRECT_WRITES

Oracle 7.1

ANSI/ISO SQL92 Entry Level; Advanced Replication – symmetric data replication; snapshot refresh groups; parallel recovery; dynamic SQL – DBMS_SQL; Parallel Query Options – query, index creation, data loading; Server Manager introduced; read-only tablespaces

Oracle 7.0 – June 1992

Database integrity constraints (primary / foreign keys, check constraints, default values); stored procedures and functions, procedure packages; database triggers; view compilation; user-defined SQL functions; role-based security; multiple redo members – mirrored online redo log files; resource limits – profiles

Much enhanced auditing; enhanced distributed database functionality – INSERTs, UPDATEs, DELETEs, 2PC; incomplete database recovery (e.g. to an SCN); cost-based optimiser; TRUNCATE tables; datatype changes (i.e. VARCHAR2, CHAR, VARCHAR); SQL*Net v2, MTS; checkpoint process; data replication – snapshots

Oracle 6.2

Oracle Parallel Server

Oracle 6 – July 1988

Row-level locking; on-line database backups; PL/SQL in the database

Oracle 5.1

Distributed queries

Oracle 5.0 – 1986

Support for the client-server model – PCs can access the DB on a remote host

Oracle 4 – 1984

Read consistency

Oracle 3 – 1981

Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions); nonblocking queries (no more read locks); re-written in the C programming language

Oracle 2 – 1979

First public release; basic SQL functionality, queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Referesh

Filed under: Schema refresh by Deepak – December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh – before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a SQL file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege || ' to sh' from session_privs;
5. select 'grant ' || role || ' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH' group by tablespace_name;

Export the 'SH' schema:

exp username/password file='location/sh_bkp.dmp' log='location/sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles, and privileges.
4. Then start importing.

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects or truncating objects

Export the 'SH' schema

Take a full schema export as shown above.

Drop all the objects in the 'SH' schema

To drop all the objects in the schema, connect as the schema owner and spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema

To truncate all the objects in the schema, connect as the schema owner and spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the reference key constraints and disable them. Spool the output of the query and run the script:

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user:

Query the default tablespace, verify the space in the tablespace, and drop the user.

SQL> Drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the REMAP_SCHEMA option.

Check for the imported objects and compile the invalid objects.
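
For example (my own illustrative check, not from the original post), the invalid objects in the refreshed schema can be listed and then recompiled:

SQL> SELECT object_name, object_type FROM dba_objects WHERE owner = 'SH' AND status = 'INVALID';
SQL> exec dbms_utility.compile_schema('SH');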


JOB SCHEDULING

Filed under: JOB SCHEDULING by Deepak – December 15, 2009

CRON JOB SCHEDULING – IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily, it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file, you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l     # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use the name)
6. The command to run

More graphically, they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0 - 7)
|     |     |     +----------- Month (1 - 12)
|     |     +----------------- Day of month (1 - 31)
|     +----------------------- Hour (0 - 23)
+----------------------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour, on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers, you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1AM, 2AM, 3AM, and 4AM:

# Use a range of hours matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup ndash scheduling in windows environment

Create a batch file as cold_bkpbat

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy E Y EoracleoradataHRMS Ddaily_bkp_coldbackuphrms

xcopy E Y Eoracleora92database Ddaily_bkp registrydatabase

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click next and finish the scheduling.

Note

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule it, the job won't run, so edit the scheduled task and enter the new password.
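On newer Windows versions the same schedule can also be created from the command line with schtasks instead of the Scheduled Tasks wizard. A minimal sketch, assuming the batch file lives at D:\scripts\cold_bkp.bat (a hypothetical path) and should run daily at 23:00; the exact option syntax varies slightly between Windows releases:

schtasks /create /tn "oracle_cold_bkp" /tr "D:\scripts\cold_bkp.bat" /sc daily /st 23:00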


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak - December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your test systems before working on Production.

2. Verify that the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary and standby databases to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.
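As a quick reference (the full command appears again in the standby-creation post later in this document), real-time apply is started on the standby with:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;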

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby db STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak - December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature, which enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set.

To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU-intensive operations. Furthermore, additional disk I/O is incurred, due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (
       empid   NUMBER(6),
       empname VARCHAR2(100),
       salary  NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key; Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
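As a reminder (a sketch, reusing the same "wallet_pwd" password shown in the earlier ALTER SYSTEM example), the wallet can be opened on the target database with:

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";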

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP".SALARY encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.
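A sketch of the same network mode import with the parameter removed (the "remote" database link is the one used in the earlier example):

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND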

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns
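For instance, a metadata-only import needs neither the password nor an open wallet; a minimal sketch using the dump file from the earlier examples (CONTENT=METADATA_ONLY is a standard Data Pump parameter):

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp CONTENT=METADATA_ONLY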

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Data Pump 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak - December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
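As an illustration (a sketch only; the dump file and schema names are placeholders, not from the original post), an export that keeps everything in the HR schema except its indexes could be written as:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_no_idx.dmp SCHEMAS=hr EXCLUDE=INDEX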

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can now take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
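For example (a sketch; the job name hr matches the JOB_NAME used in the export example further below), you could attach to a running job and change its degree of parallelism from the interactive prompt:

> expdp username/password ATTACH=hr
Export> PARALLEL=8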

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
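A minimal sketch of that workflow, assuming an export job that was started with JOB_NAME=hr as in the parallelism example above:

> expdp username/password ATTACH=hr
Export> STATUS
Export> STOP_JOB=IMMEDIATE
> expdp username/password ATTACH=hr
Export> START_JOB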

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, such as from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
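A sketch of a network export (the source_db database link and file names here are placeholders, not from the original post):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=remote_hr.dmp NETWORK_LINK=source_db SCHEMAS=hr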

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak - December 10, 2009

Clone database using RMAN

Target db: test

Clone db: clone

In target database

1. Take a full backup using RMAN:

SQL> archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF

input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF

input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF

input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF

input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF

input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF

input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF

input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF

input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF

input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete

SQL> select name from v$database

NAME
---------
TEST

SQL> select dbid from v$database

DBID
----------
1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='target db oradata path','clone db oradata path'
log_file_name_convert='target db oradata path','clone db oradata path'
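A sketch with concrete values, assuming the target datafiles live under C:\oracle\oradata\test and the clone's under C:\oracle\oradata\clone (hypothetical paths used only for illustration):

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'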

3. Start the listener using the lsnrctl command, and then start up the clone db in nomount using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running...

SQL> select name from v$database
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs.

SQL> alter database open resetlogs

Database altered.

9. Check the DBID.

10. Create a temporary tablespace.

SQL> select name from v$database

NAME
---------
CLONE

SQL> select dbid from v$database

DBID
----------
1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak - December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use a Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server, and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first before working on the production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable archive log mode:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database.
- On Windows:
SQL> create pfile='...\database\pfilePRIMARY.ora' from spfile;
- On UNIX:
SQL> create pfile='.../dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '...'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile, and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='...\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='...\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='.../dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='.../dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;
(Note: specify your Oracle home path to replace '...'.)

III. On the STANDBY Database Side

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over

2. Create a control file for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump and udump directories and the archived log destination for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory; then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
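For reference, a tnsnames.ora entry for the STANDBY service might look like the following sketch (the host name standby-host is a placeholder; the original post configures this through Net Manager rather than by hand):

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = STANDBY)
    )
  )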

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID
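For example (a sketch; the Oracle home path below is a placeholder for your actual installation path):

On UNIX:
$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STANDBY

On Windows:
C:\> set ORACLE_SID=STANDBY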

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='...\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='...\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='.../dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='.../dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your Oracle home path to replace '...'.)

12. Start redo apply.
1) On the STANDBY database, start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2), to update/recreate the password file for the STANDBY database.


• Much enhanced auditing
• Enhanced distributed database functionality - INSERTs, UPDATEs, DELETEs, 2PC
• Incomplete database recovery (e.g. to an SCN)
• Cost based optimiser
• TRUNCATE of tables
• Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR)
• SQL*Net v2, MTS
• Checkpoint process
• Data replication - snapshots

Oracle 6.2

• Oracle Parallel Server

Oracle 6 - July 1988

• Row-level locking
• On-line database backups
• PL/SQL in the database

Oracle 5.1

• Distributed queries

Oracle 5.0 - 1986

• Support for the client-server model - PCs can access the DB on a remote host

Oracle 4 - 1984

• Read consistency

Oracle 3 - 1981

• Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)
• Nonblocking queries (no more read locks)
• Re-written in the C programming language

Oracle 2 - 1979

• First public release
• Basic SQL functionality, queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh

Filed under: Schema refresh by Deepak - December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a SQL file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total no. of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege || ' to sh' from session_privs;
5. select 'grant ' || role || ' to sh' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes/1024/1024) from dba_segments where owner='SH'

group by tablespace_name

Export the 'SH' schema:

exp username/password file='location/sh_bkp.dmp' log='location/sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.

Importing the 'SH' schema

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema

Take the full schema export as shown above.

Drop all the objects in the 'SH' schema

To drop all the objects in the schema:

Connect the schema

Spool the output

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be dropped.

Importing the 'SH' schema

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing SH Schema

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema

To truncate all the objects in the schema:

Connect the schema

Spool the output

SQLgtset head off

SQLgtspool truncate_tablessql

SQLgtselect lsquotruncate table lsquo||table_name from user_tables

SQLgtspool off

SQLgtset head off

SQLgtspool truncate_other_objectssql

SQLgtselect lsquotruncate lsquo||object_type||rsquo lsquo||object_name||rsquorsquo from user_objects

SQLgtspool off

Now run the script all the objects will be truncated

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the referencing (foreign key) constraints and disable them. Spool the output and run the script (a sketch that generates the DISABLE statements follows).

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');
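A minimal, hedged sketch that turns that lookup into runnable DISABLE statements (TABLE_NAME remains a placeholder for the table being truncated; re-enable the constraints after the refresh using the ENABLE query shown earlier):

select 'alter table '||table_name||' disable constraint '||constraint_name||';'
from all_constraints
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints where table_name='TABLE_NAME');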

Importing the 'SH' schema:

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE', estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL>Drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option, as shown below.
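A minimal, hedged sketch of that option (SH_COPY is a hypothetical target schema; everything else matches the import command above):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=sh:sh_copy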

Check the imported objects and compile any invalid objects.


JOB SCHEDULING

Filed under: JOB SCHEDULING, by Deepak - Leave a comment, December 15, 2009

CRON JOB SCHEDULING - IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d /etc/cron.daily /etc/cron.hourly /etc/cron.monthly /etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency its name suggests.

For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sunday, or use the name)
6. The command to run

More graphically, they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0 - 7)
|     |     |     +----------- Month (1 - 12)
|     |     +----------------- Day of month (1 - 31)
|     +----------------------- Hour (0 - 23)
+----------------------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email. If you wish to stop this, you should cause it to be redirected, as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour, on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1 AM, 2 AM, 3 AM and 4 AM:

# Use a range of hours, matching 1, 2, 3 and 4 AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4 AM
* 1,2,3,4 * * * /bin/some-hourly
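Tying this back to the database tasks covered earlier, a nightly schema export could be driven from cron like this; the script path, log location and the 10 PM timing are assumptions, not part of the original article:

0 22 * * * /home/oracle/scripts/sh_export.sh >/home/oracle/logs/sh_export.log 2>&1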

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a Scheduled Task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run. So edit the scheduled task and enter the new password.
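As an alternative sketch (an assumption, not from the original post), the same batch file can be registered from the command prompt with the schtasks utility; the task name, path, account and time are placeholders, and the exact time/credential syntax varies slightly by Windows version:

schtasks /create /tn "daily_cold_bkp" /tr D:\scripts\cold_bkp.bat /sc daily /st 02:00 /ru yourdomain\oracle_os_user /rp yourpassword

When the OS password changes, schtasks /change /tn "daily_cold_bkp" /rp newpassword updates the stored credential without recreating the task.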


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g, by Deepak - 1 Comment, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I Before Switchover

1. As I always recommend, test the switchover first on your testing systems before working on production.

2. Verify that the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary database and the standby database to find out:
SQL>select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received use Real-time apply

II Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL>connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL>connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL>SHUTDOWN IMMEDIATE
SQL>STARTUP MOUNT

4. After step 3 completes:

- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL>SHUTDOWN IMMEDIATE
SQL>STARTUP

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL>ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL>ALTER SYSTEM SWITCH LOGFILE;
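A quick check that is not part of the original steps but helps confirm the result: query v$database on both instances after step 5.

SQL> select name, database_role, open_mode from v$database;

STAN should now report PRIMARY, and PRIM should report PHYSICAL STANDBY.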


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump, by Deepak - Leave a comment, December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature that was released with Oracle Database 11g release 1, and that provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns, for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU-intensive operations. Furthermore, additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");
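A brief, hedged illustration of the transparency described above (the sample row values are invented): once the SALARY column is defined with ENCRYPT, ordinary DML and queries need no extra steps, because TDE encrypts on insert and decrypts on select behind the scenes.

SQL> INSERT INTO DP.EMP (empid, empname, salary) VALUES (100, 'Scott', 5000);
SQL> COMMIT;
SQL> SELECT empname, salary FROM DP.EMP;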

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in a later example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  adejkaloger_lx9oracleworkempdmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote
TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir
dumpfile=emp.dmp tables=emp encryption_password=********
table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing
table but all dependent metadata will be skipped due to
table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being
skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP".SALARY encryption properties differ for source or
target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified. This is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
  empid,
  empname,
  salary ENCRYPT IDENTIFIED BY "column_pwd")
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dpump_dir
  LOCATION ('xemp.dmp')
)
REJECT LIMIT UNLIMITED
AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g, by Deepak - Leave a comment, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file, as in the sketch below.
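For instance, a hedged sketch of an export that leaves out optimizer statistics and triggers (the schema, dump file name and the particular object choices are assumptions, not from the original text):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_noexcl.dmp SCHEMAS=hr EXCLUDE=STATISTICS EXCLUDE=TRIGGER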

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (a sample session follows the list):

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
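A hedged sketch of such an interactive session, reusing the JOB_NAME=hr job from the parallelism example above (prompts and the exact sequence will differ in your environment):

$ expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE

$ expdp username/password ATTACH=hr
Export> START_JOB
Export> CONTINUE_CLIENT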

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful when you are moving data between databases, such as from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
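As a brief, hedged illustration (the database link name source_db is an assumption; it must already exist and point at the remote instance):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=remote_hr.dmp SCHEMAS=hr NETWORK_LINK=source_db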

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN, by Deepak - Leave a comment, December 10, 2009

Clone database using Rman

Target DB: test

Clone DB: clone

In target database

1. Take a full backup using RMAN:

SQL> archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001 ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 ORACLE\ORADATA\TEST\TOOLS01.DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In clone database

1. Create the service and the password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters (a concrete sketch follows):

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
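As a hedged, concrete sketch for this TEST-to-clone setup (the drive letter and directory layout are assumptions based on the paths shown elsewhere in this post):

db_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')
log_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')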

3. Start the listener using the lsnrctl command, and then start up the clone DB in nomount using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started.

Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running...

SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs:

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace.

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g, by Deepak - Leave a comment, December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use the Flash Recovery Area and OMF.

I Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server, and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases, and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first, before working on the production database.

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups.
My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL>select * from v$standby_log;

4. Enable archiving on PRIMARY.
If your PRIMARY database is not already in archive log mode, enable archive log mode:
SQL>shutdown immediate
SQL>startup mount
SQL>alter database archivelog;
SQL>alter database open;
SQL>archive log list

5. Set PRIMARY database initialization parameters.
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database.
- On Windows:
SQL>create pfile='...\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '...'.)

- On UNIX:
SQL>create pfile='.../dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '...'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile, and restart the PRIMARY database using the new spfile.
Data Guard must use an SPFILE. Create the SPFILE and restart the database:
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='...\database\pfilePRIMARY.ora'
SQL>create spfile from pfile='...\database\pfilePRIMARY.ora';
- Restart the PRIMARY database using the newly created SPFILE:
SQL>shutdown immediate
SQL>startup
(Note: specify your Oracle home path to replace '...'.)

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='.../dbs/pfilePRIMARY.ora'
SQL>create spfile from pfile='.../dbs/pfilePRIMARY.ora';
- Restart the PRIMARY database using the newly created SPFILE:
SQL>shutdown immediate
SQL>startup
(Note: specify your Oracle home path to replace '...'.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL>shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL>startup mount
SQL>alter database create standby controlfile as 'STANDBY.ctl';
SQL>ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4. On the STANDBY server, create all required directories for dump and archived log destinations.
Create the directories adump, bdump, cdump, udump, and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora.
On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$lsnrctl stop
$lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$lsnrctl stop
$lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY (a sample tnsnames.ora sketch follows). Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY
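As a hedged illustration, the resulting tnsnames.ora entries on each server might look like the following; the host names and port are assumptions and must match your listener configuration:

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )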

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL>startup nomount pfile='...\database\pfileSTANDBY.ora'
SQL>create spfile from pfile='...\database\pfileSTANDBY.ora';
- Restart the STANDBY database using the newly created SPFILE:
SQL>shutdown immediate
SQL>startup mount

- On UNIX:
SQL>startup nomount pfile='.../dbs/pfileSTANDBY.ora'
SQL>create spfile from pfile='.../dbs/pfileSTANDBY.ora';
- Restart the STANDBY database using the newly created SPFILE:
SQL>shutdown immediate
SQL>startup mount
(Note: specify your Oracle home path to replace '...'.)

12. Start Redo apply.
1) On the STANDBY database, to start redo apply:
SQL>alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13 Verify the STANDBY database is performing properly1) On STANDBY perform a querySQLgtselect sequence first_time next_time from v$archived_log

2) On PRIMARY force a logfile switchSQLgtalter system switch logfile

3) On STANDBY verify the archived redo log files were appliedSQLgtselect sequence applied from v$archived_log order by sequence

14 If you want the redo data to be applied as it is received without waiting for the current STANDBY redo log file to be archived enable the real-time apply

To start real-time applySQLgt alter database recover managed STANDBY database using current logfile disconnect

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.
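For example (a hedged sketch; the path comes from the background_dump_dest set earlier, and alert_<SID>.log is the standard alert log name):
- On Windows: more E:\oracle\product\10.2.0\admin\STANDBY\bdump\alert_STANDBY.log
- On UNIX: tail -100 <background_dump_dest>/alert_STANDBY.log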

2. Clean up the archived logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.
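A minimal RMAN sketch of such a backup (standard RMAN syntax; the scheduling and retention settings are left out):

RMAN> backup database plus archivelog delete input;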

For the STANDBY database, I run RMAN to back up and delete the archived logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management.
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2 step 2 to update/recreate the password file for the STANDBY database.
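A minimal sketch of recreating the STANDBY password file (the same orapwd style used elsewhere in this post; replace xxxxxxxx with the SYS password):

$ orapwd file=pwdSTANDBY.ora password=xxxxxxxx force=y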

Schema Refresh

Filed under: Schema refresh, by Deepak, 1 Comment, December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps for schema refresh - before exporting

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a SQL file.

1. SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant ' || privilege || ' to sh;' from session_privs;
5. select 'grant ' || role || ' to sh;' from session_roles;
6. Query the default tablespace and size:
7. select tablespace_name, sum(bytes)/1024/1024 from dba_segments where owner='SH'

group by tablespace_name;

Export the 'SH' schema:

exp username/password file='location/sh_bkp.dmp' log='location/sh_exp.log' owner='SH' direct=y

Steps to drop and recreate the schema

Drop the SH schema.

1. Create the SH schema with the default tablespace and allocate quota on that tablespace (see the sketch below).
2. Now run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.
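A minimal sketch of step 1 (the password and tablespace name here are assumptions; use the default tablespace and size found before the export):

SQL> create user SH identified by <password> default tablespace users quota unlimited on users;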

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping objects and truncating objects

Export the 'SH' schema.

Take the schema full export as shown above.

Drop all the objects in the 'SH' schema.

To drop all the objects in the schema:

Connect as the schema owner.

Spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the spooled scripts and all the objects will be dropped.

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema.

To truncate all the objects in the schema:

Connect as the schema owner.

Spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the reference (foreign key) constraints and disable them. Spool the output of the query below and run the script; a sketch of generating the disable statements follows the query.

Select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');
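A minimal sketch of generating the disable statements (hedged; 'TABLE_NAME' is the same placeholder used in the query above):

select 'alter table '||owner||'.'||table_name||' disable constraint '||constraint_name||';'
from all_constraints
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints where table_name='TABLE_NAME');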

Importing the 'SH' schema:

imp username/password file='location/sh_bkp.dmp' log='location/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user:

Query the default tablespace, verify the space in the tablespace, and drop the user.

SQL> Drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option.
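A minimal sketch (the target schema name SH_NEW is an assumption for illustration):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=sh:sh_new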

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING, by Deepak, Leave a comment, December 15, 2009

CRON JOB SCHEDULING - IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file, you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this:

Command to be executed
- - - - -
| | | | |
| | | | +----- Day of week (0 - 7)
| | | +------- Month (1 - 12)
| | +--------- Day of month (1 - 31)
| +----------- Hour (0 - 23)
+------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly, you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp_registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a Scheduled Task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule it, the job won't run, so edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g, by Deepak, 1 Comment, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your testing systems before working on Production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following command on the Primary database and the Standby database to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.
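As an additional pre-switchover check (a hedged aside using the standard V$DATABASE view; it is not part of the original step list), you can confirm the primary is ready to change roles:

SQL> select switchover_status from v$database;

On the primary this normally shows TO STANDBY or SESSIONS ACTIVE before a switchover.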

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect sys/password@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect sys/password@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump, by Deepak, Leave a comment, December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature thus remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows:

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set.

To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows:

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU-intensive operations. Furthermore, additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"  6.25 KB  3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
/ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an 'ORA-28365: wallet is not open' error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP".SALARY encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.
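A sketch of the same network-mode import with the parameter removed (identical to the earlier command, minus ENCRYPTION_PASSWORD):

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND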

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed (a sketch of the first case follows this list):

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns
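A minimal sketch of a metadata-only import (CONTENT=METADATA_ONLY is a standard Data Pump parameter; the file and directory names follow the earlier examples):

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp CONTENT=METADATA_ONLY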

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified. This is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
  empid,
  empname,
  salary ENCRYPT IDENTIFIED BY "column_pwd")
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dpump_dir
  LOCATION ('xemp.dmp')
)
REJECT LIMIT UNLIMITED
AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g, by Deepak, Leave a comment, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import.

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
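For example, a hedged sketch of an EXCLUDE-based export (the schema name hr and the choice to exclude statistics and grants are illustrative assumptions):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr EXCLUDE=STATISTICS EXCLUDE=GRANT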

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
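A minimal sketch of doing this interactively (these are standard interactive-mode commands; press Ctrl-C in the running client to reach the Export> prompt):

Export> STATUS
Export> PARALLEL=8
Export> CONTINUE_CLIENT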

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
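A minimal sketch of attaching to a running job (hedged; it assumes the job was started with JOB_NAME=hr as in the parallelism example above):

> expdp username/password ATTACH=hr
Export> STATUS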

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, Network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
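A hedged sketch of a network-mode export (the database link name remote_db is an assumption for illustration):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=remote_full.dmp NETWORK_LINK=remote_db FULL=y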

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2, and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN, by Deepak, Leave a comment, December 10, 2009

Clone database using RMAN

Target db: test

Clone db: clone

In the target database:

1. Take a full backup using RMAN.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------

1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for a database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'
log_file_name_convert='<target db oradata path>','<clone db oradata path>'
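For example, a hedged sketch for this setup (the actual paths are assumptions based on the datafile locations shown in the backup above):

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'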

3. Start up the listener using the lsnrctl command, and then start up the clone DB in nomount using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'; (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running...

SQL> select name from v$database;
select name from v$database
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs.

SQL> alter database open resetlogs;

Database altered

9. Check for the DBID.

10. Create a temporary tablespace.
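A minimal sketch (hedged; the tempfile path and size are assumptions for this clone environment):

SQL> create temporary tablespace temp1 tempfile 'C:\oracle\oradata\clone\TEMP01.DBF' size 500M;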

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------

1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g, by Deepak, Leave a comment, December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first before working on the Production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY Redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY Redo log groups.
My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable Archiving on PRIMARY.
If your PRIMARY database is not already in Archive Log mode, enable the archive log mode:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY Database Initialization Parameters.
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create pfile from spfile for the PRIMARY database.
- On Windows:
SQL> create pfile='<Oracle_home>\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path for <Oracle_home>.)

- On UNIX:
SQL> create pfile='<Oracle_home>/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path for <Oracle_home>.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6 Create spfile from pfile and restart PRIMARY database using the new spfileData Guard must use SPFILE Create the SPFILE and restart database- On windowsSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodatabasepfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodbspfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodbspfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)
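As a quick sanity check (a minimal sketch, not part of the original steps, assuming the destination numbering used above), you can confirm on the PRIMARY that both archive destinations are valid once it is restarted:

SQL> select dest_id, status, error from v$archive_dest where dest_id in (1,2);
SQL> select dest_id, status, type from v$archive_dest_status where dest_id = 2;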

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server
On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> alter database open;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations
Create the adump, bdump, cdump and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY control file destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora
On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$lsnrctl stop
$lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$lsnrctl stop
$lsnrctl start

9. Create Oracle Net service names
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$tnsping PRIMARY
$tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up nomount the STANDBY database and generate an spfile
- On Windows:
SQL> startup nomount pfile='...\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='...\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='.../dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='.../dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your Oracle home path to replace '...')

12. Start Redo Apply
1) On the STANDBY database, start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archived logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on the PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archived logs once per week:
$rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;
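Before purging anything on the STANDBY it can be worth confirming which logs have already been applied. A minimal sketch (not part of the original post; the seven-day retention window is only an assumption):

SQL> select max(sequence#) from v$archived_log where applied = 'YES';
RMAN> delete noprompt archivelog all completed before 'sysdate-7';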

3. Password management
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2 to update/recreate the password file for the STANDBY database.


fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping and truncating objects

Export the 'SH' schema

Take the full schema export as shown above.

Drop all the objects in the 'SH' schema

To drop all the objects in the schema:

Connect as the schema owner.

Spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be dropped.

Importing the 'SH' schema

imp 'username/password' file='location\sh_bkp.dmp' log='location\sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';
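A minimal sketch of how that generated DDL might be captured and executed (the script name is just an assumption, not from the original post):

SQL> set head off
SQL> spool enable_constraints.sql
SQL> select 'ALTER TABLE '||table_name||' ENABLE CONSTRAINT '||constraint_name||';'
     from user_constraints where status='DISABLED';
SQL> spool off
SQL> @enable_constraints.sql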

Truncate all the objects in the 'SH' schema

To truncate all the objects in the schema:

Connect as the schema owner.

Spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off

SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the scripts and all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the referential (foreign key) constraints and disable them. Spool the output of the query and run the generated script.

select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
where constraint_type='R'
and r_constraint_name in (select constraint_name from all_constraints
where table_name='TABLE_NAME');

Importing the 'SH' schema

imp 'username/password' file='location\sh_bkp.dmp' log='location\sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user

Query the default tablespace, verify the space in the tablespace, and drop the user:

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option, as in the sketch below.
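For example (a sketch only; SH_TEST is a hypothetical target schema, not one from the original post):

impdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=SH:SH_TEST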

Check for the imported objects and compile the invalid objects.


JOB SCHEDULING

Filed under: JOB SCHEDULING by Deepak - December 15, 2009

CRON JOB SCHEDULING IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file, you run the following command:

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this:

*   *   *   *   *   Command to be executed
-   -   -   -   -
|   |   |   |   |
|   |   |   |   +----- Day of week (0 - 7)
|   |   |   +--------- Month (1 - 12)
|   |   +------------- Day of month (1 - 31)
|   +----------------- Hour (0 - 23)
+--------------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor, it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null - meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line, you'll run a command at 1 AM, 2 AM, 3 AM and 4 AM:

# Use a range of hours matching 1, 2, 3 and 4 AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours matching 1, 2, 3 and 4 AM
* 1,2,3,4 * * * /bin/some-hourly
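Tying this back to the Data Guard maintenance jobs mentioned earlier, a weekly RMAN archivelog backup could be scheduled from cron along these lines. This is a sketch only; the script and log paths are hypothetical, not from the original post:

# Hypothetical example: weekly archivelog backup of the standby, Sundays at 3 AM
0 3 * * 0 /home/oracle/scripts/rman_arch_bkp.sh >/home/oracle/logs/rman_arch_bkp.log 2>&1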

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

@echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp\coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a Scheduled Task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run. So edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak - December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your testing systems before working on production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary and standby databases to find out:
SQL> select sequence#, applied from v$archived_log;
Perform SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby db STAN to the primary role.
Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
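To confirm that the role change actually took effect, a quick check on both instances can help (a minimal sketch, not part of the original steps):

SQL> select name, database_role, switchover_status from v$database;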


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak - December 14, 2009

Encryption with Oracle Data Pump

- from an Oracle white paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance requirements, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature that was released with Oracle Database 11g release 1, and that provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows:

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows:

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU-intensive operations. Furthermore, additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2, the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (
       empid   NUMBER(6),
       empname VARCHAR2(100),
       salary  NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");
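To double-check which columns ended up encrypted (a small sketch, not part of the white paper's example flow), the data dictionary can be queried:

SQL> SELECT table_name, column_name, encryption_alg
     FROM dba_encrypted_columns
     WHERE owner = 'DP';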

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although the ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
  TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
  ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
  TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows.

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
  TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir dumpfile=dp.dmp
  TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote
  TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
  ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

- A full metadata-only import
- A schema-mode import in which the referenced schemas do not include tables with encrypted columns
- A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
  TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
  ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak - December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, a SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;
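To confirm that the directory object exists and points where you expect, a small sketch (not part of the original walkthrough):

SQL> SELECT directory_name, directory_path FROM dba_directories;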

Note that READ or WRITE permission on a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export.

Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

- Make sure your system is well balanced across CPU, memory, and I/O.
- Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.
- Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.
- For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

- REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

- REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode; a short sketch of the commands follows this list.

- See the status of the job. All of the information needed to monitor the job's execution is available.
- Add more dump files if there is insufficient disk space for an export file.
- Change the default size of the dump files.
- Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).
- Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.
- Increase or decrease the number of active worker processes for the job (Enterprise Edition only).
- Attach to a job from a remote site (such as from home) to monitor status.
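A minimal sketch of what that interaction might look like, assuming the job name hr used in the parallelism example above:

$ expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE
$ expdp username/password ATTACH=hr
Export> START_JOB
Export> CONTINUE_CLIENT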

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.

Generating SQLFILEs

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak - December 10, 2009

Clone database using RMAN

Target db: test
Clone db: clone

In the target database:

1. Take a full backup using RMAN.

SQL> archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            c:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target /

connected to target database: TEST (DBID=1972233550)

RMAN> show all;

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=17 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=1 stamp=674240935
input archive log thread=1 sequence=15 recid=2 stamp=674240997
input archive log thread=1 sequence=16 recid=3 stamp=674242208
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 name=ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 name=ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 name=ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 name=ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 name=ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 name=ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 name=ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 name=ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 name=ORACLE\ORADATA\TEST\TOOLS01.DBF
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=17 recid=4 stamp=674242270
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE
Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

      DBID
----------
1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters (see the sketch below for an example):

db_file_name_convert='target db oradata path','clone db oradata path'
log_file_name_convert='target db oradata path','clone db oradata path'
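For instance, the two parameters might look like this (hypothetical Windows paths, not from the original post; adjust to your own layout):

db_file_name_convert='E:\oracle\oradata\test','E:\oracle\oradata\clone'
log_file_name_convert='E:\oracle\oradata\test','E:\oracle\oradata\clone'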

3. Start the listener using the lsnrctl command and then start up the clone db in nomount using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect RMAN.

5. RMAN> connect target sys/sys@test   (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone';   (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running...

SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs.

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace, as in the sketch below.
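A minimal sketch of what that could look like (the tablespace name, file path and size are assumptions, not from the original post):

SQL> create temporary tablespace temp1
     tempfile 'E:\oracle\oradata\clone\temp01.dbf' size 500M autoextend on;
SQL> alter database default temporary tablespace temp1;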

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

      DBID
----------
1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak - December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='...\database\pfilePRIMARY.ora' from spfile;
- On UNIX:
SQL> create pfile='.../dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace '...')

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO

Specify the location of the STANDBY DB datafiles followed by the PRIMARY location:
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location:
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE. Create the SPFILE and restart the database:
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='...\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='...\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='.../dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='.../dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;
(Note: specify your Oracle home path to replace '...')

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):

1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> alter database open;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'

Specify the location of the PRIMARY DB datafiles followed by the STANDBY location:
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'

Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location:
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations. Create the directories adump, bdump, cdump, udump and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
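For reference, the resulting tnsnames.ora entries on each server would typically look something like this (host names and port are assumptions, not from the original write-up):

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )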

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.
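On UNIX, for example, that can be as simple as (Oracle home path assumed):

$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STANDBY

and on Windows:

C:\> set ORACLE_SID=STANDBY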

11. Start up (nomount) the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='...\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='...\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='.../dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='.../dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your Oracle home path to replace '...')

12. Start redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14 If you want the redo data to be applied as it is received without waiting for the current STANDBY redo log file to be archived enable the real-time apply

To start real-time applySQLgt alter database recover managed STANDBY database using current logfile disconnect

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target sys@STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly, otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


imp 'username/password' file='<location>/sh_bkp.dmp' log='<location>/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

To enable the constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';

Truncate all the objects in the 'SH' schema.

To truncate all the objects in the schema:

Connect as the schema owner.

Spool the output:

SQL> set head off

SQL> spool truncate_tables.sql

SQL> select 'truncate table '||table_name||';' from user_tables;

SQL> spool off

SQL> set head off

SQL> spool truncate_other_objects.sql

SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;

SQL> spool off

Now run the scripts; all the objects will be truncated.

Disabling the reference constraints:

If there is any constraint violation while truncating, use the query below to find the referencing (foreign key) constraints and disable them. Spool the output of the query and run the resulting script.

SELECT constraint_name, constraint_type, table_name FROM all_constraints
WHERE constraint_type='R'
AND r_constraint_name IN (SELECT constraint_name FROM all_constraints
WHERE table_name='TABLE_NAME');
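A sketch of how that output can be turned into a runnable script, following the same spool pattern used above (the script name is assumed):

SQL> set head off
SQL> spool disable_ref_constraints.sql
SQL> select 'alter table '||table_name||' disable constraint '||constraint_name||';'
     from all_constraints
     where constraint_type='R'
     and r_constraint_name in (select constraint_name from all_constraints
                               where table_name='TABLE_NAME');
SQL> spool off
SQL> @disable_ref_constraints.sql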

Importing the 'SH' schema:

imp 'username/password' file='<location>/sh_bkp.dmp' log='<location>/sh_imp.log'

fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');

exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir schemas=SH

Dropping the 'SH' user:

Query the default tablespace, verify the space in the tablespace, and drop the user.
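For example, a minimal check (not part of the original post) before the drop might be:

SQL> select username, default_tablespace from dba_users where username='SH';
SQL> select tablespace_name, round(sum(bytes)/1024/1024) free_mb from dba_free_space group by tablespace_name;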

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir schemas=SH

If you are importing into a different schema, use the remap_schema option.
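For instance, to load the same dump into a hypothetical SH_TEST schema instead of SH:

impdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=SH:SH_TEST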

Check for the imported objects and compile the invalid objects

Comment

JOB SCHEDULING

Filed under JOB SCHEDULING by Deepak mdash Leave a comment December 15 2009

CRON JOB SCHEDULING - IN UNIX

To run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d /etc/cron.daily /etc/cron.hourly /etc/cron.monthly /etc/cron.weekly

Except for the first one which is special these directories allow scheduling of system-wide jobs in a coarse manner Any script which is executable and placed inside them will run at the frequency which its name suggests

For example if you place a script inside etccrondaily it will be executed once per day every day

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner which people use cron is via the crontab command This allows you to view or edit your crontab file which is a per-user file containing entries describing commands to execute and the time(s) to execute them

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand Each line is a collection of six fields separated by spaces

The fields are

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0-7)
|     |     |     +----------- Month (1-12)
|     |     +----------------- Day of month (1-31)
|     +----------------------- Hour (0-23)
+----------------------------- Min (0-59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary) When you save the file and quit your editor it will be installed into the system unless it is found to contain errors

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0 * * * * /bin/ls

When yoursquove saved the file and quit your editor you will see a message such as

crontab installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this then you should cause it to be redirected, as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null - meaning you won't see it.

Now wersquoll finish with some more examples

Run the `something` command every hour, on the hour:
0 * * * * /sbin/something

Run the `nightly` command at ten minutes past midnight, every day:
10 0 * * * /bin/nightly

Run the `monday` command every Monday at 2 AM:
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched; if you use the following line you'll run a command at 1AM, 2AM, 3AM and 4AM:

Use a range of hours, matching 1, 2, 3 and 4AM:
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

Use a set of hours, matching 1, 2, 3 and 4AM:
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

@echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registrydatabase
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule it, the job won't run. So edit the scheduled task and enter the new password.
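On newer Windows versions the same job can also be registered from the command line with schtasks instead of the Scheduled Tasks GUI; a rough equivalent (task name, script path and start time are assumptions) is:

schtasks /create /tn "cold_bkp" /tr "D:\scripts\cold_bkp.bat" /sc daily /st 02:00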

Comment

Steps to switchover standby to primary

Filed under Switchover primary to standby in 10g by Deepak mdash 1 Comment December 15 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I Before Switchover

1 As I always recommend test the Switchover first on your testing systems before working on Production

2 Verify the primary database instance is open and the standby database instance is mounted

3 Verify there are no active users connected to the databases

4. Make sure the last redo data transmitted from the PRIMARY database was applied on the STANDBY database. Issue the following command on both the PRIMARY and the STANDBY databases to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.
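A quick additional sanity check (not in the original write-up) is to query the switchover status on both sides:

SQL> select switchover_status from v$database;

On the primary this typically returns TO STANDBY (or SESSIONS ACTIVE); on the standby it typically returns NOT ALLOWED or TO PRIMARY until the primary has committed the switchover.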

In order to apply redo data to the standby database as soon as it is received use Real-time apply
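On the standby, real-time apply is started with the managed recovery command shown later in the standby setup article:

SQL> alter database recover managed standby database using current logfile disconnect;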

II Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect sys@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect sys@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;

Comment

Encryption with Oracle Data Pump

Filed under Encryption with Oracle Datapump by Deepak mdash Leave a comment December 14 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in todayrsquos business world present manifold challenges As incidences of data theft increase protecting data privacy continues to be of paramount importance Now a de facto solution in meeting regulatory compliances data encryption is one of a number of security tools in use The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works Please note that this paper does not apply to the Original ExportImport utilities For information regarding the Oracle Data Pump Encrypted Dump File feature that that was released with Oracle Database 11g release 1 and that provides the ability to encrypt all exported data as it is written to the export dump file set refer to the Oracle Data Pump Encrypted Dump File Support whitepaper

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT key word Once a table column is marked with this keyword encryption and decryption are performed automatically without the need for any further user or application intervention The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column When an authorized user inserts new data into such a column TDE column encryption encrypts this data prior to storing it in the database Conversely when the user selects the column from the database TDE column encryption transparently decrypts this data back to its original clear text

format Column data encrypted using TDE remains protected while it resides in the database However the protection offered by TDE does not extend beyond the database and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it using a dump file encryption key derived from a userprovided password before it is written to the export dump file set Column data encrypted using Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set Whenever Oracle Data Pump unloads or loads tables containing encrypted columns it uses the external tables mechanism instead of the direct path mechanism The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table In the following example the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump files encryption key Oracle Data Pump re-encrypts the column data in the dump file using this dump file key When re-encrypting encrypted column data Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128)Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTERSYSTEM and CREATE TABLE statements

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports as used in the previous

examples If a schema tablespace or full mode export is performed then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter

There is however one exception transportable tablespace export mode does not support

encrypted columns An attempt to perform an export using this mode when the tablespace

contains tables with encrypted columns yields the following error

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"  6.25 KB  3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
/ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative If the same transportable

tablespace export is executed using Oracle Database 11g release 1 that version does a better job

at pinpointing the problem via the information in the ORA-39929 error

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables then an ORA-26033 exception is raised when you try to import the export dump file set In the example of the DPEMP table the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed For example assume in the following example that the DPEMP table on the target system has been created exactly as it is on the source system except that the

ENCRYPT attribute has not been assigned to the SALARY column The output and resulting error messages would look as follows

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir dumpfile=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it

into the connected database instance There are no export dump files involved in a network

mode import and therefore there is no re-encrypting of TDE column data Thus the use of the

ENCRYPTION_PASSWORD parameter is prohibited in network mode imports as shown in the

following example

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote
TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes

encrypted column data the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed The following are cases in which the encryption password and Oracle Wallet are not needed

A full metadata-only import A schema-mode import in which the referenced schemas do not include tables with

encrypted columns A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp) As is always the case when dealing with TDE columns the Oracle Wallet must first be open before creating the external table The following example creates an external table called DPXEMP and populates it using the data in the DPEMP table Notice that datatypes for the columns are not specified This is because they are determined by the column datatypes in the source table in the SELECT subquery

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1 The SQL engine selects the data for the table DPEMP from the database If any columns in the table are marked as encrypted as the salary column is for DPEMP then TDE decrypts the column data as part of the select operation

2 The SQL engine then inserts the data which is in clear text format into the DPXEMP table If any columns in the external table are marked as encrypted as one of its columns is then TDE encrypts this column data as part of the insert operation

3 Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file The data in an external table can be written only once when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed However the data in the external table can be selected any number of times using a simple SQL SELECT statement The steps involved in selecting data with encrypted columns from an external table are as follows

1 The SQL engine initiates a select operation Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is called to read the data from the external export file

2 The data is passed back to the SQL engine If any columns in the external table are marked as encrypted as one of its columns is then TDE decrypts the data as part of the select operation The use of the encryption password in the IDENTIFIED BY clause is optional unless you plan to move the dump file to another database In that case the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file Encryption Parameter Change in 11g Release 1

As previously discussed in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump and the only encryption-related parameter available was ENCRYPTION_PASSW ORD So by default if the ENCRYPTION_PASSWORD is present on the command line then it applies only to TDE encrypted columns (if there are no such columns being exported then the parameter is ignored)

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1 the ability to encrypt the entire export dump file set is introduced and with it several new encrypted-related parameters A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump but instead means that the entire export dump file set will be encrypted To encrypt only TDE columns using Oracle Data Pump 11g it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY So the 10g example previously shown becomes the following in 11g

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
ENCRYPTION=ENCRYPTED_COLUMNS_ONLY

Comment

DATAPUMP

Filed under DATAPUMP Oracle 10g by Deepak mdash Leave a comment December 14 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides regular operating system command-line mode there is now a very powerful interactive Command-line mode which allows the user to monitor and control Data Pump Export and Import operations Changing from Original ExportImport to Oracle Data Pump Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

> expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to

Data Pump

Data Pump commands have a similar look and feel to the original Export and Import

commands but are different Below are a few examples that demonstrate some of these

differences

1) Example import of tables from scottrsquos account to jimrsquos account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSERTOUSER syntax is replaced by the REMAP_SCHEMA option

2) Example export of an entire database to a dump file with all GRANTS

INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job You cannot mix the two parameters in one job

Both parameters work with Data Pump Import as well and you can use different INCLUDE and

EXCLUDE options for different operations on the same dump file
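For example, a hypothetical schema export that leaves out grants and statistics (following the same repeated-parameter style as the INCLUDE example above) might look like:

> expdp username/password SCHEMAS=scott EXCLUDE=GRANT EXCLUDE=STATISTICS DIRECTORY=dpump_dir1 DUMPFILE=scott_nogrants.dmp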

3) Tuning Parameters

Unlike original Export and Import which used the BUFFER COMMIT COMPRESS

CONSISTENT DIRECT and RECORDLENGTH parameters Data Pump needs no tuning to achieve maximum performance Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner Initialization parameters should be sufficient upon installation

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import With original Export you had to run an older version of Export to produce a dump file that was compatible with an older database versionWith Data Pump you use the current Export version and simply use the VERSION parameter to specify the target database version You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g)

Example

> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job which is much more efficient that the client-side execution of original Export and Import Now Data Pump operations can take advantage of the serverrsquos parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database)

The number of parallel processes can be changed on the fly using Data Pumprsquos interactive command-line mode You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa)

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

bull REMAP_TABLESPACE ndash This allows you to easily import a table into a different

tablespace from which it was originally exported The databases have to be 101 or later

Example

gt impdp usernamepassword REMAP_TABLESPACE=tbs_1tbs_6

DIRECTORY=dpumpdir1 DUMPFILE=employeesdmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode Because Data Pump jobs run entirely on the server you can start an export or import job detach from it and later reconnect to the job to monitor its progress Here are some of the things you can do while in this mode

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
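A minimal sketch of a network-mode export, assuming a database link named remote_db already exists and points at the source instance:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=remote_hr.dmp SCHEMAS=hr NETWORK_LINK=remote_db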

Generating SQLFILES

In original Import the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script With Data Pump itrsquos a lot easier to get a workable DDL script When you run Data Pump Import and specify the SQLFILE parameter a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types not just tables and indexes Although this output file is ready for execution the DDL statements are not actually executed so the target system will not be changed

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output For example if you want to create a database that contains all the tables and indexes of the source database but that does not include the same constraints grantsand other metadata you would issue a command as follows

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.

Comment

Clone Database using RMAN

Filed under Clone database using RMAN by Deepak mdash Leave a comment

December 10 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1Take full backup using Rman

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination coracleora92RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQLgt ho rman

Recovery Manager Release 92010 ndash Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'
log_file_name_convert='<target db oradata path>','<clone db oradata path>'
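As a minimal sketch, the initclone.ora for this step might look like the following (the file locations shown here are placeholders, not the paths of the original setup; adjust them to your own layout):

db_name=clone
control_files='C:\oracle\oradata\clone\control01.ctl'
db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
# remaining parameters (memory, dump destinations, etc.) copied from the target pfile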

3. Start the listener using the lsnrctl command, then start the clone DB in NOMOUNT mode using the pfile:

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status
SQL> ho lsnrctl stop
SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test      (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'      (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running…

SQL> select name from v$database;

select name from v$database

ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;

alter database mount

ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while. Exit from RMAN and open the database using RESETLOGS:

SQL> alter database open resetlogs;

Database altered

9. Check the DBID.

10. Create a temporary tablespace.

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


Step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g, by Deepak, December 9, 2009

Oracle 10g - Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first before working on the production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:

SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.

1) To check if a password file already exists, run the following command:

SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:

- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.

1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:

SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:

SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, so I created 3 STANDBY redo log groups using the following commands:

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:

SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:

- On Windows:
SQL> create pfile='...\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path in place of '...')

- On UNIX:
SQL> create pfile='.../dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path in place of '...')

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE. Create the SPFILE and restart the database:

- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='...\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='...\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup
(Note: specify your Oracle home path in place of '...')

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='.../dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='.../dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup
(Note: specify your Oracle home path in place of '...')

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.

On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):

1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:

SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY server to the STANDBY control file destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory, then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.

1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
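As a sketch, the resulting tnsnames.ora entries on both servers would look roughly like the following (the host names and port are placeholders, not values from the original setup):

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )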

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up the STANDBY database in NOMOUNT mode and generate an spfile:

- On Windows:
SQL> startup nomount pfile='...\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='...\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='.../dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='.../dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your Oracle home path in place of '...')

12. Start Redo Apply.

1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.

1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;
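A minimal sketch of how the weekly and monthly maintenance can be combined in one RMAN session (the 30-day window below is purely illustrative and not part of the original setup):

$ rman target STANDBY
RMAN> backup archivelog all delete input;
RMAN> delete noprompt backup of archivelog all completed before 'sysdate-30';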

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly, otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


Now run the script; all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the below query to find the reference (foreign key) constraints and disable them. Spool the output of the below query and run the script.

SELECT constraint_name, constraint_type, table_name FROM all_constraints
WHERE constraint_type = 'R'
AND r_constraint_name IN (SELECT constraint_name FROM all_constraints
                          WHERE table_name = 'TABLE_NAME');
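To generate the disable statements directly, a small SQL*Plus sketch along these lines can be spooled and executed (the spool file name and the 'TABLE_NAME' placeholder are illustrative):

set pagesize 0 feedback off
spool disable_ref_cons.sql
SELECT 'ALTER TABLE ' || table_name || ' DISABLE CONSTRAINT ' || constraint_name || ';'
FROM all_constraints
WHERE constraint_type = 'R'
AND r_constraint_name IN (SELECT constraint_name FROM all_constraints
                          WHERE table_name = 'TABLE_NAME');
spool off
@disable_ref_cons.sql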

Importing the 'SH' schema

imp 'username/password' file='location\sh_bkp.dmp' log='location\sh_imp.log' fromuser='SH' touser='SH'

SQL> SELECT object_type, count(*) from dba_objects where owner='SHTEST' group by object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);
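To confirm the schema is clean after compiling, a quick check of invalid objects can be run (a standard dictionary query, not part of the original steps):

SQL> select count(*) from dba_objects where owner='SH' and status='INVALID';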

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the 'SH' user:

Query the default tablespace, verify the space in the tablespace, and drop the user.

SQL> Drop user SH cascade;

Importing the SH schema through Data Pump:

impdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option, as shown below.
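For example, a sketch of importing the same dump into a hypothetical SH_TEST schema would be:

impdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=sh:sh_test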

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING, by Deepak, December 15, 2009

CRON JOB SCHEDULING - IN UNIX

To run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are set up when the package is installed, via the creation of some special directories:

/etc/cron.d, /etc/cron.daily, /etc/cron.hourly, /etc/cron.monthly, /etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner which people use cron is via the crontab command This allows you to view or edit your crontab file which is a per-user file containing entries describing commands to execute and the time(s) to execute them

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand: each line is a collection of six fields separated by spaces.

The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this:

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0 - 7)
|     |     |     +----------- Month (1 - 12)
|     |     +----------------- Day of month (1 - 31)
|     +----------------------- Hour (0 - 23)
+----------------------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary) When you save the file and quit your editor it will be installed into the system unless it is found to contain errors

If you wish to change the editor used to edit the file set the EDITOR environmental variable like this

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this then you should cause it to be redirected, as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null - meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour, on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched; if you use the following line you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly
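Tying this back to database work, a typical DBA crontab entry might look like the following sketch (the script path is hypothetical):

# Run a nightly database backup script at 11:30 PM, Monday to Saturday
30 23 * * 1-6 /home/oracle/scripts/nightly_backup.sh >/tmp/nightly_backup.log 2>&1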

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a Scheduled Task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note:

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule them, the job won't run. So edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g, by Deepak, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your testing systems before working on production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary database and the standby database to find out:

SQL> select sequence#, applied from v$archived_log;

Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received use Real-time apply

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:

SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby DB STAN to the primary role. Open another prompt and connect to SQL*Plus:

SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:

- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:

SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:

SQL> ALTER SYSTEM SWITCH LOGFILE;
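To confirm that the roles really have changed, the following query can be run on both instances (a standard check, not part of the original write-up):

SQL> select name, database_role, switchover_status from v$database;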


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump, by Deepak, December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature that was released with Oracle Database 11g release 1, and that provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT key word Once a table column is marked with this keyword encryption and decryption are performed automatically without the need for any further user or application intervention The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column When an authorized user inserts new data into such a column TDE column encryption encrypts this data prior to storing it in the database Conversely when the user selects the column from the database TDE column encryption transparently decrypts this data back to its original clear text

format. Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3 Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set To load an export dump file set containing encrypted column data into a target database the same encryption password used at export time must be provided to Oracle Data Pump import After verifying that the correct password has been given the corresponding dump file decryption key is derived from this password

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified. This is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g, by Deepak, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA
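To see which directory objects (including the default one) already exist, you can query the data dictionary, for example:

SQL> SELECT directory_name, directory_path FROM dba_directories;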

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created you need to grant READ and WRITE permission on the directory to other users For example to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1 you must execute the following command

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

> expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=()

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES, and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
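For instance, a sketch of an export that deliberately leaves out indexes and statistics (the schema and file names here are illustrative) would be:

> expdp username/password SCHEMAS=scott DIRECTORY=dpump_dir1 DUMPFILE=scott_noidx.dmp EXCLUDE=INDEX EXCLUDE=STATISTICS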

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks, so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode Because Data Pump jobs run entirely on the server you can start an export or import job detach from it and later reconnect to the job to monitor its progress Here are some of the things you can do while in this mode

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
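As a sketch, a network-mode import of a single table over a pre-created database link (the link name source_db and the object names are illustrative) looks like this:

> impdp username/password TABLES=scott.emp DIRECTORY=dpump_dir1 NETWORK_LINK=source_db REMAP_SCHEMA=scott:jim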

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, and which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.



RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=17 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=1 stamp=674240935
input archive log thread=1 sequence=15 recid=2 stamp=674240997
input archive log thread=1 sequence=16 recid=3 stamp=674242208
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=\ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 name=\ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 name=\ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 name=\ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 name=\ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 name=\ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 name=\ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 name=\ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 name=\ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 name=\ORACLE\ORADATA\TEST\TOOLS01.DBF
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=17 recid=4 stamp=674242270
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE
Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
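For illustration only (these Windows-style paths are assumptions, not the actual paths used in this article), the two parameters might look like:

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'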

3. Start the listener using the lsnrctl command, and then start the clone DB in NOMOUNT mode using the pfile:

SQL> conn / as sysdba
Connected to an idle instance.
SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount
ORACLE instance started.

Total System Global Area  135338868 bytes
Fixed Size                   453492 bytes
Variable Size             109051904 bytes
Database Buffers           25165824 bytes
Redo Buffers                 667648 bytes

SQL> ho lsnrctl status
SQL> ho lsnrctl stop
SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'; (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running...

SQL> select name from v$database;
select name from v$database
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
ERROR at line 1:
ORA-01100: database already mounted

8. The duplicate will run for a while. When it completes, exit from RMAN and open the database using RESETLOGS:

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace (a minimal sketch follows).
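A minimal sketch, assuming a hypothetical file name and size:

SQL> create temporary tablespace temp01 tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100m autoextend on;
SQL> alter database default temporary tablespace temp01;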

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak, December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use the Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check whether a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;
- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your actual Oracle home path in place of <ORACLE_HOME>.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile, and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;
(Note: specify your actual Oracle home path in place of <ORACLE_HOME>.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all of the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory, and then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
(A hedged tnsnames.ora sketch follows.)
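For illustration only, the resulting tnsnames.ora entries on each server might look like the sketch below; the host names and port are assumptions, not values from this article:

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )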

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up (nomount) the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your actual Oracle home path in place of <ORACLE_HOME>.)

12. Start Redo Apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.
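For example, a minimal sketch of recreating the STANDBY password file after a SYS password change (the password value is a placeholder, and the Windows path convention from section II is assumed):

$ cd %ORACLE_HOME%\database
$ orapwd file=pwdSTANDBY.ora password=xxxxxxxx force=y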


Importing the SH schema through datapump

impdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the REMAP_SCHEMA option.
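A minimal sketch, assuming you want to load SH's objects into a hypothetical target schema SH_COPY:

impdp 'username/password' dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh remap_schema=sh:sh_copy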

Check for the imported objects and compile the invalid objects


JOB SCHEDULING

Filed under: JOB SCHEDULING by Deepak, December 15, 2009

CRON JOB SCHEDULING - IN UNIX

Cron is used to run system jobs on a daily/weekly/monthly basis, and to allow users to set up their own schedules.

The system schedules are setup when the package is installed via the creation of some special directories

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests.

For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day.

The time that the scripts in those system-wide directories run is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file you run the following command

crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand Each line is a collection of six fields separated by spaces

The fields are

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sunday, or use the name)
6. The command to run

More graphically they would look like this

*     *     *     *     *     Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0-7)
|     |     |     +----------- Month (1-12)
|     |     +----------------- Day of month (1-31)
|     +----------------------- Hour (0-23)
+----------------------------- Min (0-59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary) When you save the file and quit your editor it will be installed into the system unless it is found to contain errors

If you wish to change the editor used to edit the file set the EDITOR environmental variable like this

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected, as follows:

0 * * * * /bin/ls > /dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples.

Run the `something` command every hour, on the hour:
0 * * * * /sbin/something

Run the `nightly` command at ten minutes past midnight, every day:
10 0 * * * /bin/nightly

Run the `monday` command every Monday at 2 AM:
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched. If you use the following line you'll run a command at 1AM, 2AM, 3AM and 4AM:

Use a range of hours, matching 1, 2, 3 and 4AM:
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

Use a set of hours, matching 1, 2, 3 and 4AM:
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp\coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a Scheduled Task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule them, the job won't run, so edit the scheduled tasks and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I Before Switchover

1 As I always recommend test the Switchover first on your testing systems before working on Production

2 Verify the primary database instance is open and the standby database instance is mounted

3 Verify there are no active users connected to the databases

4. Make sure the last redo data transmitted from the PRIMARY database was applied on the STANDBY database. Issue the following command on the PRIMARY and STANDBY databases to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a log switch (ALTER SYSTEM SWITCH LOGFILE) on the PRIMARY if necessary.

In order to apply redo data to the standby database as soon as it is received use Real-time apply

II Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can simply open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN has now transitioned to the primary database role.

5. On the new primary database STAN, perform a log switch to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
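To confirm the new roles after the switchover, a commonly used check (not part of the original write-up, but standard v$database columns) is:

SQL> select name, database_role, switchover_status from v$database;

Run it on both instances: STAN should now report PRIMARY, and PRIM should report PHYSICAL STANDBY.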


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak, December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature, which enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature therefore remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump Import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2, the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key; Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
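For reference, a hedged sketch of opening the wallet on the target database (10g release 2 syntax; the wallet password is a placeholder):

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";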

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume in the following example that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will then be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.
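If you go that route, a hedged sqlnet.ora sketch is shown below; the parameter names are the standard Advanced Security network encryption settings, but the chosen values are illustrative assumptions:

SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.ENCRYPTION_CLIENT = required
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)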

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

- A full metadata-only import
- A schema-mode import in which the referenced schemas do not include tables with encrypted columns
- A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data.

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Data Pump 10g release 2 only TDE encrypted columns could be encrypted, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, refer to:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account:

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES, and data:

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file (a hedged example follows).
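As a hedged example (the schema and file names are assumptions, not from the original text), a schema export that keeps out grants and statistics could be written as:

> expdp username/password SCHEMAS=scott DIRECTORY=dpump_dir1 DUMPFILE=scott_nogrants.dmp EXCLUDE=GRANT EXCLUDE=STATISTICS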

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
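As a hedged sketch of such an interactive session (the job name hr matches the earlier PARALLEL example; STATUS, PARALLEL, STOP_JOB, START_JOB and CONTINUE_CLIENT are standard interactive-mode commands):

> expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE

> expdp username/password ATTACH=hr
Export> START_JOB
Export> CONTINUE_CLIENT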

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk This is very useful if yoursquore moving data between databases like data marts to data warehouses and disk space is not readily available Note that if you are moving large volumes of data Network mode is probably going to be slower than file mode Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance) This is useful when there is a need to export data from a standby database

Generating SQLFILES

In original Import the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script With Data Pump itrsquos a lot easier to get a workable DDL script When you run Data Pump Import and specify the SQLFILE parameter a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types not just tables and indexes Although this output file is ready for execution the DDL statements are not actually executed so the target system will not be changed

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output For example if you want to create a database that contains all the tables and indexes of the source database but that does not include the same constraints grantsand other metadata you would issue a command as follows

gtimpdp usernamepassword DIRECTORY=dpumpdir1 DUMPFILE=expfulldmp

SQLFILE=dpump_dir2expfullsql INCLUDE=TABLEINDEX

The SQL file named expfullsql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired

Comment

Clone Database using RMAN

Filed under Clone database using RMAN by Deepak mdash Leave a comment

December 10 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1Take full backup using Rman

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination coracleora92RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQLgt ho rman

Recovery Manager Release 92010 ndash Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF

input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF

input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF

input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF

input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF

input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF

input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF

input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF

input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF

input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.
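As an illustration of step 1, here is a minimal sketch (the service name, host, port and file paths are assumptions, not values from the original post):

C:\> oradim -NEW -SID clone -INTPWD sys -STARTMODE manual      (Windows: creates the instance service)
C:\> orapwd file=C:\oracle\ora92\database\PWDclone.ora password=sys

-- tnsnames.ora entry for the clone
CLONE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = clone))
  )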

2. Edit the pfile and add the following parameters:

Db_file_name_convert='target db oradata path','clone db oradata path'

Log_file_name_convert='target db oradata path','clone db oradata path'
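A minimal sketch with concrete values (the clone path C:\oracle\oradata\clone is an assumption for illustration; the test path matches the backup log above):

db_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')
log_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')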

3. Start the listener using the lsnrctl command, then start the clone db in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQLgt ho rman

RMAN> connect target sys/sys@test

connected to target database TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running…

SQLgt select name from v$database

select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQLgt ho rman

SQLgt alter database mount

alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while. Then exit from RMAN and open the database using RESETLOGS:

SQLgt alter database open resetlogs

Database altered

9. Check the DBID.

10. Create a temporary tablespace (see the sketch after the verification queries below).

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550
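A minimal sketch of step 10, creating a temporary tablespace for the clone (the file name, path and sizes are assumptions for illustration):

SQL> CREATE TEMPORARY TABLESPACE temp01
     TEMPFILE 'C:\oracle\oradata\clone\temp01.dbf' SIZE 100M
     AUTOEXTEND ON NEXT 10M MAXSIZE 1G;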

Comment

step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak — Leave a comment December 9 2009

Oracle 10g – Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use the Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database: SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist. 1) To check whether a password file already exists, run the following command: SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one. On Windows: $ cd %ORACLE_HOME%\database  $ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y (Note: replace xxxxxxxx with the password for the SYS user.)

On UNIX: $ cd $ORACLE_HOME/dbs  $ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y (Note: replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log. 1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files: SQL> select bytes from v$log;

BYTES
52428800
52428800
52428800

2) Use the following command to determine your current log file groups: SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, so I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query: SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it: SQL> shutdown immediate; SQL> startup mount; SQL> alter database archivelog; SQL> alter database open; SQL> archive log list;

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database. On Windows: SQL> create pfile='…\database\pfilePRIMARY.ora' from spfile; (Note: specify your Oracle home path to replace '…'.)

On UNIX: SQL> create pfile='…/dbs/pfilePRIMARY.ora' from spfile; (Note: specify your Oracle home path to replace '…'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE. On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='…\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='…\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;
(Note: specify your Oracle home path to replace '…'.)

On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='…/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='…/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;
(Note: specify your Oracle home path to replace '…'.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server. On the PRIMARY DB: SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down): 1) Create a directory for the data files, for example on Windows, E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows, E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use: SQL> startup mount; SQL> alter database create standby controlfile as 'STANDBY.ctl'; SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for dump and archived log destinations: create the adump, bdump, cdump, udump and archived log destination directories for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory, then rename the password file.

7. For Windows, create a Windows-based service (optional): $ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener: $ lsnrctl stop  $ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener: $ lsnrctl stop  $ lsnrctl start

9. Create Oracle Net service names. 1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services: $ tnsping PRIMARY  $ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services: $ tnsping PRIMARY  $ tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID

11. Start up (nomount) the STANDBY database and generate an spfile. On Windows:
SQL> startup nomount pfile='…\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='…\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

On UNIX:
SQL> startup nomount pfile='…/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='…/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your Oracle home path to replace '…'.)

12. Start Redo Apply. 1) On the STANDBY database, to start redo apply: SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services: SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly. 1) On STANDBY, perform a query: SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch: SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied: SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply: SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week: $ rman target STANDBY  RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month: RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.
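For reference, a minimal sketch of recreating the STANDBY password file after a SYS password change (the file name follows the convention used in section II.2; the password value is a placeholder):

On Windows: $ cd %ORACLE_HOME%\database
            $ orapwd file=pwdSTANDBY.ora password=new_sys_password force=y
On UNIX:    $ cd $ORACLE_HOME/dbs
            $ orapwd file=pwdSTANDBY.ora password=new_sys_password force=y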


crontab -l

root can view any user's crontab file by adding "-u username", for example:

crontab -u skx -l      # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24-hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this

*  *  *  *  *  command to be executed
-  -  -  -  -
|  |  |  |  |
|  |  |  |  +----- Day of week (0-7)
|  |  |  +------- Month (1 - 12)
|  |  +--------- Day of month (1 - 31)
|  +----------- Hour (0 - 23)
+------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as '*' characters to signify that any value is acceptable.)

Now that we've seen the structure, we should try to run a couple of examples.

To edit your crontab file, run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary) When you save the file and quit your editor it will be installed into the system unless it is found to contain errors

If you wish to change the editor used to edit the file set the EDITOR environmental variable like this

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following

0 * * * * /bin/ls

When yoursquove saved the file and quit your editor you will see a message such as

crontab installing new crontab

You can verify that the file contains what you expect with

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email. If you wish to stop this, you should cause the output to be redirected, as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null – meaning you won't see it.

Now wersquoll finish with some more examples

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight, every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched; if you use the following line you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly
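Putting the pieces together, a sketch of a typical DBA entry (the script path is an assumption for illustration) that runs a backup script at 23:30 every night and discards its output:

30 23 * * * /home/oracle/scripts/rman_backup.sh >/dev/null 2>&1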

JOB SCHEDULING IN WINDOWS

Cold backup – scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp\coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a Scheduled Task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run. So edit the scheduled task and enter the new password.
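If you prefer the command line over the wizard, the same schedule can also be created with the built-in schtasks utility; a minimal sketch (the task name, script path, run time and run-as account are assumptions):

schtasks /create /tn "cold_bkp" /tr "D:\scripts\cold_bkp.bat" /sc daily /st 02:00 /ru Administrator /rp password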

Comment

Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak — 1 Comment December 15 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I Before Switchover

1 As I always recommend test the Switchover first on your testing systems before working on Production

2 Verify the primary database instance is open and the standby database instance is mounted

3 Verify there are no active users connected to the databases

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary and standby databases to find out: SQL> select sequence#, applied from v$archived_log; Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received use Real-time apply

II Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby db STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM: SQL> ALTER SYSTEM SWITCH LOGFILE;
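As a quick sanity check (not part of the original write-up), you can confirm the new roles on each instance before letting users back in:

SQL> select name, database_role, open_mode from v$database;
-- Expected: STAN reports PRIMARY / READ WRITE, PRIM reports PHYSICAL STANDBY / MOUNTED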

Comment

Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak — Leave a comment December 14 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature, which enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature therefore remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump Import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU-intensive operations. Furthermore, additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2, the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQLgt ALTER SYSTEM SET WALLET CLOSE

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 – Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production

With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value

ORA-39180: unable to encrypt ENCRYPTION_PASSWORD

ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 – Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production

With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp

Estimate in progress using BLOCKS method...

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 16 KB

Processing object type TABLE_EXPORT/TABLE/TABLE

. . exported "DP"."EMP" 6.25 KB 3 rows

ORA-39173: Encrypted data has been stored unencrypted in dump file set

Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded

Dump file set for DP.SYS_EXPORT_TABLE_01 is:

/ade/jkaloger_lx9/oracle/work/emp.dmp

Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp

TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production

With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted

ORA-29341: The transportable set is not self-contained

Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
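For example, a minimal sketch of opening the wallet on the target and then running the import (the wallet password and import command reuse the illustrative values from earlier in this paper):

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd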

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp

TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 – Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 – Production

With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted

ORA-39187: The transportable set is not self-contained, violation list is

ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported

Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote

TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 – Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production

With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments

ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – Production

With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded

Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append

Processing object type TABLE_EXPORT/TABLE/TABLE

ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:

ORA-02354: error in exporting/importing data

ORA-26033: column "EMP".SALARY encryption properties differ for source or target table

Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE … ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
  empid,
  empname,
  salary ENCRYPT IDENTIFIED BY "column_pwd")
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dpump_dir
  LOCATION ('xemp.dmp')
)
REJECT LIMIT UNLIMITED
AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE … ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

SQL> SELECT * FROM DP.XEMP;

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

ENCRYPTION=ENCRYPTED_COLUMNS_ONLY

Comment

DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak — Leave a comment December 14 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission on a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX

DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1

DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr

DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6

DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
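For example, a minimal sketch of attaching to a running export job and adjusting it from the interactive prompt (the job name hr matches the JOB_NAME used in the parallelism example above; the other values are illustrative):

> expdp username/password ATTACH=hr

Export> STATUS
Export> PARALLEL=2
Export> STOP_JOB=IMMEDIATE

(later, reattach and resume)
> expdp username/password ATTACH=hr
Export> START_JOB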

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, such as from data marts to a data warehouse, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, and you could then edit it to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp

SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.
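Once generated, the DDL script can be reviewed, edited, and then run in the target database; a minimal sketch (the operating-system path is an assumption standing in for wherever dpump_dir2 points):

$ sqlplus username/password

SQL> @/u01/app/oracle/dpump_dir2/expfull.sql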

Comment

Clone Database using RMAN

Filed under Clone database using RMAN by Deepak mdash Leave a comment

December 10 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1Take full backup using Rman

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination coracleora92RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQLgt ho rman

Recovery Manager Release 92010 ndash Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=17 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=1 stamp=674240935
input archive log thread=1 sequence=15 recid=2 stamp=674240997
input archive log thread=1 sequence=16 recid=3 stamp=674242208
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=17 recid=4 stamp=674242270
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE
Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters (an illustrative sketch with concrete paths follows):

Db_file_name_convert='target db oradata path','clone db oradata path'

Log_file_name_convert='target db oradata path','clone db oradata path'
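
A minimal sketch of those pfile entries, assuming the target datafiles live under C:\oracle\oradata\test and the clone's under C:\oracle\oradata\clone (both paths are illustrative only):

# initclone.ora (excerpt) - adjust the paths to your own layout
db_name=clone
db_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')
log_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')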

3. Start the listener using the lsnrctl command, and then start the clone db in nomount using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started.

Total System Global Area  135338868 bytes
Fixed Size                   453492 bytes
Variable Size             109051904 bytes
Database Buffers           25165824 bytes
Redo Buffers                 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running...

SQL> select name from v$database;

select name from v$database
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;

alter database mount
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; exit from RMAN and open the database using resetlogs.

SQL> alter database open resetlogs;

Database altered.

9. Check the dbid.

10. Create a temporary tablespace (a sketch is shown after the queries below).

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550
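
For step 10, a minimal sketch of adding a temporary tablespace to the clone (the file name, path and size are illustrative):

SQL> CREATE TEMPORARY TABLESPACE temp01
  2  TEMPFILE 'C:\oracle\oradata\clone\temp01.dbf' SIZE 100M AUTOEXTEND ON;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp01;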

Comment

step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak - Leave a comment December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use the Flash Recovery Area and OMF.

I Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server, and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases, and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first, before working on the Production database.

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd ORACLE_HOME\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
 52428800
 52428800
 52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups.
My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on the PRIMARY.
If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters.
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database.
- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace <ORACLE_HOME>.)

- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace <ORACLE_HOME>.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile.
Data Guard must use an SPFILE. Create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace <ORACLE_HOME>.)

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace <ORACLE_HOME>.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over

2. Create a control file for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for dump and archived log destinations: create the adump, bdump, cdump and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY control file destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory, and then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: specify your Oracle home path to replace <ORACLE_HOME>.)

12. Start Redo Apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On the STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On the PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On the STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;
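
A further check that is often useful here (a sketch; run it on the STANDBY) is to confirm that the managed recovery process (MRP) is running and see which log sequence it is working on:

SQL> select process, status, sequence# from v$managed_standby;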

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on the PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management.
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II, step 2 to update/recreate the password file for the STANDBY database.


Now enter the following

0 * * * * /bin/ls

When you've saved the file and quit your editor, you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with:

crontab -l

Here we've told the cron system to execute the command "/bin/ls" every time the minute equals 0, i.e. we're running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this, then you should cause it to be redirected as follows:

0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it.

Now we'll finish with some more examples:

# Run the `something` command every hour on the hour
0 * * * * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly

# Run the `monday` command every Monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched; if you use the following line you'll run a command at 1AM, 2AM, 3AM and 4AM:

# Use a range of hours, matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours, matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly
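
Tying this back to database work, a cron entry like the one below (a sketch; the script path is hypothetical) could drive a nightly RMAN or Data Pump backup script:

# run the nightly backup script at 01:30 every day
30 1 * * * /home/oracle/scripts/nightly_backup.sh >/tmp/nightly_backup.log 2>&1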

JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.

Note

Whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule it, the job won't run. So edit the scheduled task and enter the new password.

Comment

Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak - 1 Comment December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I Before Switchover

1. As I always recommend, test the switchover first on your testing systems before working on Production.

2. Verify that the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following command on both the Primary and the Standby database to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply.

II Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby db STAN to the primary role.
Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
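
To confirm the new roles once the switchover completes, a quick verification sketch (run it on each database) is:

SQL> SELECT NAME, DATABASE_ROLE, SWITCHOVER_STATUS FROM V$DATABASE;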

Comment

Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak - Leave a comment December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set.

To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir
dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"                               6.25 KB       3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********
directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error
at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
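
As a sketch, reusing the wallet password from the earlier example, the wallet can be opened on the target database before running the import:

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";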

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp
TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release
11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********
directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which
are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error
at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote
TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 -
Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir
dumpfile=emp.dmp tables=emp encryption_password=********
table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing
table but all dependent metadata will be skipped due to
table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being
skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP".SALARY encryption properties differ for source or
target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified. This is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition, on both the source and target database, in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
ENCRYPTION=ENCRYPTED_COLUMNS_ONLY

Comment

DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak - Leave a comment December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.
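
To see which directory objects already exist and where they point, a quick sketch of a query a privileged user can run:

SQL> SELECT directory_name, directory_path FROM dba_directories;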

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export.

Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory and I/O.
• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.
• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.
• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.
• Add more dump files if there is insufficient disk space for an export file.
• Change the default size of the dump files.
• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).
• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.
• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).
• Attach to a job from a remote site (such as from home) to monitor status.
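
A sketch of what this looks like in practice, reusing the job name hr from the PARALLEL example above (the exact values are illustrative):

> expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE

> expdp username/password ATTACH=hr
Export> START_JOB
Export> CONTINUE_CLIENT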

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
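
A sketch of a network-mode import (the database link name remote_db is hypothetical; note the ENCRYPTION_PASSWORD restriction over network links discussed in the encryption article above):

> impdp username/password TABLES=scott.emp DIRECTORY=dpump_dir1 NETWORK_LINK=remote_db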

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp

SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.

Comment

Clone Database using RMAN

Filed under Clone database using RMAN by Deepak mdash Leave a comment

December 10 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1Take full backup using Rman

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination coracleora92RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQLgt ho rman

Recovery Manager Release 92010 ndash Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1create servicepassword fileand put entries in tnsnamesora and lsnrctlora files Create all the folders neeeded for a database

2edit the pfile and add following commands

Db_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

Log_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

3startup the listner using lsnrctl cmd and then startup the clone db in nomount using pfile

SQLgt conn as sysdba

Connected to an idle instance

SQLgt startup pfile=rsquoCoracleadminclonepfileinitcloneorarsquo nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running...

SQL> select name from v$database;
select name from v$database
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs.

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace (a sketch follows the queries below).

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

      DBID
----------
1972233550
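A minimal sketch for step 10. The tablespace name, file path and size are illustrative assumptions:

SQL> create temporary tablespace temp1 tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100M autoextend on;
SQL> alter database default temporary tablespace temp1;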


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak, December 9, 2009

Oracle 10g - Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use a Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server, and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first before working on the Production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one.
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database.
- On Windows:
SQL> create pfile='\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path in front of '\database'.)

- On UNIX:
SQL> create pfile='/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path in front of '/dbs'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile, and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE. Create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path in front of '\database'.)

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path in front of '/dbs'.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows, E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over

3) Create directories (multiplexing) for the online logs, for example on Windows, E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over

2. Create a control file for the STANDBY database. On PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> alter database open;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY control file destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
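If you prefer to edit the files directly instead of using Net Manager, a minimal tnsnames.ora sketch might look like the following; the host names and port are illustrative assumptions:

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )

The same entries go in the tnsnames.ora on both servers, so that $ tnsping PRIMARY and $ tnsping STANDBY resolve from either side.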

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: specify your Oracle home path in front of '\database' or '/dbs'.)

12. Start Redo Apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.
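A minimal RMAN sketch of such a weekly backup on PRIMARY (the connection and schedule are assumptions, not from the original):

$ rman target /
RMAN> backup database plus archivelog delete input;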

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2) to update/recreate the password file for the STANDBY database.


JOB SCHEDULING IN WINDOWS

Cold backup - scheduling in a Windows environment

Create a batch file as cold_bkp.bat:

echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a Scheduled Task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.
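On Windows versions that ship the schtasks utility, the same schedule can be created from the command line instead of the wizard; a sketch, where the task name, script path, time and account are illustrative assumptions:

C:\> schtasks /create /tn "cold_bkp" /tr "D:\scripts\cold_bkp.bat" /sc daily /st 22:00 /ru <os_user> /rp <password>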

Note

Whenever the OS user name and password are changed, reschedule the scheduled task. If you don't reschedule it, the job won't run, so edit the scheduled task and enter the new password.


Steps to switchover standby to primary

Filed under: Switchover primary to standby in 10g by Deepak, December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary =PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your testing systems before working on Production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following command on the Primary and Standby databases to find out:
SQL> select sequence#, applied from v$archived_log;
Perform SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use Real-time apply.
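A common pre-switchover sanity check is to query V$DATABASE.SWITCHOVER_STATUS on both sides; a minimal sketch (the expected values depend on the current state):

-- On PRIM (expect TO STANDBY or SESSIONS ACTIVE)
SQL> select switchover_status from v$database;

-- On STAN
SQL> select switchover_status from v$database;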

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby DB STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak, December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature that was released with Oracle Database 11g release 1, and that provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature therefore remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows

1 Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database

2 As part of the SELECT operation TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows

1 Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key

2 Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column

3 As part of the INSERT operation TDE automatically encrypts the column data using the column encryption key and then writes it to the database

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job Although the data being processed is stored in memory buffers encryption and decryption are typically CPU intensive operations Furthermore additional disk IO is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key; Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP" 6.25 KB 3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
/ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
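A minimal sketch of opening the wallet on the target before running the import; the wallet password here is an illustrative assumption:

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";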

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.
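A minimal sqlnet.ora sketch for Oracle Advanced Security network encryption; the values are illustrative and would be set on the server and client sides as appropriate:

SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_CLIENT = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES128)
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES128)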

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import

• A schema-mode import in which the referenced schemas do not include tables with encrypted columns

• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
  empid,
  empname,
  salary ENCRYPT IDENTIFIED BY "column_pwd")
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dpump_dir
  LOCATION ('xemp.dmp')
)
REJECT LIMIT UNLIMITED
AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

SQL> SELECT * FROM DP.XEMP;

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
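For example, a schema export that keeps out indexes and statistics might look like this (a sketch; the schema and file names are illustrative):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott_noidx.dmp SCHEMAS=scott EXCLUDE=INDEX,STATISTICS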

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from the one it was originally exported from. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

See the status of the job. All of the information needed to monitor the job's execution is available.

Add more dump files if there is insufficient disk space for an export file. Change the default size of the dump files.

Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

Attach to a job from a remote site (such as from home) to monitor status.
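A sketch of re-attaching to a named job and using the interactive commands; the job name here is an illustrative assumption:

> expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE

Later, attach again and restart the job:

> expdp username/password ATTACH=hr
Export> START_JOB
Export> CONTINUE_CLIENT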

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
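For example, a schema-mode export pulled over a database link might look like this (a sketch; the link and schema names are illustrative):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_remote.dmp NETWORK_LINK=source_db SCHEMAS=hr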

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.



CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1create servicepassword fileand put entries in tnsnamesora and lsnrctlora files Create all the folders neeeded for a database

2edit the pfile and add following commands

Db_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

Log_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

3startup the listner using lsnrctl cmd and then startup the clone db in nomount using pfile

SQLgt conn as sysdba

Connected to an idle instance

SQLgt startup pfile=rsquoCoracleadminclonepfileinitcloneorarsquo nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4connect rman

5rmangtconnect target syssystest(TARGET DB)

6 rmangtconnect auxiliary syssys

7 rmangtduplicate target database to lsquoclonersquo(CLONE DBNAME)

SQLgt ho rman

RMANgt connect target syssystest

connected to target database TEST (DBID=1972233550)

RMANgt connect auxiliary syssys

connected to auxiliary database CLONE (not mounted)

RMANgt duplicate target database to lsquoclonersquo

Scripts will be runninghellip

SQLgt select name from v$database

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQLgt ho rman

SQLgt alter database mount

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8it will run for a while and exit from rman and open the database using reset logs

SQLgt alter database open resetlogs

Database altered

9 check for dbid

10create temporary tablespace

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

Comment

step by step standby database configuration in 10g

Filed under Dataguard - creation of standby database in 10g by Deepak mdash Leave a comment December 9 2009

Oracle 10g ndash Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX serversand maintenance tips on the databases in a Data Guard Environment

Oracle 10g Data Guard is a great tool to ensure high availability data protection and disaster recovery for enterprise data I have been working on Data GuardSTANDBY databases using both Grid control and SQL command line for a couple of years and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago I maintain it daily and it works well I would like to share my experience with the other DBAs

In this example the database version is 10203 The PRIMARY database and STANDBY database are located on different machines at different sites The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY I use Flash Recovery Area and OMF

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, so I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='\database\pfilePRIMARY.ora' from spfile;
(Note: prefix the path with your Oracle home.)
- On UNIX:
SQL> create pfile='/dbs/pfilePRIMARY.ora' from spfile;
(Note: prefix the path with your Oracle home.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE. Create the SPFILE and restart the database:
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: prefix the paths with your Oracle home.)

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: prefix the paths with your Oracle home.)

III. On the STANDBY Database Side

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for dump and archived log destinations: create the adump, bdump, cdump, udump and archived log destination directories for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY server to the STANDBY control file destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.
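For example (the paths are assumptions; adjust them to your installation):

- On Windows:
C:\> set ORACLE_HOME=E:\oracle\product\10.2.0\db_1
C:\> set ORACLE_SID=STANDBY

- On UNIX:
$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STANDBY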

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: prefix the paths with your Oracle home.)

12. Start redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2), to update/recreate the password file for the STANDBY database.


Steps to switchover standby to primary

Primary = PRIM

Standby = STAN

I. Before Switchover

1. As I always recommend, test the switchover first on your test systems before working on production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary database and the standby database to find out:
SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply (see the sketch below).
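The real-time apply command is the same one used in the standby creation article above; a minimal sketch:

SQL> alter database recover managed standby database using current logfile disconnect;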

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect sys@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect sys@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN has now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
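To confirm that the roles have actually switched, a quick check on each instance (a minimal sketch; v$database reports the current role):

SQL> select name, database_role, open_mode from v$database;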


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump by Deepak, December 14, 2009

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution for meeting regulatory compliance, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature, which enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows:

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows:

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU-intensive operations. Furthermore, additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key; Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
/ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
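Opening the wallet on the target database before the import would look roughly like this (a sketch; the wallet password is site-specific):

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";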

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows.

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example.

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password= table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.
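As a rough sketch, network data encryption is typically enabled through sqlnet.ora parameters such as the following (the values shown are assumptions; see the Oracle Advanced Security documentation for the supported algorithms):

SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.ENCRYPTION_CLIENT = required
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)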

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import

• A schema-mode import in which the referenced schemas do not include tables with encrypted columns

• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition, on both the source and target database, in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import.

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.
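To see which directory objects already exist (including the default data_pump_dir), a simple dictionary query can be used; a minimal sketch:

SQL> select directory_name, directory_path from dba_directories;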

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export.

Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (see the sketch after this list):

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
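A rough sketch of re-attaching to a running export job and using a few interactive commands (the job name hr matches the earlier PARALLEL example; adapt it to your own job):

> expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE

(later, re-attach and restart the stopped job)
> expdp username/password ATTACH=hr
Export> START_JOB
Export> CONTINUE_CLIENT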

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, such as from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
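A network-mode export over a database link might be invoked roughly as follows (the link name remote_db is an assumption; it must already exist and point to the source instance):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=remote_full.dmp FULL=y NETWORK_LINK=remote_db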

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.




To delete the archivelog backup files on the STANDBY server I run the following once a monthRMANgtdelete backupset

3 Password managementThe password for the SYS user must be identical on every system for the redo data transmission to succeed If you change the password for SYS on PRIMARY database you will have to update the password file for STANDBY database accordingly otherwise the logs wonrsquot be shipped to the STANDBY server

Refer to section II2 step 2 to updaterecreate password file for the STANDBY Sdatabase


STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;


Encryption with Oracle Data Pump

Filed under: Encryption with Oracle Datapump, by Deepak. December 14, 2009.

Encryption with Oracle Data Pump

- from Oracle White paper

Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature, which was released with Oracle Database 11g release 1 and which provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature thus remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file, while using the SQL engine to perform the data transfer.

The steps involved in exporting a table with encrypted columns are as follows:

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set.

To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows:

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU-intensive operations. Furthermore, additional disk I/O is incurred due to the space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2 the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");
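As a quick illustration of how transparent the column encryption is to applications (the row values here are made up for this sketch, they are not from the whitepaper), inserting and querying the table needs no special syntax; the SALARY value is stored encrypted but is returned in clear text to an authorized session:

SQL> INSERT INTO DP.EMP VALUES (100, 'Scott', 5000);
SQL> COMMIT;
SQL> SELECT empid, empname, salary FROM DP.EMP;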

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued:

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Although the ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "DP"."SYS_EXPORT_TABLE_01": dp directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"  6.25 KB  3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
/ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an ORA-28365 "wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
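For example, the wallet on the target database can be opened before running the import with a statement along these lines (the wallet password is just the placeholder used earlier in this paper):

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";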

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.
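For instance, dropping the parameter from the earlier network-mode command (reusing the same dp schema and the database link named remote from that example) lets the job be accepted:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND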

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL, specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified. This is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).


Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
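For comparison, encrypting the entire dump file set in 11g (rather than only the TDE columns) is requested with the ENCRYPTION=ALL keyword. A minimal sketch, reusing the same directory, table and password names from the examples above:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ALL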


DATAPUMP

Filed under: DATAPUMP, Oracle 10g, by Deepak. December 14, 2009.

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
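As a simple illustration (the schema and dump file names below are made up for this sketch), a schema export that filters out indexes and statistics could look like:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott_noidx.dmp SCHEMAS=scott EXCLUDE=INDEX EXCLUDE=STATISTICS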

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
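A brief sketch of what this looks like in practice (the job name hr comes from the parallel export example above; the new PARALLEL value is arbitrary) - reattach to the running job, check it, and change its degree of parallelism:

> expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> CONTINUE_CLIENT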

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
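A minimal sketch of a network-mode import, assuming a database link named source_db (hypothetical) pointing at the remote source database:

> impdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=source_db SCHEMAS=hr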

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN, by Deepak. December 10, 2009.

Clone database using Rman

Target db: test
Clone db: clone

In target database:

1. Take full backup using RMAN

SQL> archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=17 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=1 stamp=674240935
input archive log thread=1 sequence=15 recid=2 stamp=674240997
input archive log thread=1 sequence=16 recid=3 stamp=674242208
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=17 recid=4 stamp=674242270
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE
Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In clone database:

1. Create the service and password file, put entries in the tnsnames.ora and listener.ora files, and create all the folders needed for a database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'
log_file_name_convert='<target db oradata path>','<clone db oradata path>'
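For instance, with directory locations like those used in this walkthrough (the exact paths below are illustrative, not taken from the original post), the two parameters might read:

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'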

3. Start the listener using the lsnrctl command and then start up the clone DB in NOMOUNT using the pfile.

SQL> conn / as sysdba
Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started.

Total System Global Area  135338868 bytes
Fixed Size                   453492 bytes
Variable Size             109051904 bytes
Database Buffers           25165824 bytes
Redo Buffers                 667648 bytes

SQL> ho lsnrctl status
SQL> ho lsnrctl stop
SQL> ho lsnrctl start

4. Connect RMAN:

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running...

SQL> select name from v$database;
select name from v$database
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using RESETLOGS:

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace.
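A minimal sketch of step 10 (the tempfile path and size below are illustrative and not from the original post):

SQL> CREATE TEMPORARY TABLESPACE temp1 TEMPFILE 'C:\oracle\oradata\clone\temp01.dbf' SIZE 100M AUTOEXTEND ON;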

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g, by Deepak. December 9, 2009.

Oracle 10g - Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3. Test the STANDBY Database creation on a test environment first before working on the Production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd ORACLE_HOME\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups.
My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable Archiving on PRIMARY.
If your PRIMARY database is not already in Archive Log mode, enable the archive log mode:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY Database Initialization Parameters.
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='\database\pfilePRIMARY.ora' from spfile;
(Note: prefix the path with your Oracle home.)

- On UNIX:
SQL> create pfile='/dbs/pfilePRIMARY.ora' from spfile;
(Note: prefix the path with your Oracle home.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO

# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location:
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location:
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use SPFILE.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup
(Note: prefix the paths with your Oracle home.)

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup
(Note: prefix the paths with your Oracle home.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX create the directories accordingly.

4) Copy the online logs over.

2. Create a Control File for the STANDBY database.
On PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'

# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location:
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'

# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location:
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'

STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for dump and archived log destinations.
Create directories adump, bdump, cdump, udump and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora.
On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
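For reference, a typical tnsnames.ora entry for each service name might look like the following (host names and port are placeholders, not taken from the original post):

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )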

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.
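For example (Windows first, then UNIX; the home paths are placeholders for your actual Oracle home):

C:\> set ORACLE_HOME=E:\oracle\product\10.2.0\db_1
C:\> set ORACLE_SID=STANDBY

$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STANDBY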

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup mount
(Note: prefix the paths with your Oracle home.)

12. Start Redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.
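In addition to the alert logs, a couple of standard views are handy for a quick health check (a sketch; these queries are not part of the original write-up). On the STANDBY:

SQL> select process, status, sequence# from v$managed_standby;
SQL> select * from v$archive_gap;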

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management.
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


format Column data encrypted using TDE remains protected while it resides in the database However the protection offered by TDE does not extend beyond the database and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it using a dump file encryption key derived from a userprovided password before it is written to the export dump file set Column data encrypted using Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set Whenever Oracle Data Pump unloads or loads tables containing encrypted columns it uses the external tables mechanism instead of the direct path mechanism The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer

The steps involved in exporting a table with encrypted columns are as follows:

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set.

To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

The steps involved in importing a table with encrypted columns are as follows:

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU intensive operations. Furthermore, additional disk I/O is incurred due to the space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2, the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet.

Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd");

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:48:43
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"  6.25 KB  3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and the export command is then re-issued:

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job of pinpointing the problem via the information in the ORA-39929 error:

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security network data encryption.
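If you do use Oracle Net encryption for this case, a rough sketch of the sqlnet.ora settings on both servers might look like the following (the algorithm and the REQUIRED level are assumptions; align them with your own security policy):

SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_CLIENT = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES128)
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES128)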

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns
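For instance, a metadata-only import is one such case; a minimal sketch (directory and dump file names reuse the earlier examples) needs neither the wallet nor the encryption password:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp CONTENT=METADATA_ONLY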

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak - Leave a comment, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import.

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES, and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file, as sketched below.
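As an illustrative sketch (the schema and file names are hypothetical), the export below keeps only tables and indexes in the dump file, and the later import re-uses the same dump file but filters out the indexes:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_meta.dmp SCHEMAS=hr INCLUDE=TABLE,INDEX

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_meta.dmp EXCLUDE=INDEX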

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export. Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can now take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa), as sketched below.
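A minimal sketch of such an on-the-fly change (assuming an export job started with JOB_NAME=hr, as in the example further below):

> expdp username/password ATTACH=hr
Export> PARALLEL=8
Export> CONTINUE_CLIENT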

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (see the sketch after this list):

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
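A brief sketch of interactive mode (the job name hr is assumed from the PARALLEL example above; the commands shown are standard interactive-mode commands):

> expdp username/password ATTACH=hr
Export> STATUS
Export> STOP_JOB=IMMEDIATE

After the job has been stopped, attaching again and issuing START_JOB resumes it from where it left off.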

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful when you are moving data between databases, such as from data marts to a data warehouse, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database. A short sketch follows.
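As an illustrative sketch (the database link name source_db and the connect string are hypothetical), a network-mode import pulls tables straight across the link without an intermediate dump file:

SQL> CREATE DATABASE LINK source_db CONNECT TO scott IDENTIFIED BY tiger USING 'SOURCEDB';

> impdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=source_db TABLES=scott.emp REMAP_SCHEMA=scott:jim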

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak - Leave a comment

December 10, 2009

Clone database using RMAN

Target db: test

Clone db: clone

In the target database:

1. Take a full backup using RMAN:

SQL> archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=17 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=1 stamp=674240935
input archive log thread=1 sequence=15 recid=2 stamp=674240997
input archive log thread=1 sequence=16 recid=3 stamp=674242208
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=17 recid=4 stamp=674242270
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE
Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

      DBID
----------
1972233550

In the clone database:

1. Create the service and the password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters (a sketch of such a pfile is shown below):

db_file_name_convert='<target db oradata path>','<clone db oradata path>'
log_file_name_convert='<target db oradata path>','<clone db oradata path>'
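A minimal initclone.ora sketch (the paths below are placeholders, assuming the target datafiles live under C:\oracle\oradata\test and the clone's under C:\oracle\oradata\clone; they are not values from the original post):

db_name=clone
control_files='C:\oracle\oradata\clone\control01.ctl'
db_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')
log_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')
remote_login_passwordfile=exclusive
compatible=9.2.0
undo_management=auto
undo_tablespace=undotbs1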

3. Start the listener using the lsnrctl command, and then start up the clone database in NOMOUNT using the pfile:

SQL> conn / as sysdba
Connected to an idle instance.
SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status
SQL> ho lsnrctl stop
SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running...

SQL> select name from v$database;
select name from v$database
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
ERROR at line 1:
ORA-01100: database already mounted

8. The duplicate will run for a while. Then exit from RMAN and open the clone database using resetlogs:

SQL> alter database open resetlogs;

Database altered.

9. Check the DB name and DBID:

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

      DBID
----------
1972233550

10. Create a temporary tablespace (a sketch is shown below).
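A minimal sketch for step 10 (the file name and size are placeholders, not from the original post):

SQL> ALTER TABLESPACE temp ADD TEMPFILE 'C:\oracle\oradata\clone\temp01.dbf' SIZE 100M AUTOEXTEND ON;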


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak - Leave a comment, December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection, and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use the Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases, and that the Oracle home paths are identical.

3. Test the STANDBY database creation in a test environment first before working on the production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.

1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.

1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

     BYTES
----------
  52428800
  52428800
  52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$STANDBY_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;
- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;
(Note: replace <ORACLE_HOME> with your Oracle home path.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile, and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.

- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: replace <ORACLE_HOME> with your Oracle home path.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.

On the PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):

1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create STANDBY controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump, and udump directories and the archived log destination for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.

1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services (a sample tnsnames.ora entry is sketched below):
$ tnsping PRIMARY
$ tnsping STANDBY
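For illustration only (host names and port are assumptions, not values from the original post), the tnsnames.ora entries on either server might look roughly like this:

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = STANDBY))
  )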

10. On the STANDBY server, set up the environment variables to point to the STANDBY database: set ORACLE_HOME and ORACLE_SID (see the sketch below).
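A minimal UNIX sketch (the Oracle home path is a placeholder; on Windows, set the same variables with set or via oradim):

$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STANDBY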

11. Start up nomount the STANDBY database and generate an spfile.

- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: replace <ORACLE_HOME> with your Oracle home path.)

12. Start redo apply.

1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed STANDBY database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed STANDBY database cancel;

13. Verify the STANDBY database is performing properly.

1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed STANDBY database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management: The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2 to update/recreate the password file for the STANDBY database.


parameter applies only to TDE encrypted columns Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns it is first necessary to create an Oracle Encryption Wallet which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key In the following example the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the walletNext create a table with an encrypted column The password used below in the IDENTIFIED

BY clause is optional and TDE uses it to derive the tables column encryption key If the

IDENTIFIED BY clause is omitted then TDE creates the tables column encryption key based on random data

SQLgt ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY ldquowallet_pwdrdquo

SQLgt CREATE TABLE DPEMP (empid NUMBER(6)empname VARCHAR2(100)salary NUMBER(82) ENCRYPT IDENTIFIED BY ldquocolumn_pwdrdquo

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table In the following example the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump files encryption key Oracle Data Pump re-encrypts the column data in the dump file using this dump file key When re-encrypting encrypted column data Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128)Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTERSYSTEM and CREATE TABLE statements

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error This is shown in the following example in which the Oracle Wallet is manually closed and then the export command is re-issued

Although the ENCRYPTION_PASSWORD is an optional parameter it is always prudent to export encrypted columns using a password In the event that the password is not specified Oracle Data Pump writes the encrypted column data as clear text in the dump file In such a case a warning message (ORA-39173) is displayed as shown in the following example

$ expdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQLgt ALTER SYSTEM SET WALLET CLOSE

$ expdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd

Export Release 102040 ndash Production on Monday 09 July 2009

82123

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39001 invalid argument value

ORA-39180 unable to encrypt ENCRYPTION_PASSWORD

ORA-28365 wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports as used in the previous

examples If a schema tablespace or full mode export is performed then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter

There is however one exception transportable tablespace export mode does not support

encrypted columns An attempt to perform an export using this mode when the tablespace

contains tables with encrypted columns yields the following error

$ expdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp TABLES=emp

Export Release 102040 ndash Production on Wednesday 09 July 2009

84843

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime dp directory=dpump_dir

dumpfile=emp tables=emp

Estimate in progress using BLOCKS methodhellip

Processing object type TABLE_EXPORTTABLETABLE_DATA

Total estimation using BLOCKS method 16 KB

Processing object type TABLE_EXPORTTABLETABLE

exported ldquoDPrdquordquoEMPrdquo 625 KB 3 rows

ORA-39173 Encrypted data has been stored unencrypted in dump file

set

Master table ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime successfully loadedunloaded

Dump file set for DPSYS_EXPORT_TABLE_01 is

adejkaloger_lx9oracleworkempdmp

Job ldquoDPrdquordquoSYS_EXPORT_TABLE_01Prime completed with 1 error(s) at 084857

$ expdp systempassword DIRECTORY=dpump_dir DUMPFILE=dpdmp

TRANSPORT_TABLESPACES=dp

Export Release 102040 ndash Production on Thursday 09 July 2009

85507

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime system

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123 Data Pump transportable tablespace job aborted

ORA-29341 The transportable set is not self-contained

Job ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime stopped due to fatal error

at 085525

The ORA-29341 error in the previous example is not very informative If the same transportable

tablespace export is executed using Oracle Database 11g release 1 that version does a better job

at pinpointing the problem via the information in the ORA-39929 error

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data Otherwise an 1048756ORA-28365 wallet not open1048756 error is returned Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place Of course the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export

If the encryption attributes for all columns do not exactly match between the source and target tables then an ORA-26033 exception is raised when you try to import the export dump file set In the example of the DPEMP table the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed For example assume in the following example that the DPEMP table on the target system has been created exactly as it is on the source system except that the

ENCRYPT attribute has not been assigned to the SALARY column The output and resulting error messages would look as follows

$ impdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp systempassword DIRECTORY=dpump_dir dumpfile=dpdmp

TRANSPORT_TABLESPACES=dp

Export Release 111070 ndash Production on Thursday 09 July 2009

90900

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 11g Enterprise Edition Release

111070 ndash Production

With the Partitioning Data Mining and Real Application Testing

Options Starting ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime system

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123 Data Pump transportable tablespace job aborted

ORA-39187 The transportable set is not self-contained violation list

is ORA-39929 Table DPEMP in tablespace DP has encrypted columns which

are not supported

Job ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime stopped due to fatal error

at 090921

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it

into the connected database instance There are no export dump files involved in a network

mode import and therefore there is no re-encrypting of TDE column data Thus the use of the

ENCRYPTION_PASWORD parameter is prohibited in network mode imports as shown in the

following example

$ impdp dpdp TABLES=dpemp DIRECTORY=dpump_dir NETWORK_LINK=remote

TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import Release 102040 ndash Production on Friday 09 July 2009

110057

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39005 inconsistent arguments

ORA-39115 ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import Release 102040 ndash Production on Thursday 09 July 2009

105540

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release 102040 -

Production

With the Partitioning Data Mining and Real Application Testing options

Master table ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime successfully loadedunloaded

Starting ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime dp directory=dpump_dir

dumpfile=empdmp tables=emp encryption_password=

table_exists_action=append

Processing object type TABLE_EXPORTTABLETABLE

ORA-39152 Table ldquoDPrdquordquoEMPrdquo exists Data will be appended to existing

table but all dependent metadata will be skipped due to

table_exists_action of append

Processing object type TABLE_EXPORTTABLETABLE_DATA

ORA-31693 Table data object ldquoDPrdquordquoEMPrdquo failed to loadunload and is being

skipped due to error

ORA-02354 error in exportingimporting data

ORA-26033 column ldquoEMPrdquoSALARY encryption properties differ for source or

target table

Job ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime completed with 2 error(s) at 105548

Oracle White PaperEncryption with Oracle Data Pump

By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import

• A schema-mode import in which the referenced schemas do not include tables with encrypted columns

• A table-mode import in which the referenced tables do not include encrypted columns
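As a rough illustration of the first case, a metadata-only import can be requested through the CONTENT parameter, so the encrypted column data is never touched (a sketch reusing the dump file from the earlier examples):

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp FULL=y CONTENT=METADATA_ONLY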

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data.

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified. This is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
  empid,
  empname,
  salary ENCRYPT IDENTIFIED BY "column_pwd")
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dpump_dir
  LOCATION ('xemp.dmp')
)
REJECT LIMIT UNLIMITED
AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

SQL> SELECT * FROM DP.XEMP;

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g, by Deepak, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
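For instance, the following hypothetical commands contrast the two filters: the first keeps only procedures and functions from the scott schema in the export, while the second exports the whole schema except statistics (a sketch; the dump file names are not from the original post):

> expdp username/password SCHEMAS=scott DIRECTORY=dpump_dir1 DUMPFILE=scott_code.dmp INCLUDE=PROCEDURE INCLUDE=FUNCTION

> expdp username/password SCHEMAS=scott DIRECTORY=dpump_dir1 DUMPFILE=scott_nostats.dmp EXCLUDE=STATISTICS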

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
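A minimal sketch of such a change, assuming an export job created with JOB_NAME=hr as in the example further below: attach to the running job and adjust the degree from the interactive prompt.

> expdp username/password ATTACH=hr

Export> PARALLEL=8

Export> CONTINUE_CLIENT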

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from the one it was originally exported from. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
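A hypothetical interactive session illustrating a few of these actions (the job name hr and the extra file name are assumptions):

> expdp username/password ATTACH=hr

Export> STATUS

Export> ADD_FILE=hr_extra.dmp

Export> STOP_JOB=IMMEDIATE

> expdp username/password ATTACH=hr

Export> START_JOB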

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
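A sketch of a network-mode export, assuming a database link named remote_db that points at the source instance:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=remote_full.dmp NETWORK_LINK=remote_db FULL=y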

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN, by Deepak, December 10, 2009

Clone database using RMAN

Target db: test

Clone db: clone

In the target database:

1. Take a full backup using RMAN.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            c:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 – Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all;

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=17 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=1 stamp=674240935
input archive log thread=1 sequence=15 recid=2 stamp=674240997
input archive log thread=1 sequence=16 recid=3 stamp=674242208
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=17 recid=4 stamp=674242270
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE
Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
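For the TEST-to-CLONE setup used here, those two parameters might look roughly like this (a sketch; the Windows paths are assumptions, not taken from the original post):

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'

log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'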

3. Start the listener using the lsnrctl command, and then start up the clone db in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'; (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running…

SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using RESETLOGS.

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace.
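For example, the clone's temp tablespace could be recreated along these lines (a sketch; the file name and size are assumptions):

SQL> create temporary tablespace temp1 tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100m autoextend on;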

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g, by Deepak, December 9, 2009

Oracle 10g – Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use the Flash Recovery Area and OMF.

I. Before you get started:

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases, and the Oracle home paths are identical.

3. Test the STANDBY Database creation on a test environment first before working on the Production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd ORACLE_HOME\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY Redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY Redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable Archiving on PRIMARY.
If your PRIMARY database is not already in Archive Log mode, enable the archive log mode:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY Database Initialization Parameters.
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create pfile from spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path in place of '<ORACLE_HOME>'.)

- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path in place of '<ORACLE_HOME>'.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO

Specify the location of the STANDBY DB datafiles followed by the PRIMARY location:
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location:
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create spfile from pfile, and restart the PRIMARY database using the new spfile.
Data Guard must use SPFILE. Create the SPFILE and restart the database:
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
– Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path in place of '<ORACLE_HOME>'.)

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
– Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path in place of '<ORACLE_HOME>'.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY Server.
On PRIMARY DB:
SQL> shutdown immediate

On STANDBY Server (while the PRIMARY database is shut down):
1) Create a directory for data files, for example on Windows, E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over

3) Create directories (multiplexing) for online logs, for example on Windows, E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over

2. Create a Control File for the STANDBY database.
On PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'

Specify the location of the PRIMARY DB datafiles followed by the STANDBY location:
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'

Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location:
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for dump and archived log destinations.
Create directories adump, bdump, cdump, udump, and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora.
On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.

1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
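If you prefer editing the configuration files directly instead of using Net Manager, the tnsnames.ora entries on both servers would look roughly like this (host names and port are assumptions):

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = STANDBY))
  )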

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
– Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
– Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup mount
(Note: specify your Oracle home path in place of '<ORACLE_HOME>'.)

12. Start Redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management.
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


ENCRYPT attribute has not been assigned to the SALARY column The output and resulting error messages would look as follows

$ impdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp systempassword DIRECTORY=dpump_dir dumpfile=dpdmp

TRANSPORT_TABLESPACES=dp

Export Release 111070 ndash Production on Thursday 09 July 2009

90900

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 11g Enterprise Edition Release

111070 ndash Production

With the Partitioning Data Mining and Real Application Testing

Options Starting ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime system

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123 Data Pump transportable tablespace job aborted

ORA-39187 The transportable set is not self-contained violation list

is ORA-39929 Table DPEMP in tablespace DP has encrypted columns which

are not supported

Job ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime stopped due to fatal error

at 090921

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it

into the connected database instance There are no export dump files involved in a network

mode import and therefore there is no re-encrypting of TDE column data Thus the use of the

ENCRYPTION_PASWORD parameter is prohibited in network mode imports as shown in the

following example

$ impdp dpdp TABLES=dpemp DIRECTORY=dpump_dir NETWORK_LINK=remote

TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import Release 102040 ndash Production on Friday 09 July 2009

110057

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39005 inconsistent arguments

ORA-39115 ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import Release 102040 ndash Production on Thursday 09 July 2009

105540

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release 102040 -

Production

With the Partitioning Data Mining and Real Application Testing options

Master table ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime successfully loadedunloaded

Starting ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime dp directory=dpump_dir

dumpfile=empdmp tables=emp encryption_password=

table_exists_action=append

Processing object type TABLE_EXPORTTABLETABLE

ORA-39152 Table ldquoDPrdquordquoEMPrdquo exists Data will be appended to existing

table but all dependent metadata will be skipped due to

table_exists_action of append

Processing object type TABLE_EXPORTTABLETABLE_DATA

ORA-31693 Table data object ldquoDPrdquordquoEMPrdquo failed to loadunload and is being

skipped due to error

ORA-02354 error in exportingimporting data

ORA-26033 column ldquoEMPrdquoSALARY encryption properties differ for source or

target table

Job ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime completed with 2 error(s) at 105548

Oracle White PaperEncryption with Oracle Data Pump

By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes

encrypted column data the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed The following are cases in which the encryption password and Oracle Wallet are not needed

A full metadata-only import A schema-mode import in which the referenced schemas do not include tables with

encrypted columns A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp) As is always the case when dealing with TDE columns the Oracle Wallet must first be open before creating the external table The following example creates an external table called DPXEMP and populates it using the data in the DPEMP table Notice that datatypes for the columns are not specified This is because they are determined by the column datatypes in the source table in the SELECT subquery

SQLgt CREATE TABLE DPXEMP (

empid

empname

salary ENCRYPT IDENTIFIED BY ldquocolumn_pwdrdquo)

ORGANIZATION EXTERNAL

(

TYPE ORACLE_DATAPUMP

DEFAULT DIRECTORY dpump_dir

LOCATION (rsquoxempdmprsquo)

)

REJECT LIMIT UNLIMITED

AS SELECT FROM DPEMP

The steps involved in creating an external table with encrypted columns are as follows

1 The SQL engine selects the data for the table DPEMP from the database If any columns in the table are marked as encrypted as the salary column is for DPEMP then TDE decrypts the column data as part of the select operation

2 The SQL engine then inserts the data which is in clear text format into the DPXEMP table If any columns in the external table are marked as encrypted as one of its columns is then TDE encrypts this column data as part of the insert operation

3 Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file The data in an external table can be written only once when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed However the data in the external table can be selected any number of times using a simple SQL SELECT statement The steps involved in selecting data with encrypted columns from an external table are as follows

1 The SQL engine initiates a select operation Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is called to read the data from the external export file

2 The data is passed back to the SQL engine If any columns in the external table are marked as encrypted as one of its columns is then TDE decrypts the data as part of the select operation The use of the encryption password in the IDENTIFIED BY clause is optional unless you plan to move the dump file to another database In that case the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file Encryption Parameter Change in 11g Release 1

As previously discussed in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump and the only encryption-related parameter available was ENCRYPTION_PASSW ORD So by default if the ENCRYPTION_PASSWORD is present on the command line then it applies only to TDE encrypted columns (if there are no such columns being exported then the parameter is ignored)

SQLgt SELECT FROM DPXEMP

Beginning in Oracle Database 11g release 1 the ability to encrypt the entire export dump file set is introduced and with it several new encrypted-related parameters A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump but instead means that the entire export dump file set will be encrypted To encrypt only TDE columns using Oracle Data Pump 11g it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY So the 10g example previously shown becomes the following in 11g

$ expdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

ENCRYPTION=ENCRYPTED_COLUMNS_ONLY

Comment

DATAPUMP

Filed under DATAPUMP Oracle 10g by Deepak mdash Leave a comment December 14 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

httpwwworaclecomtechnologyobeobe10gdbstoragedatapumpdatapumphtm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides regular operating system command-line mode there is now a very powerful interactive Command-line mode which allows the user to monitor and control Data Pump Export and Import operations Changing from Original ExportImport to Oracle Data Pump Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA

In the following example the following SQL statement creates a directory object named

dpump_dir1 that is mapped to a directory located at usrappsdatafiles

Create a directory

1 SQLgt CREATE DIRECTORY dpump_dir1 AS lsquousrappsdatafilesrsquo

After a directory is created you need to grant READ and WRITE permission on the directory to other users For example to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1 you must execute the following command

1 SQLgt GRANT READWRITE ON DIRECTORY dpump_dir1 TO scott

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

1 gtexpdp usernamepassword DIRECTORY=dpump_dir1 dumpfile=scottdmp

Comparison of command-line parameters from Original Export and Import to

Data Pump

Data Pump commands have a similar look and feel to the original Export and Import

commands but are different Below are a few examples that demonstrate some of these

differences

1) Example import of tables from scottrsquos account to jimrsquos account

Original Import

gt imp usernamepassword FILE=scottdmp FROMUSER=scott TOUSER=jim TABLES=()

Data Pump Import

gt impdp usernamepassword DIRECTORY=dpump_dir1 DUMPFILE=scottdmp

TABLES=scottemp REMAP_SCHEMA=scottjim

Note how the FROMUSERTOUSER syntax is replaced by the REMAP_SCHEMA option

2) Example export of an entire database to a dump file with all GRANTS

INDEXES and data

gt exp usernamepassword FULL=y FILE=dbadmp GRANTS=y INDEXES=y ROWS=y

gt expdp usernamepassword FULL=y INCLUDE=GRANT INCLUDE= INDEX

DIRECTORY=dpump_dir1 DUMPFILE=dbadmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job You cannot mix the two parameters in one job

Both parameters work with Data Pump Import as well and you can use different INCLUDE and

EXCLUDE options for different operations on the same dump file

3) Tuning Parameters

Unlike original Export and Import which used the BUFFER COMMIT COMPRESS

CONSISTENT DIRECT and RECORDLENGTH parameters Data Pump needs no tuning to achieve maximum performance Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner Initialization parameters should be sufficient upon installation

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import With original Export you had to run an older version of Export to produce a dump file that was compatible with an older database versionWith Data Pump you use the current Export version and simply use the VERSION parameter to specify the target database version You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g)

Example

gt expdp usernamepassword TABLES=hremployees VERSION=101

DIRECTORY=dpump_dir1 DUMPFILE=empdmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job which is much more efficient that the client-side execution of original Export and Import Now Data Pump operations can take advantage of the serverrsquos parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database)

The number of parallel processes can be changed on the fly using Data Pumprsquos interactive command-line mode You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa)

For best performance you should do the following

bull Make sure your system is well balanced across CPU memory and IO

bull Have at least one dump file for each degree of parallelism If there arenrsquot enough dump Files performance will not be optimal because multiple threads of execution will be trying to access the same dump file

bull Put files that are members of a dump file set on separate disks so that they will be written and read in parallel

bull For export operations use the U variable in the DUMPFILE parameter so multiple dump files can be automatically generated

Example

gt expdp usernamepassword DIRECTORY=dpump_dir1 JOB_NAME=hr

DUMPFILE=par_expudmp PARALLEL=4

REMAP

bull REMAP_TABLESPACE ndash This allows you to easily import a table into a different

tablespace from which it was originally exported The databases have to be 101 or later

Example

gt impdp usernamepassword REMAP_TABLESPACE=tbs_1tbs_6

DIRECTORY=dpumpdir1 DUMPFILE=employeesdmp

bull REMAP_DATAFILES ndash This is a very useful feature when you move databases between platforms that have different file naming conventions This parameter changes the source datafile name to the target datafile name in all SQL statements where the source

datafile is referenced Because the REMAP_DATAFILE value uses quotation marks itrsquos best to specify the parameter within a parameter file

Example

The parameter file payrollpar has the following content

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_fulldmp

REMAP_DATAFILE=rdquorsquoCDB1HRDATAPAYROLLtbs6dbfrsquorsquodb1hrdatapayrolltbs6dbf

You can then issue the following command

gt impdp usernamepassword PARFILE=payrollpar

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable A couple of prominent features are described hereInteractive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode Because Data Pump jobs run entirely on the server you can start an export or import job detach from it and later reconnect to the job to monitor its progress Here are some of the things you can do while in this mode

See the status of the job All of the information needed to monitor the jobrsquos execution is available

Add more dump files if there is insufficient disk space for an export file Change the default size of the dump files Stop the job (perhaps it is consuming too many resources) and later restart it (when more

resources become available) Restart the job If a job was stopped for any reason (system failure power outage) you

can attach to the job and then restart it

Increase or decrease the number of active worker processes for the job (Enterprise Edition only)

Attach to a job from a remote site (such as from home) to monitor status

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk This is very useful if yoursquore moving data between databases like data marts to data warehouses and disk space is not readily available Note that if you are moving large volumes of data Network mode is probably going to be slower than file mode Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance) This is useful when there is a need to export data from a standby database

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp

SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.

Comment

Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak — Leave a comment

December 10 2009

Clone database using Rman

Target db: test

Clone db: clone

In the target database:

1. Take a full backup using RMAN

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination c:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.
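A rough sketch of those prerequisites on a Windows 9i box (the paths, passwords, host name and port are assumptions, adjust them to your environment):

C:\> oradim -NEW -SID clone -INTPWD sys -STARTMODE manual
C:\> orapwd file=C:\oracle\ora92\database\PWDclone.ora password=sys

# tnsnames.ora entry for the clone
CLONE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = your_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = clone))
  )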

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
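For example, a minimal initclone.ora could look like this (the drive letter and folder names are assumptions based on the paths used elsewhere in this post, not the exact file from the original setup):

db_name=clone
control_files='C:\oracle\oradata\clone\control01.ctl'
db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
remote_login_passwordfile=EXCLUSIVE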

3. Start the listener using the lsnrctl command, and then start the clone DB in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running…

SQLgt select name from v$database

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQLgt ho rman

SQLgt alter database mount

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8. It will run for a while. Then exit from RMAN and open the database using RESETLOGS:

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace (a sketch follows the verification queries below).

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550
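For step 10, a quick sketch of creating the temporary tablespace (the file path and size are assumptions, adjust them to your layout):

SQL> create temporary tablespace temp tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100m autoextend on;
SQL> alter database default temporary tablespace temp;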

Comment

step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak — Leave a comment, December 9, 2009

Oracle 10g – Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example, the database version is 10.2.0.3. The PRIMARY database and the STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use the Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='<Oracle home path>\database\pfilePRIMARY.ora' from spfile;
- On UNIX:
SQL> create pfile='<Oracle home path>/dbs/pfilePRIMARY.ora' from spfile;
(Note: replace <Oracle home path> with your actual Oracle home path.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE. Create the SPFILE and restart the database:
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<Oracle home path>\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='<Oracle home path>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<Oracle home path>/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='<Oracle home path>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup
(Note: replace <Oracle home path> with your actual Oracle home path.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID

11. Start up (nomount) the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<Oracle home path>\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='<Oracle home path>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='<Oracle home path>/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='<Oracle home path>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: replace <Oracle home path> with your actual Oracle home path.)

12. Start Redo Apply.
1) On the STANDBY database, start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify that the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2), to update/recreate the password file for the STANDBY database.


Encryption with Oracle Data Pump

8:48:43

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir

dumpfile=emp tables=emp

Estimate in progress using BLOCKS method...

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 16 KB

Processing object type TABLE_EXPORT/TABLE/TABLE

. . exported "DP"."EMP" 6.25 KB 3 rows

ORA-39173: Encrypted data has been stored unencrypted in dump file set.

Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded

Dump file set for DP.SYS_EXPORT_TABLE_01 is:

/ade/jkaloger_lx9/oracle/work/emp.dmp

Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp

TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:55:07

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted

ORA-29341: The transportable set is not self-contained

Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error

at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job of pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an 'ORA-28365: wallet not open' error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
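For reference, opening the wallet on the target before running the import looks like this (the wallet password is a placeholder, and the wallet location is assumed to already be configured in sqlnet.ora):

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";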

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp

TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 11g Enterprise Edition Release

111070 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted

ORA-39187: The transportable set is not self-contained, violation list is

ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported

Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error

at 09:09:21

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote

TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39005 inconsistent arguments

ORA-39115 ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release 102040 -

Production

With the Partitioning Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded

Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir

dumpfile=emp.dmp tables=emp encryption_password=********

table_exists_action=append

Processing object type TABLE_EXPORT/TABLE/TABLE

ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing

table but all dependent metadata will be skipped due to

table_exists_action of append

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being

skipped due to error:

ORA-02354: error in exporting/importing data

ORA-26033: column "EMP"."SALARY" encryption properties differ for source or

target table

Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48
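One way to make the encryption attributes match before re-running the import would be to add the ENCRYPT attribute to the column on the target table. This is only a sketch (it assumes the wallet is open on the target, and it is not part of the original output above):

SQL> ALTER TABLE DP.EMP MODIFY (SALARY ENCRYPT);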


By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption
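A minimal sketch of what that network encryption setup could look like in sqlnet.ora on the server side (the algorithm choice here is just an assumption; see the Oracle Advanced Security documentation for the full set of parameters and the matching client-side settings):

SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)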

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import

• A schema-mode import in which the referenced schemas do not include tables with encrypted columns

• A table-mode import in which the referenced tables do not include encrypted columns
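For example, a metadata-only import of the earlier dump file would not need the wallet or the password (a sketch reusing the file names from this post):

> impdp username/password DIRECTORY=dpump_dir DUMPFILE=emp.dmp CONTENT=METADATA_ONLY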

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (

empid,

empname,

salary ENCRYPT IDENTIFIED BY "column_pwd")

ORGANIZATION EXTERNAL

(

TYPE ORACLE_DATAPUMP

DEFAULT DIRECTORY dpump_dir

LOCATION ('xemp.dmp')

)

REJECT LIMIT UNLIMITED

AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if the ENCRYPTION_PASSWORD parameter is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

ENCRYPTION=ENCRYPTED_COLUMNS_ONLY

Comment

DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak — Leave a comment, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see:

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides regular operating system command-line mode there is now a very powerful interactive Command-line mode which allows the user to monitor and control Data Pump Export and Import operations Changing from Original ExportImport to Oracle Data Pump Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created you need to grant READ and WRITE permission on the directory to other users For example to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1 you must execute the following command

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to

Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but they are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX

DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job You cannot mix the two parameters in one job

Both parameters work with Data Pump Import as well and you can use different INCLUDE and

EXCLUDE options for different operations on the same dump file
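For instance, a schema export that leaves out constraints, grants and statistics might look like this (the schema name is just an example, not from the original post):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_schema.dmp SCHEMAS=hr EXCLUDE=CONSTRAINT EXCLUDE=GRANT EXCLUDE=STATISTICS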

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1

DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database).

The number of parallel processes can be changed on the fly using Data Pumprsquos interactive command-line mode You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa)

For best performance, make sure your system is well balanced across CPU, memory and I/O, and follow the dump file guidelines described earlier in this document: at least one dump file for each degree of parallelism, dump files spread across separate disks, and the %U substitution variable in the DUMPFILE parameter so that multiple dump files are generated automatically.

bull Have at least one dump file for each degree of parallelism If there arenrsquot enough dump Files performance will not be optimal because multiple threads of execution will be trying to access the same dump file

bull Put files that are members of a dump file set on separate disks so that they will be written and read in parallel

bull For export operations use the U variable in the DUMPFILE parameter so multiple dump files can be automatically generated

Example

gt expdp usernamepassword DIRECTORY=dpump_dir1 JOB_NAME=hr

DUMPFILE=par_expudmp PARALLEL=4

REMAP

bull REMAP_TABLESPACE ndash This allows you to easily import a table into a different

tablespace from which it was originally exported The databases have to be 101 or later

Example

gt impdp usernamepassword REMAP_TABLESPACE=tbs_1tbs_6

DIRECTORY=dpumpdir1 DUMPFILE=employeesdmp

bull REMAP_DATAFILES ndash This is a very useful feature when you move databases between platforms that have different file naming conventions This parameter changes the source datafile name to the target datafile name in all SQL statements where the source

datafile is referenced Because the REMAP_DATAFILE value uses quotation marks itrsquos best to specify the parameter within a parameter file

Example

The parameter file payrollpar has the following content

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_fulldmp

REMAP_DATAFILE=rdquorsquoCDB1HRDATAPAYROLLtbs6dbfrsquorsquodb1hrdatapayrolltbs6dbf

You can then issue the following command

gt impdp usernamepassword PARFILE=payrollpar

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable A couple of prominent features are described hereInteractive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode Because Data Pump jobs run entirely on the server you can start an export or import job detach from it and later reconnect to the job to monitor its progress Here are some of the things you can do while in this mode

See the status of the job All of the information needed to monitor the jobrsquos execution is available

Add more dump files if there is insufficient disk space for an export file Change the default size of the dump files Stop the job (perhaps it is consuming too many resources) and later restart it (when more

resources become available) Restart the job If a job was stopped for any reason (system failure power outage) you

can attach to the job and then restart it

Increase or decrease the number of active worker processes for the job (Enterprise Edition only)

Attach to a job from a remote site (such as from home) to monitor status

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk This is very useful if yoursquore moving data between databases like data marts to data warehouses and disk space is not readily available Note that if you are moving large volumes of data Network mode is probably going to be slower than file mode Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance) This is useful when there is a need to export data from a standby database

Generating SQLFILES

In original Import the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script With Data Pump itrsquos a lot easier to get a workable DDL script When you run Data Pump Import and specify the SQLFILE parameter a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types not just tables and indexes Although this output file is ready for execution the DDL statements are not actually executed so the target system will not be changed

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output For example if you want to create a database that contains all the tables and indexes of the source database but that does not include the same constraints grantsand other metadata you would issue a command as follows

gtimpdp usernamepassword DIRECTORY=dpumpdir1 DUMPFILE=expfulldmp

SQLFILE=dpump_dir2expfullsql INCLUDE=TABLEINDEX

The SQL file named expfullsql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired

Comment

Clone Database using RMAN

Filed under Clone database using RMAN by Deepak mdash Leave a comment

December 10 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1Take full backup using Rman

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination coracleora92RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQLgt ho rman

Recovery Manager Release 92010 ndash Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1create servicepassword fileand put entries in tnsnamesora and lsnrctlora files Create all the folders neeeded for a database

2edit the pfile and add following commands

Db_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

Log_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

3startup the listner using lsnrctl cmd and then startup the clone db in nomount using pfile

SQLgt conn as sysdba

Connected to an idle instance

SQLgt startup pfile=rsquoCoracleadminclonepfileinitcloneorarsquo nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4connect rman

5rmangtconnect target syssystest(TARGET DB)

6 rmangtconnect auxiliary syssys

7 rmangtduplicate target database to lsquoclonersquo(CLONE DBNAME)

SQLgt ho rman

RMANgt connect target syssystest

connected to target database TEST (DBID=1972233550)

RMANgt connect auxiliary syssys

connected to auxiliary database CLONE (not mounted)

RMANgt duplicate target database to lsquoclonersquo

Scripts will be runninghellip

SQLgt select name from v$database

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQLgt ho rman

SQLgt alter database mount

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8it will run for a while and exit from rman and open the database using reset logs

SQLgt alter database open resetlogs

Database altered

9 check for dbid

10create temporary tablespace

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

Comment

step by step standby database configuration in 10g

Filed under Dataguard - creation of standby database in 10g by Deepak mdash Leave a comment December 9 2009

Oracle 10g ndash Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX serversand maintenance tips on the databases in a Data Guard Environment

Oracle 10g Data Guard is a great tool to ensure high availability data protection and disaster recovery for enterprise data I have been working on Data GuardSTANDBY databases using both Grid control and SQL command line for a couple of years and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago I maintain it daily and it works well I would like to share my experience with the other DBAs

In this example the database version is 10203 The PRIMARY database and STANDBY database are located on different machines at different sites The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY I use Flash Recovery Area and OMF

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1 Enable forced logging on your PRIMARY databaseSQLgt ALTER DATABASE FORCE LOGGING

2 Create a password file if it doesnrsquot exist1) To check if a password file already exists run the following command SQLgt select from v$pwfile_users

2) If it doesnrsquot exist use the following command to create one- On Windows $cd ORACLE_HOMEdatabase$orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with the password for the SYS user)

- On UNIX$Cd $ORACLE_HOMEdbs$Orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with your actual password for the SYS user)

3 Configure a STANDBY Redo log1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files To find out the size of your online redo log filesSQLgt select bytes from v$log

BYTESmdashmdashmdash-524288005242880052428800

2) Use the following command to determine your current log file groupsSQLgt select group member from v$logfile

3) Create STANDBY Redo log groupsMy PRIMARY database had 3 log file groups originally and I created 3 STANDBY redo log groups using the following commandsSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M

4) To verify the results of the STANDBY redo log groups creation run the following querySQLgtselect from v$STANDBY_log

4 Enable Archiving on PRIMARY If your PRIMARY database is not already in Archive Log mode enable the archive log modeSQLgtshutdown immediateSQLgtstartup mountSQLgtalter database archivelogSQLgtalter database openSQLgtarchive log list

5 Set PRIMARY Database Initialization ParametersCreate a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters

1) Create pfile from spfile for the PRIMARY database- On WindowsSQLgtcreate pfile=rsquodatabasepfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgtcreate pfile=rsquodbspfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

2) Edit pfilePRIMARYora to add the new PRIMARY and STANDBY role parameters (Here the file paths are from a windows system For UNIX system specify the path accordingly)

db_name=PRIMARYdb_unique_name=PRIMARYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaPRIMARYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=STANDBY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30remote_login_passwordfile=rsquoEXCLUSIVErsquoFAL_SERVER=STANDBYFAL_CLIENT=PRIMARYSTANDBY_FILE_MANAGEMENT=AUTO Specify the location of the STANDBY DB datafiles followed by the PRIMARY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYDATAFILErsquoEoracleproduct1020oradataPRIMARYDATAFILErsquo

Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location LOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoEoracleproduct1020oradataPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquo

6 Create spfile from pfile and restart PRIMARY database using the new spfileData Guard must use SPFILE Create the SPFILE and restart database- On windowsSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodatabasepfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodbspfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodbspfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

III On the STANDBY Database Site

1 Create a copy of PRIMARY database data files on the STANDBY ServerOn PRIMARY DBSQLgtshutdown immediate

On STANDBY Server (While the PRIMARY database is shut down)1) Create directory for data files for example on windows Eoracleproduct1020oradataSTANDBYDATAFILE On UNIX create the directory accordingly

2) Copy the data files and temp files over

3) Create directory (multiplexing) for online logs for example on Windows Eoracleproduct1020oradataSTANDBYONLINELOG and FOracleflash_recovery_areaSTANDBYONLINELOGOn UNIX create the directories accordingly

4) Copy the online logs over

2 Create a Control File for the STANDBY databaseOn PRIMARY DB create a control file for the STANDBY to useSQLgtstartup mountSQLgtalter database create STANDBY controlfile as lsquoSTANDBYctlSQLgtALTER DATABASE OPEN

3 Copy the PRIMARY DB pfile to STANDBY server and renameedit the file

1) Copy pfilePRIMARYora from PRIMARY server to STANDBY server to database folder on Windows or dbs folder on UNIX under the Oracle home path

2) Rename it to pfileSTANDBYora and modify the file as follows (Here the file paths are from a windows system For UNIX system specify the path accordingly)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump, udump directories and the archived log destination for the STANDBY database, for example as sketched below.
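A minimal sketch of the directory creation on Windows, assuming the paths used in the pfile above (adjust them to your own layout; on UNIX use mkdir -p with the corresponding directories):

mkdir E:\oracle\product\10.2.0\admin\STANDBY\adump
mkdir E:\oracle\product\10.2.0\admin\STANDBY\bdump
mkdir E:\oracle\product\10.2.0\admin\STANDBY\cdump
mkdir E:\oracle\product\10.2.0\admin\STANDBY\udump
mkdir F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG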

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY server to the STANDBY control file destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory; then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
(A sample tnsnames.ora sketch follows.)
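For reference, a minimal tnsnames.ora sketch for the two service names; the host names primaryhost and standbyhost and port 1521 are assumptions, not values from this article:

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primaryhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standbyhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = STANDBY))
  )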

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID, for example as sketched below.
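A minimal sketch; the Oracle home locations shown here are assumptions, so use your own installation paths:

On Windows:
C:\> set ORACLE_HOME=E:\oracle\product\10.2.0\db_1
C:\> set ORACLE_SID=STANDBY

On UNIX:
$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STANDBY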

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: replace <ORACLE_HOME> with your Oracle home path.)

12. Start Redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment; a small sketch follows.
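For example, a hedged sketch of watching the STANDBY alert log; the file name alert_STANDBY.log follows the usual alert_<SID>.log convention and the directory is the background_dump_dest assumed in the pfile above:

On Windows, open E:\oracle\product\10.2.0\admin\STANDBY\bdump\alert_STANDBY.log in a viewer.
On UNIX:
$ tail -f $ORACLE_BASE/admin/STANDBY/bdump/alert_STANDBY.log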

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management.
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database; a sketch is shown below.
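For reference, a minimal sketch of recreating the STANDBY password file after a SYS password change, using the same orapwd syntax as in section II.2 (xxxxxxxx stands for the new SYS password):

On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdSTANDBY.ora password=xxxxxxxx force=y

On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdSTANDBY.ora password=xxxxxxxx force=y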

8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job at pinpointing the problem via the information in the ORA-39929 error.

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.
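As a reminder, a minimal sketch of opening the wallet on the target database before running such an import (the wallet password shown is a placeholder):

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";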

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir dumpfile=dp.dmp TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Restriction: Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus, the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP".SALARY encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48

By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; this is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear-text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file and in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY

DATAPUMP

Filed under: DATAPUMP, Oracle 10g - by Deepak, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import.

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but they are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import:
> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:
> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES, and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file. (A short EXCLUDE sketch follows.)
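For illustration, a hedged sketch of the EXCLUDE side; the excluded object types and the dump file name are examples, not values from this article:

> expdp username/password FULL=y EXCLUDE=GRANT EXCLUDE=STATISTICS DIRECTORY=dpump_dir1 DUMPFILE=dba_meta.dmp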

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:
> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export. Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa); a hedged sketch of doing this follows.
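A minimal sketch of changing the degree of parallelism from interactive mode, assuming the job name hr used in the export example below (press Ctrl-C in the running client, or attach with ATTACH=hr, to reach the Export> prompt):

Export> PARALLEL=8
Export> CONTINUE_CLIENT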

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory, and I/O.
• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.
• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.
• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:
> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a tablespace different from the one it was originally exported from. The databases have to be 10.1 or later.

Example:
> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:
The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:
> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (a short sketch follows the list):

• See the status of the job. All of the information needed to monitor the job's execution is available.
• Add more dump files if there is insufficient disk space for an export file.
• Change the default size of the dump files.
• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).
• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.
• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).
• Attach to a job from a remote site (such as from home) to monitor status.
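A hedged sketch of a typical interactive session against the job named hr from the earlier example; the exact prompt text can differ by version:

> expdp username/password ATTACH=hr
Export> STATUS
Export> STOP_JOB=IMMEDIATE

> expdp username/password ATTACH=hr
Export> START_JOB
Export> CONTINUE_CLIENT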

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, such as from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database. (A hedged network-mode example follows.)
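For illustration, a hedged sketch of a network-mode import over a database link; the link name source_db and the schema scott are placeholders, not values from this article:

> impdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=source_db SCHEMAS=scott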

Generating SQLFILEs

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.

Clone Database using RMAN

Filed under: Clone database using RMAN - by Deepak, December 10, 2009

Clone database using RMAN

Target db: test
Clone db: clone

In the target database:

1. Take a full backup using RMAN:

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=17 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=1 stamp=674240935
input archive log thread=1 sequence=15 recid=2 stamp=674240997
input archive log thread=1 sequence=16 recid=3 stamp=674242208
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=17 recid=4 stamp=674242270
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE
Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In the clone database:

1. Create the service and password file, put entries in the tnsnames.ora and listener.ora files, and create all the folders needed for a database (see the sketch below).
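A minimal sketch of that preparation on Windows; the SYS password and the listener/tnsnames changes are placeholders to adapt, not commands from the original post:

C:\> oradim -NEW -SID clone -STARTMODE manual
C:\> cd %ORACLE_HOME%\database
C:\> orapwd file=pwdclone.ora password=xxxxxxxx force=y
(then add a tnsnames.ora entry and a static listener.ora SID_DESC for clone, and reload the listener)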

2. Edit the pfile and add the following parameters (a concrete sketch follows):

db_file_name_convert='<target db oradata path>','<clone db oradata path>'
log_file_name_convert='<target db oradata path>','<clone db oradata path>'
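For example, with the target datafiles under C:\oracle\oradata\test as in the backup log above, and assuming the clone files go to C:\oracle\oradata\clone (an assumed path), the two entries in initclone.ora could look like:

db_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')
log_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')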

3. Start the listener using the lsnrctl command, and then start the clone DB in nomount using the pfile:

SQL> conn / as sysdba
Connected to an idle instance.
SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test   (the TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'   (clone is the CLONE DB name)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running...

SQL> select name from v$database;
select name from v$database
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; exit from RMAN and open the database using resetlogs:

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace (add a tempfile), for example as sketched below.
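A minimal sketch, assuming the clone's TEMP tablespace exists in the control file but has no tempfile yet; the file name and size are placeholders:

SQL> alter tablespace temp add tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 500M reuse;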

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550



ENCRYPT attribute has not been assigned to the SALARY column The output and resulting error messages would look as follows

$ impdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp systempassword DIRECTORY=dpump_dir dumpfile=dpdmp

TRANSPORT_TABLESPACES=dp

Export Release 111070 ndash Production on Thursday 09 July 2009

90900

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 11g Enterprise Edition Release

111070 ndash Production

With the Partitioning Data Mining and Real Application Testing

Options Starting ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime system

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123 Data Pump transportable tablespace job aborted

ORA-39187 The transportable set is not self-contained violation list

is ORA-39929 Table DPEMP in tablespace DP has encrypted columns which

are not supported

Job ldquoSYSTEMrdquordquoSYS_EXPORT_TRANSPORTABLE_01Prime stopped due to fatal error

at 090921

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it

into the connected database instance There are no export dump files involved in a network

mode import and therefore there is no re-encrypting of TDE column data Thus the use of the

ENCRYPTION_PASWORD parameter is prohibited in network mode imports as shown in the

following example

$ impdp dpdp TABLES=dpemp DIRECTORY=dpump_dir NETWORK_LINK=remote

TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import Release 102040 ndash Production on Friday 09 July 2009

110057

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release

102040 ndash Production

With the Partitioning Data Mining and Real Application Testing

options

ORA-39005 inconsistent arguments

ORA-39115 ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp TABLES=emp

ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import Release 102040 ndash Production on Thursday 09 July 2009

105540

Copyright (c) 2003 2007 Oracle All rights reserved

Connected to Oracle Database 10g Enterprise Edition Release 102040 -

Production

With the Partitioning Data Mining and Real Application Testing options

Master table ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime successfully loadedunloaded

Starting ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime dp directory=dpump_dir

dumpfile=empdmp tables=emp encryption_password=

table_exists_action=append

Processing object type TABLE_EXPORTTABLETABLE

ORA-39152 Table ldquoDPrdquordquoEMPrdquo exists Data will be appended to existing

table but all dependent metadata will be skipped due to

table_exists_action of append

Processing object type TABLE_EXPORTTABLETABLE_DATA

ORA-31693 Table data object ldquoDPrdquordquoEMPrdquo failed to loadunload and is being

skipped due to error

ORA-02354 error in exportingimporting data

ORA-26033 column ldquoEMPrdquoSALARY encryption properties differ for source or

target table

Job ldquoDPrdquordquoSYS_IMPORT_TABLE_01Prime completed with 2 error(s) at 105548

Oracle White PaperEncryption with Oracle Data Pump

By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import However it is important to understand that any TDE column data will be transmitted in clear-text format If you are concerned about the security of the information being transmitted then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes

encrypted column data the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed The following are cases in which the encryption password and Oracle Wallet are not needed

A full metadata-only import A schema-mode import in which the referenced schemas do not include tables with

encrypted columns A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp) As is always the case when dealing with TDE columns the Oracle Wallet must first be open before creating the external table The following example creates an external table called DPXEMP and populates it using the data in the DPEMP table Notice that datatypes for the columns are not specified This is because they are determined by the column datatypes in the source table in the SELECT subquery

SQLgt CREATE TABLE DPXEMP (

empid

empname

salary ENCRYPT IDENTIFIED BY ldquocolumn_pwdrdquo)

ORGANIZATION EXTERNAL

(

TYPE ORACLE_DATAPUMP

DEFAULT DIRECTORY dpump_dir

LOCATION (rsquoxempdmprsquo)

)

REJECT LIMIT UNLIMITED

AS SELECT FROM DPEMP

The steps involved in creating an external table with encrypted columns are as follows

1 The SQL engine selects the data for the table DPEMP from the database If any columns in the table are marked as encrypted as the salary column is for DPEMP then TDE decrypts the column data as part of the select operation

2 The SQL engine then inserts the data which is in clear text format into the DPXEMP table If any columns in the external table are marked as encrypted as one of its columns is then TDE encrypts this column data as part of the insert operation

3 Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file The data in an external table can be written only once when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed However the data in the external table can be selected any number of times using a simple SQL SELECT statement The steps involved in selecting data with encrypted columns from an external table are as follows

1 The SQL engine initiates a select operation Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is called to read the data from the external export file

2 The data is passed back to the SQL engine If any columns in the external table are marked as encrypted as one of its columns is then TDE decrypts the data as part of the select operation The use of the encryption password in the IDENTIFIED BY clause is optional unless you plan to move the dump file to another database In that case the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file Encryption Parameter Change in 11g Release 1

As previously discussed in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump and the only encryption-related parameter available was ENCRYPTION_PASSW ORD So by default if the ENCRYPTION_PASSWORD is present on the command line then it applies only to TDE encrypted columns (if there are no such columns being exported then the parameter is ignored)

SQLgt SELECT FROM DPXEMP

Beginning in Oracle Database 11g release 1 the ability to encrypt the entire export dump file set is introduced and with it several new encrypted-related parameters A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump but instead means that the entire export dump file set will be encrypted To encrypt only TDE columns using Oracle Data Pump 11g it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY So the 10g example previously shown becomes the following in 11g

$ expdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

ENCRYPTION=ENCRYPTED_COLUMNS_ONLY

Comment

DATAPUMP

Filed under DATAPUMP Oracle 10g by Deepak mdash Leave a comment December 14 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

httpwwworaclecomtechnologyobeobe10gdbstoragedatapumpdatapumphtm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides regular operating system command-line mode there is now a very powerful interactive Command-line mode which allows the user to monitor and control Data Pump Export and Import operations Changing from Original ExportImport to Oracle Data Pump Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA

In the following example the following SQL statement creates a directory object named

dpump_dir1 that is mapped to a directory located at usrappsdatafiles

Create a directory

1 SQLgt CREATE DIRECTORY dpump_dir1 AS lsquousrappsdatafilesrsquo

After a directory is created you need to grant READ and WRITE permission on the directory to other users For example to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1 you must execute the following command

1 SQLgt GRANT READWRITE ON DIRECTORY dpump_dir1 TO scott

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

1 gtexpdp usernamepassword DIRECTORY=dpump_dir1 dumpfile=scottdmp

Comparison of command-line parameters from Original Export and Import to

Data Pump

Data Pump commands have a similar look and feel to the original Export and Import

commands but are different Below are a few examples that demonstrate some of these

differences

1) Example import of tables from scottrsquos account to jimrsquos account

Original Import

gt imp usernamepassword FILE=scottdmp FROMUSER=scott TOUSER=jim TABLES=()

Data Pump Import

gt impdp usernamepassword DIRECTORY=dpump_dir1 DUMPFILE=scottdmp

TABLES=scottemp REMAP_SCHEMA=scottjim

Note how the FROMUSERTOUSER syntax is replaced by the REMAP_SCHEMA option

2) Example export of an entire database to a dump file with all GRANTS

INDEXES and data

gt exp usernamepassword FULL=y FILE=dbadmp GRANTS=y INDEXES=y ROWS=y

gt expdp usernamepassword FULL=y INCLUDE=GRANT INCLUDE= INDEX

DIRECTORY=dpump_dir1 DUMPFILE=dbadmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file; an illustrative pair of commands follows.
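For example, a schema-level export that keeps only tables and their indexes, and a second run that filters out statistics, might look like the following sketch (the hr schema and file names are assumptions, not taken from the original text):

> expdp username/password SCHEMAS=hr INCLUDE=TABLE,INDEX DIRECTORY=dpump_dir1 DUMPFILE=hr_tabs.dmp

> expdp username/password SCHEMAS=hr EXCLUDE=STATISTICS DIRECTORY=dpump_dir1 DUMPFILE=hr_nostats.dmp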

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode, as sketched below. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
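A minimal sketch of changing the degree of parallelism for a running job, assuming the export was started with JOB_NAME=hr as in the example further below (the job name is an assumption):

> expdp username/password ATTACH=hr

Export> PARALLEL=8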

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
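As a rough illustration, the interactive commands behind several of these actions look like the following (the job name hr and the extra file name are assumptions):

> expdp username/password ATTACH=hr

Export> STATUS
Export> ADD_FILE=hr_extra.dmp
Export> STOP_JOB=IMMEDIATE

(later, from any client that can reach the server)

> expdp username/password ATTACH=hr

Export> START_JOB
Export> CONTINUE_CLIENT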

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, such as from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database; a short sketch follows.
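A minimal network-mode sketch, assuming a database link named remote_db that points at the source instance (the link name and table are assumptions). The first command imports straight from the remote database with no dump file on disk; the second performs a network export, writing the dump file where the job runs while reading from the remote instance:

> impdp username/password NETWORK_LINK=remote_db TABLES=scott.emp

> expdp username/password NETWORK_LINK=remote_db TABLES=scott.emp DIRECTORY=dpump_dir1 DUMPFILE=emp_net.dmp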

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak - Leave a comment

December 10, 2009

Clone database using Rman

Target db: test

Clone db: clone

In target database

1. Take full backup using RMAN.

SQL> archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination c:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F' default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA' default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.
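A rough sketch of this step on Windows for a clone instance named clone (the password, Oracle home path, and file name are assumptions, not values from the post):

C:\> oradim -NEW -SID clone -STARTMODE manual

C:\> orapwd file=C:\oracle\ora92\database\PWDclone.ora password=sys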

2. Edit the pfile and add the following parameters:

db_file_name_convert='target db oradata path','clone db oradata path'

log_file_name_convert='target db oradata path','clone db oradata path'
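For this particular test/clone pair the entries might look like the following (the drive letter and directory layout are assumptions):

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'

log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'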

3. Start up the listener using the lsnrctl command and then start up the clone db in NOMOUNT using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running...

SQL> select name from v$database;

select name from v$database

ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;

alter database mount

ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs.

SQL> alter database open resetlogs;

Database altered

9. Check the DBID.

10. Create a temporary tablespace.

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------

1972233550
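Step 10 above does not show a command; a minimal sketch for giving the clone its own temporary tablespace (the tablespace name, path, and size are assumptions):

SQL> create temporary tablespace temp1 tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100m;

SQL> alter database default temporary tablespace temp1;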


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak - Leave a comment. December 9, 2009

Oracle 10g - Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archivelog mode, enable the archive log mode:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace ''.)

- On UNIX:
SQL> create pfile='/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace ''.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use SPFILE. Create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace ''.)

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path to replace ''.)

III On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for dump and archived log destinations: create the adump, bdump, cdump, and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
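If you prefer to edit the files directly rather than use Net Manager, a tnsnames.ora entry for each database might look like the following sketch (the host names and port are assumptions):

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )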

10. On the STANDBY server, set up the environment variables to point to the STANDBY database: set up ORACLE_HOME and ORACLE_SID.

11. Start up (nomount) the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: specify your Oracle home path to replace ''.)

12. Start redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2) to update/recreate the password file for the STANDBY database.


The ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password= table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP".SALARY encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48

Oracle White PaperEncryption with Oracle Data Pump

By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption; a minimal configuration sketch follows.
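A minimal sqlnet.ora sketch for native network encryption on both the source and target servers (this is one possible Advanced Security setup, not a configuration taken from the paper; the algorithm choice is an assumption):

SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.ENCRYPTION_CLIENT = required
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)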

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed (an illustrative command follows the list):

• A full metadata-only import

• A schema-mode import in which the referenced schemas do not include tables with encrypted columns

• A table-mode import in which the referenced tables do not include encrypted columns
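For instance, a full metadata-only import from a dump file that contains encrypted columns would not require the password or the wallet (a sketch; the dump file name is an assumption):

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=full.dmp FULL=Y CONTENT=METADATA_ONLY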

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified. This is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear-text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file and in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under DATAPUMP Oracle 10g by Deepak mdash Leave a comment December 14 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

httpwwworaclecomtechnologyobeobe10gdbstoragedatapumpdatapumphtm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides regular operating system command-line mode there is now a very powerful interactive Command-line mode which allows the user to monitor and control Data Pump Export and Import operations Changing from Original ExportImport to Oracle Data Pump Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA

In the following example the following SQL statement creates a directory object named

dpump_dir1 that is mapped to a directory located at usrappsdatafiles

Create a directory

1 SQLgt CREATE DIRECTORY dpump_dir1 AS lsquousrappsdatafilesrsquo

After a directory is created you need to grant READ and WRITE permission on the directory to other users For example to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1 you must execute the following command

1 SQLgt GRANT READWRITE ON DIRECTORY dpump_dir1 TO scott

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

1 gtexpdp usernamepassword DIRECTORY=dpump_dir1 dumpfile=scottdmp

Comparison of command-line parameters from Original Export and Import to

Data Pump

Data Pump commands have a similar look and feel to the original Export and Import

commands but are different Below are a few examples that demonstrate some of these

differences

1) Example import of tables from scottrsquos account to jimrsquos account

Original Import

gt imp usernamepassword FILE=scottdmp FROMUSER=scott TOUSER=jim TABLES=()

Data Pump Import

gt impdp usernamepassword DIRECTORY=dpump_dir1 DUMPFILE=scottdmp

TABLES=scottemp REMAP_SCHEMA=scottjim

Note how the FROMUSERTOUSER syntax is replaced by the REMAP_SCHEMA option

2) Example export of an entire database to a dump file with all GRANTS

INDEXES and data

gt exp usernamepassword FULL=y FILE=dbadmp GRANTS=y INDEXES=y ROWS=y

gt expdp usernamepassword FULL=y INCLUDE=GRANT INCLUDE= INDEX

DIRECTORY=dpump_dir1 DUMPFILE=dbadmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job You cannot mix the two parameters in one job

Both parameters work with Data Pump Import as well and you can use different INCLUDE and

EXCLUDE options for different operations on the same dump file

3) Tuning Parameters

Unlike original Export and Import which used the BUFFER COMMIT COMPRESS

CONSISTENT DIRECT and RECORDLENGTH parameters Data Pump needs no tuning to achieve maximum performance Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner Initialization parameters should be sufficient upon installation

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import With original Export you had to run an older version of Export to produce a dump file that was compatible with an older database versionWith Data Pump you use the current Export version and simply use the VERSION parameter to specify the target database version You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g)

Example

gt expdp usernamepassword TABLES=hremployees VERSION=101

DIRECTORY=dpump_dir1 DUMPFILE=empdmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job which is much more efficient that the client-side execution of original Export and Import Now Data Pump operations can take advantage of the serverrsquos parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database)

The number of parallel processes can be changed on the fly using Data Pumprsquos interactive command-line mode You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa)

For best performance you should do the following

bull Make sure your system is well balanced across CPU memory and IO

bull Have at least one dump file for each degree of parallelism If there arenrsquot enough dump Files performance will not be optimal because multiple threads of execution will be trying to access the same dump file

bull Put files that are members of a dump file set on separate disks so that they will be written and read in parallel

bull For export operations use the U variable in the DUMPFILE parameter so multiple dump files can be automatically generated

Example

gt expdp usernamepassword DIRECTORY=dpump_dir1 JOB_NAME=hr

DUMPFILE=par_expudmp PARALLEL=4

REMAP

bull REMAP_TABLESPACE ndash This allows you to easily import a table into a different

tablespace from which it was originally exported The databases have to be 101 or later

Example

gt impdp usernamepassword REMAP_TABLESPACE=tbs_1tbs_6

DIRECTORY=dpumpdir1 DUMPFILE=employeesdmp

bull REMAP_DATAFILES ndash This is a very useful feature when you move databases between platforms that have different file naming conventions This parameter changes the source datafile name to the target datafile name in all SQL statements where the source

datafile is referenced Because the REMAP_DATAFILE value uses quotation marks itrsquos best to specify the parameter within a parameter file

Example

The parameter file payrollpar has the following content

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_fulldmp

REMAP_DATAFILE=rdquorsquoCDB1HRDATAPAYROLLtbs6dbfrsquorsquodb1hrdatapayrolltbs6dbf

You can then issue the following command

gt impdp usernamepassword PARFILE=payrollpar

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable A couple of prominent features are described hereInteractive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode Because Data Pump jobs run entirely on the server you can start an export or import job detach from it and later reconnect to the job to monitor its progress Here are some of the things you can do while in this mode

See the status of the job All of the information needed to monitor the jobrsquos execution is available

Add more dump files if there is insufficient disk space for an export file Change the default size of the dump files Stop the job (perhaps it is consuming too many resources) and later restart it (when more

resources become available) Restart the job If a job was stopped for any reason (system failure power outage) you

can attach to the job and then restart it

Increase or decrease the number of active worker processes for the job (Enterprise Edition only)

Attach to a job from a remote site (such as from home) to monitor status

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk This is very useful if yoursquore moving data between databases like data marts to data warehouses and disk space is not readily available Note that if you are moving large volumes of data Network mode is probably going to be slower than file mode Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance) This is useful when there is a need to export data from a standby database

Generating SQLFILES

In original Import the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script With Data Pump itrsquos a lot easier to get a workable DDL script When you run Data Pump Import and specify the SQLFILE parameter a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types not just tables and indexes Although this output file is ready for execution the DDL statements are not actually executed so the target system will not be changed

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output For example if you want to create a database that contains all the tables and indexes of the source database but that does not include the same constraints grantsand other metadata you would issue a command as follows

gtimpdp usernamepassword DIRECTORY=dpumpdir1 DUMPFILE=expfulldmp

SQLFILE=dpump_dir2expfullsql INCLUDE=TABLEINDEX

The SQL file named expfullsql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired

Comment

Clone Database using RMAN

Filed under Clone database using RMAN by Deepak mdash Leave a comment

December 10 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1Take full backup using Rman

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination coracleora92RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQLgt ho rman

Recovery Manager Release 92010 ndash Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1create servicepassword fileand put entries in tnsnamesora and lsnrctlora files Create all the folders neeeded for a database

2edit the pfile and add following commands

Db_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

Log_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

3startup the listner using lsnrctl cmd and then startup the clone db in nomount using pfile

SQLgt conn as sysdba

Connected to an idle instance

SQLgt startup pfile=rsquoCoracleadminclonepfileinitcloneorarsquo nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4connect rman

5rmangtconnect target syssystest(TARGET DB)

6 rmangtconnect auxiliary syssys

7 rmangtduplicate target database to lsquoclonersquo(CLONE DBNAME)

SQLgt ho rman

RMANgt connect target syssystest

connected to target database TEST (DBID=1972233550)

RMANgt connect auxiliary syssys

connected to auxiliary database CLONE (not mounted)

RMANgt duplicate target database to lsquoclonersquo

Scripts will be runninghellip

SQLgt select name from v$database

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQLgt ho rman

SQLgt alter database mount

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8it will run for a while and exit from rman and open the database using reset logs

SQLgt alter database open resetlogs

Database altered

9 check for dbid

10create temporary tablespace

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

Comment

step by step standby database configuration in 10g

Filed under Dataguard - creation of standby database in 10g by Deepak mdash Leave a comment December 9 2009

Oracle 10g ndash Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX serversand maintenance tips on the databases in a Data Guard Environment

Oracle 10g Data Guard is a great tool to ensure high availability data protection and disaster recovery for enterprise data I have been working on Data GuardSTANDBY databases using both Grid control and SQL command line for a couple of years and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago I maintain it daily and it works well I would like to share my experience with the other DBAs

In this example the database version is 10203 The PRIMARY database and STANDBY database are located on different machines at different sites The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY I use Flash Recovery Area and OMF

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1 Enable forced logging on your PRIMARY databaseSQLgt ALTER DATABASE FORCE LOGGING

2 Create a password file if it doesnrsquot exist1) To check if a password file already exists run the following command SQLgt select from v$pwfile_users

2) If it doesnrsquot exist use the following command to create one- On Windows $cd ORACLE_HOMEdatabase$orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with the password for the SYS user)

- On UNIX$Cd $ORACLE_HOMEdbs$Orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with your actual password for the SYS user)

3 Configure a STANDBY Redo log1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files To find out the size of your online redo log filesSQLgt select bytes from v$log

BYTESmdashmdashmdash-524288005242880052428800

2) Use the following command to determine your current log file groupsSQLgt select group member from v$logfile

3) Create STANDBY Redo log groupsMy PRIMARY database had 3 log file groups originally and I created 3 STANDBY redo log groups using the following commandsSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M

4) To verify the results of the STANDBY redo log groups creation run the following querySQLgtselect from v$STANDBY_log

4 Enable Archiving on PRIMARY If your PRIMARY database is not already in Archive Log mode enable the archive log modeSQLgtshutdown immediateSQLgtstartup mountSQLgtalter database archivelogSQLgtalter database openSQLgtarchive log list

5 Set PRIMARY Database Initialization ParametersCreate a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters

1) Create pfile from spfile for the PRIMARY database- On WindowsSQLgtcreate pfile=rsquodatabasepfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgtcreate pfile=rsquodbspfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

2) Edit pfilePRIMARYora to add the new PRIMARY and STANDBY role parameters (Here the file paths are from a windows system For UNIX system specify the path accordingly)

db_name=PRIMARYdb_unique_name=PRIMARYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaPRIMARYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=STANDBY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30remote_login_passwordfile=rsquoEXCLUSIVErsquoFAL_SERVER=STANDBYFAL_CLIENT=PRIMARYSTANDBY_FILE_MANAGEMENT=AUTO Specify the location of the STANDBY DB datafiles followed by the PRIMARY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYDATAFILErsquoEoracleproduct1020oradataPRIMARYDATAFILErsquo

Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location LOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoEoracleproduct1020oradataPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquo

6 Create spfile from pfile and restart PRIMARY database using the new spfileData Guard must use SPFILE Create the SPFILE and restart database- On windowsSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodatabasepfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodbspfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodbspfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

III On the STANDBY Database Site

1 Create a copy of PRIMARY database data files on the STANDBY ServerOn PRIMARY DBSQLgtshutdown immediate

On STANDBY Server (While the PRIMARY database is shut down)1) Create directory for data files for example on windows Eoracleproduct1020oradataSTANDBYDATAFILE On UNIX create the directory accordingly

2) Copy the data files and temp files over

3) Create directory (multiplexing) for online logs for example on Windows Eoracleproduct1020oradataSTANDBYONLINELOG and FOracleflash_recovery_areaSTANDBYONLINELOGOn UNIX create the directories accordingly

4) Copy the online logs over

2 Create a Control File for the STANDBY databaseOn PRIMARY DB create a control file for the STANDBY to useSQLgtstartup mountSQLgtalter database create STANDBY controlfile as lsquoSTANDBYctlSQLgtALTER DATABASE OPEN

3 Copy the PRIMARY DB pfile to STANDBY server and renameedit the file

1) Copy pfilePRIMARYora from PRIMARY server to STANDBY server to database folder on Windows or dbs folder on UNIX under the Oracle home path

2) Rename it to pfileSTANDBYora and modify the file as follows (Here the file paths are from a windows system For UNIX system specify the path accordingly)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4. On the STANDBY server, create all required directories for the dump and archived log destinations. Create the adump, bdump, cdump and udump directories and the archived log destinations for the STANDBY database.
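A sketch only (the directories mirror the paths used in pfileSTANDBY.ora above), on Windows:
md E:\oracle\product\10.2.0\admin\STANDBY\adump
md E:\oracle\product\10.2.0\admin\STANDBY\bdump
md E:\oracle\product\10.2.0\admin\STANDBY\cdump
md E:\oracle\product\10.2.0\admin\STANDBY\udump
md F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG
On UNIX, use the equivalent mkdir -p commands.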

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY control file destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory; then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY, then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY, then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY, then check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY, then check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
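As an illustration only (the host names and port are assumptions; adjust them to your environment), the tnsnames.ora on each server would contain entries along these lines:
PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY)))
STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY)))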

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID
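For example (the Oracle home paths are assumptions; use your own), on Windows:
set ORACLE_HOME=E:\oracle\product\10.2.0\db_1
set ORACLE_SID=STANDBY
and on UNIX:
$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STANDBY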

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: replace <ORACLE_HOME> with your Oracle home path.)

12. Start Redo Apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On the STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On the PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On the STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on the PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2 to update/recreate the password file for the STANDBY database.
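A minimal sketch of that update (the password value is a placeholder): recreate the STANDBY password file with orapwd using the new SYS password, for example on Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdSTANDBY.ora password=xxxxxxxx force=y
On UNIX, do the same in $ORACLE_HOME/dbs.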


dumpfile=emp.dmp tables=emp encryption_password=
table_exists_action=append

Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48


By removing the ENCRYPTION_PASSWORD parameter, you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed (an illustration follows the list):

• A full metadata-only import
• A schema-mode import in which the referenced schemas do not include tables with encrypted columns
• A table-mode import in which the referenced tables do not include encrypted columns
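For instance, a full metadata-only import from such a dump file can run without the encryption password or an open wallet (a sketch; the user, directory and dump file names follow the earlier examples):
> impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp FULL=y CONTENT=METADATA_ONLY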

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement. The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So by default, if ENCRYPTION_PASSWORD is present on the command line, it applies only to TDE encrypted columns (if there are no such columns being exported, the parameter is ignored).

SQL> SELECT * FROM DP.XEMP;

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump; instead it means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY


DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak - Leave a comment, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

In the following example, the SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

1. > expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example: import of tables from scott's account to jim's account

Original Import

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example: export of an entire database to a dump file with all GRANTS, INDEXES and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file, as sketched below.
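For example (a sketch; the schema and file names are illustrative), the same dump file can be written with one filter and read back with another:
> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr EXCLUDE=STATISTICS
> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp INCLUDE=TABLE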

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can also take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance you should do the following

• Make sure your system is well balanced across CPU, memory and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE - This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE - This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (a brief sketch of the commands follows the list):

• See the status of the job. All of the information needed to monitor the job's execution is available.
• Add more dump files if there is insufficient disk space for an export file.
• Change the default size of the dump files.
• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).
• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.
• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).
• Attach to a job from a remote site (such as from home) to monitor status.
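A brief sketch of that workflow (the job name hr comes from the earlier PARALLEL example; the added file name is an assumption; Export> is Data Pump's interactive prompt):
> expdp username/password ATTACH=hr
Export> STATUS
Export> ADD_FILE=par_exp_extra%u.dmp
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE
> expdp username/password ATTACH=hr
Export> START_JOB
Export> CONTINUE_CLIENT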

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance). This is useful when there is a need to export data from a standby database.
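As a sketch (the database link name source_db_link is an assumption), a network-mode import pulls the rows straight over a database link, with no dump file written:
> impdp username/password TABLES=hr.employees DIRECTORY=dpump_dir1 NETWORK_LINK=source_db_link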

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring the SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak - Leave a comment, December 10, 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1. Take a full backup using RMAN.

SQL> archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all;

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In clone database

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters (see the example after these lines):

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
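For instance (paths are assumptions based on this example: the target TEST datafiles under C:\ORACLE\ORADATA\TEST as in the backup listing above, and the clone files under an assumed C:\oracle\oradata\clone), the entries could look like:
db_file_name_convert='C:\ORACLE\ORADATA\TEST','C:\oracle\oradata\clone'
log_file_name_convert='C:\ORACLE\ORADATA\TEST','C:\oracle\oradata\clone'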

3. Start the listener using the lsnrctl command, then start up the clone DB in nomount mode using the pfile.

SQL> conn / as sysdba

Connected to an idle instance

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone'; (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running...

SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs.

SQL> alter database open resetlogs;

Database altered

9. Check the DBID.

10. Create a temporary tablespace, for example as sketched below.
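A minimal sketch (the tempfile location and size are assumptions):
SQL> create temporary tablespace temp1 tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100m;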

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak - Leave a comment, December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example, the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2. Install the Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one.
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on the PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list



Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp) As is always the case when dealing with TDE columns the Oracle Wallet must first be open before creating the external table The following example creates an external table called DPXEMP and populates it using the data in the DPEMP table Notice that datatypes for the columns are not specified This is because they are determined by the column datatypes in the source table in the SELECT subquery

SQLgt CREATE TABLE DPXEMP (

empid

empname

salary ENCRYPT IDENTIFIED BY ldquocolumn_pwdrdquo)

ORGANIZATION EXTERNAL

(

TYPE ORACLE_DATAPUMP

DEFAULT DIRECTORY dpump_dir

LOCATION (rsquoxempdmprsquo)

)

REJECT LIMIT UNLIMITED

AS SELECT FROM DPEMP

The steps involved in creating an external table with encrypted columns are as follows

1 The SQL engine selects the data for the table DPEMP from the database If any columns in the table are marked as encrypted as the salary column is for DPEMP then TDE decrypts the column data as part of the select operation

2 The SQL engine then inserts the data which is in clear text format into the DPXEMP table If any columns in the external table are marked as encrypted as one of its columns is then TDE encrypts this column data as part of the insert operation

3 Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file The data in an external table can be written only once when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed However the data in the external table can be selected any number of times using a simple SQL SELECT statement The steps involved in selecting data with encrypted columns from an external table are as follows

1 The SQL engine initiates a select operation Because DPXEMP is an external table the ORACLE_DATAPUMP access driver is called to read the data from the external export file

2 The data is passed back to the SQL engine If any columns in the external table are marked as encrypted as one of its columns is then TDE decrypts the data as part of the select operation The use of the encryption password in the IDENTIFIED BY clause is optional unless you plan to move the dump file to another database In that case the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file Encryption Parameter Change in 11g Release 1

As previously discussed in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump and the only encryption-related parameter available was ENCRYPTION_PASSW ORD So by default if the ENCRYPTION_PASSWORD is present on the command line then it applies only to TDE encrypted columns (if there are no such columns being exported then the parameter is ignored)

SQLgt SELECT FROM DPXEMP

Beginning in Oracle Database 11g release 1 the ability to encrypt the entire export dump file set is introduced and with it several new encrypted-related parameters A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump but instead means that the entire export dump file set will be encrypted To encrypt only TDE columns using Oracle Data Pump 11g it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY So the 10g example previously shown becomes the following in 11g

$ expdp dpdp DIRECTORY=dpump_dir DUMPFILE=empdmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

ENCRYPTION=ENCRYPTED_COLUMNS_ONLY

Comment

DATAPUMP

Filed under DATAPUMP Oracle 10g by Deepak mdash Leave a comment December 14 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

httpwwworaclecomtechnologyobeobe10gdbstoragedatapumpdatapumphtm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides regular operating system command-line mode there is now a very powerful interactive Command-line mode which allows the user to monitor and control Data Pump Export and Import operations Changing from Original ExportImport to Oracle Data Pump Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA

In the following example the following SQL statement creates a directory object named

dpump_dir1 that is mapped to a directory located at usrappsdatafiles

Create a directory

1 SQLgt CREATE DIRECTORY dpump_dir1 AS lsquousrappsdatafilesrsquo

After a directory is created you need to grant READ and WRITE permission on the directory to other users For example to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1 you must execute the following command

1 SQLgt GRANT READWRITE ON DIRECTORY dpump_dir1 TO scott

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

1 gtexpdp usernamepassword DIRECTORY=dpump_dir1 dumpfile=scottdmp

Comparison of command-line parameters from Original Export and Import to

Data Pump

Data Pump commands have a similar look and feel to the original Export and Import

commands but are different Below are a few examples that demonstrate some of these

differences

1) Example import of tables from scottrsquos account to jimrsquos account

Original Import

gt imp usernamepassword FILE=scottdmp FROMUSER=scott TOUSER=jim TABLES=()

Data Pump Import

gt impdp usernamepassword DIRECTORY=dpump_dir1 DUMPFILE=scottdmp

TABLES=scottemp REMAP_SCHEMA=scottjim

Note how the FROMUSERTOUSER syntax is replaced by the REMAP_SCHEMA option

2) Example export of an entire database to a dump file with all GRANTS

INDEXES and data

gt exp usernamepassword FULL=y FILE=dbadmp GRANTS=y INDEXES=y ROWS=y

gt expdp usernamepassword FULL=y INCLUDE=GRANT INCLUDE= INDEX

DIRECTORY=dpump_dir1 DUMPFILE=dbadmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job You cannot mix the two parameters in one job

Both parameters work with Data Pump Import as well and you can use different INCLUDE and

EXCLUDE options for different operations on the same dump file

3) Tuning Parameters

Unlike original Export and Import which used the BUFFER COMMIT COMPRESS

CONSISTENT DIRECT and RECORDLENGTH parameters Data Pump needs no tuning to achieve maximum performance Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner Initialization parameters should be sufficient upon installation

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import With original Export you had to run an older version of Export to produce a dump file that was compatible with an older database versionWith Data Pump you use the current Export version and simply use the VERSION parameter to specify the target database version You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g)

Example

gt expdp usernamepassword TABLES=hremployees VERSION=101

DIRECTORY=dpump_dir1 DUMPFILE=empdmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job which is much more efficient that the client-side execution of original Export and Import Now Data Pump operations can take advantage of the serverrsquos parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database)

The number of parallel processes can be changed on the fly using Data Pumprsquos interactive command-line mode You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa)

For best performance you should do the following

bull Make sure your system is well balanced across CPU memory and IO

bull Have at least one dump file for each degree of parallelism If there arenrsquot enough dump Files performance will not be optimal because multiple threads of execution will be trying to access the same dump file

bull Put files that are members of a dump file set on separate disks so that they will be written and read in parallel

bull For export operations use the U variable in the DUMPFILE parameter so multiple dump files can be automatically generated

Example

gt expdp usernamepassword DIRECTORY=dpump_dir1 JOB_NAME=hr

DUMPFILE=par_expudmp PARALLEL=4

REMAP

bull REMAP_TABLESPACE ndash This allows you to easily import a table into a different

tablespace from which it was originally exported The databases have to be 101 or later

Example

gt impdp usernamepassword REMAP_TABLESPACE=tbs_1tbs_6

DIRECTORY=dpumpdir1 DUMPFILE=employeesdmp

bull REMAP_DATAFILES ndash This is a very useful feature when you move databases between platforms that have different file naming conventions This parameter changes the source datafile name to the target datafile name in all SQL statements where the source

datafile is referenced Because the REMAP_DATAFILE value uses quotation marks itrsquos best to specify the parameter within a parameter file

Example

The parameter file payrollpar has the following content

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_fulldmp

REMAP_DATAFILE=rdquorsquoCDB1HRDATAPAYROLLtbs6dbfrsquorsquodb1hrdatapayrolltbs6dbf

You can then issue the following command

gt impdp usernamepassword PARFILE=payrollpar

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable A couple of prominent features are described hereInteractive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode Because Data Pump jobs run entirely on the server you can start an export or import job detach from it and later reconnect to the job to monitor its progress Here are some of the things you can do while in this mode

See the status of the job All of the information needed to monitor the jobrsquos execution is available

Add more dump files if there is insufficient disk space for an export file Change the default size of the dump files Stop the job (perhaps it is consuming too many resources) and later restart it (when more

resources become available) Restart the job If a job was stopped for any reason (system failure power outage) you

can attach to the job and then restart it

Increase or decrease the number of active worker processes for the job (Enterprise Edition only)

Attach to a job from a remote site (such as from home) to monitor status

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk This is very useful if yoursquore moving data between databases like data marts to data warehouses and disk space is not readily available Note that if you are moving large volumes of data Network mode is probably going to be slower than file mode Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance) This is useful when there is a need to export data from a standby database

Generating SQLFILES

In original Import the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script With Data Pump itrsquos a lot easier to get a workable DDL script When you run Data Pump Import and specify the SQLFILE parameter a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types not just tables and indexes Although this output file is ready for execution the DDL statements are not actually executed so the target system will not be changed

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output For example if you want to create a database that contains all the tables and indexes of the source database but that does not include the same constraints grantsand other metadata you would issue a command as follows

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak — Leave a comment

December 10, 2009

Clone database using RMAN

Target db: test

Clone db: clone

In the target database:

1. Take a full backup using RMAN:

SQL> archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination c:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 – Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are:

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default

CONFIGURE BACKUP OPTIMIZATION OFF; # default

CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default

CONFIGURE CONTROLFILE AUTOBACKUP ON;

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default

CONFIGURE MAXSETSIZE TO UNLIMITED; # default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF

input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF

input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF

input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF

input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF

input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF

input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF

input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF

input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF

input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete

SQL> select name from v$database;

NAME

---------

TEST

SQL> select dbid from v$database;

DBID

----------

1972233550

In clone database

1. Create the service and the password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.
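For example, on Windows this might be done as follows (the SID, password, and Oracle home path are assumptions; adjust to your environment):

C:\> oradim -NEW -SID clone -STARTMODE manual
C:\> orapwd file=C:\oracle\ora92\database\PWDclone.ora password=sys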

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
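For the TEST-to-CLONE scenario in this post, these two parameters might look like the following sketch (the exact oradata paths are assumptions):

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'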

3. Start the listener using the lsnrctl command, and then start up the clone db in NOMOUNT using the pfile:

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running…

SQL> select name from v$database;

select name from v$database

ERROR at line 1:

ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;

alter database mount

ERROR at line 1:

ORA-01100: database already mounted

8. It will run for a while. Then exit from RMAN and open the database using resetlogs:

SQL> alter database open resetlogs;

Database altered

9. Check the DBID.

10. Create a temporary tablespace (see the sketch after the verification queries below).

SQL> select name from v$database;

NAME

---------

CLONE

SQL> select dbid from v$database;

DBID

----------

1972233550
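For step 10, a minimal sketch of sorting out temporary storage on the clone (the tempfile path, size, and tablespace name are assumptions; RMAN duplicate does not recreate tempfiles, so you either add a tempfile to the existing TEMP tablespace or create a new temporary tablespace):

SQL> alter tablespace temp add tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100m;
-- or, to create a new temporary tablespace instead:
SQL> create temporary tablespace temp2 tempfile 'C:\oracle\oradata\clone\temp02.dbf' size 100m;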


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak — Leave a comment December 9, 2009

Oracle 10g – Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection, and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use the Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3. Test the STANDBY database creation on a test environment first before working on the production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set the PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database.
- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;
- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path in place of <ORACLE_HOME>.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile, and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: specify your Oracle home path in place of <ORACLE_HOME>.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create the directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump, and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory. Then rename the password file, for example as shown below.
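On UNIX, the copy and rename might be done roughly as follows (the host name standby_host is an assumption):

$ scp $ORACLE_HOME/dbs/pwdPRIMARY.ora oracle@standby_host:$ORACLE_HOME/dbs/
# then, on the STANDBY server:
$ mv $ORACLE_HOME/dbs/pwdPRIMARY.ora $ORACLE_HOME/dbs/pwdSTANDBY.ora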

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
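The resulting tnsnames.ora entries on both servers would look roughly like the following sketch (the host names and port are assumptions; Net Manager generates the actual values):

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )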

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up (nomount) the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: specify your Oracle home path in place of <ORACLE_HOME>.)

12. Start Redo Apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management: the password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


2. The SQL engine then inserts the data, which is in clear text format, into the DPXEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts that column data as part of the insert operation.

3. Because DPXEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE … ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DPXEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DPXEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, it applies only to TDE encrypted columns (if there are no such columns being exported, the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
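Conversely, to encrypt the entire dump file set rather than only the TDE columns, the 11g ENCRYPTION parameter can be set to ALL. A hedged sketch (ENCRYPTION_ALGORITHM and ENCRYPTION_MODE are optional 11g parameters; the values shown are illustrative, not from the original example):

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp_enc.dmp
TABLES=emp ENCRYPTION=ALL ENCRYPTION_PASSWORD=dump_pwd
ENCRYPTION_ALGORITHM=AES128 ENCRYPTION_MODE=PASSWORD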


Comment

DATAPUMP

Filed under DATAPUMP Oracle 10g by Deepak mdash Leave a comment December 14 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE

httpwwworaclecomtechnologyobeobe10gdbstoragedatapumpdatapumphtm

There are two new concepts in Oracle Data Pump that are different from original Export and Import

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes These server processes access files for the Data Pump jobs using directory objects that identify the location of the files The directory objects enforce a security model that can be used by DBAs to control access to these files

Interactive Command-Line Mode

Besides regular operating system command-line mode there is now a very powerful interactive Command-line mode which allows the user to monitor and control Data Pump Export and Import operations Changing from Original ExportImport to Oracle Data Pump Creating Directory Objects

In order to use Data Pump the database administrator must create a directory object and grant privileges to the user on that directory object If a directory object is not specified a default directory object called data_pump_dir is provided The default data_pump_dir is available only to privileged users unless access is granted by the DBA

In the following example the following SQL statement creates a directory object named

dpump_dir1 that is mapped to a directory located at usrappsdatafiles

Create a directory

1 SQLgt CREATE DIRECTORY dpump_dir1 AS lsquousrappsdatafilesrsquo

After a directory is created you need to grant READ and WRITE permission on the directory to other users For example to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1 you must execute the following command

1 SQLgt GRANT READWRITE ON DIRECTORY dpump_dir1 TO scott

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

1 gtexpdp usernamepassword DIRECTORY=dpump_dir1 dumpfile=scottdmp

Comparison of command-line parameters from Original Export and Import to

Data Pump

Data Pump commands have a similar look and feel to the original Export and Import

commands but are different Below are a few examples that demonstrate some of these

differences

1) Example import of tables from scottrsquos account to jimrsquos account

Original Import

gt imp usernamepassword FILE=scottdmp FROMUSER=scott TOUSER=jim TABLES=()

Data Pump Import

gt impdp usernamepassword DIRECTORY=dpump_dir1 DUMPFILE=scottdmp

TABLES=scottemp REMAP_SCHEMA=scottjim

Note how the FROMUSERTOUSER syntax is replaced by the REMAP_SCHEMA option

2) Example export of an entire database to a dump file with all GRANTS

INDEXES and data

gt exp usernamepassword FULL=y FILE=dbadmp GRANTS=y INDEXES=y ROWS=y

gt expdp usernamepassword FULL=y INCLUDE=GRANT INCLUDE= INDEX

DIRECTORY=dpump_dir1 DUMPFILE=dbadmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job You cannot mix the two parameters in one job

Both parameters work with Data Pump Import as well and you can use different INCLUDE and

EXCLUDE options for different operations on the same dump file

3) Tuning Parameters

Unlike original Export and Import which used the BUFFER COMMIT COMPRESS

CONSISTENT DIRECT and RECORDLENGTH parameters Data Pump needs no tuning to achieve maximum performance Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner Initialization parameters should be sufficient upon installation

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import With original Export you had to run an older version of Export to produce a dump file that was compatible with an older database versionWith Data Pump you use the current Export version and simply use the VERSION parameter to specify the target database version You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g)

Example

gt expdp usernamepassword TABLES=hremployees VERSION=101

DIRECTORY=dpump_dir1 DUMPFILE=empdmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
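
As a minimal sketch (the job name hr and the new degree of 8 are assumptions for illustration), you could attach to a running job and change its degree of parallelism from the interactive prompt:

> expdp username/password ATTACH=hr
Export> PARALLEL=8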

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from the one it was originally exported from. The databases have to be 10.1 or later.

Example

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
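
As an illustrative sketch of such a session (the job name hr is an assumption carried over from the earlier example), you could attach to the job, check its status, stop it, and later restart it:

> expdp username/password ATTACH=hr
Export> STATUS
Export> STOP_JOB=IMMEDIATE
> expdp username/password ATTACH=hr
Export> START_JOB
Export> CONTINUE_CLIENT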

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, such as from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
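
As a hedged sketch (the database link name source_db_link and the schema scott are assumptions), a network-mode import that pulls objects directly from a remote instance without an intermediate dump file might look like:

> impdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=source_db_link SCHEMAS=scott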

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak

December 10, 2009

Clone database using Rman

Target db: test

Clone db: clone

In the target database:

1. Take a full backup using RMAN.

SQL> archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination c:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all;

using target database controlfile instead of recovery catalog

RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete

SQL> select name from v$database;

NAME
---------
TEST

SQL> select dbid from v$database;

DBID
----------
1972233550

In the clone database:

1. Create the service and password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database.

2. Edit the pfile and add the following parameters:

db_file_name_convert='<target db oradata path>','<clone db oradata path>'

log_file_name_convert='<target db oradata path>','<clone db oradata path>'
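
For illustration only, here is a minimal sketch of step 2 with concrete values; the C:\oracle\oradata\test and C:\oracle\oradata\clone paths are assumptions, not taken from this setup:

db_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'
log_file_name_convert='C:\oracle\oradata\test','C:\oracle\oradata\clone'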

3. Start the listener using the lsnrctl command, and then start up the clone db in nomount mode using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running...

SQL> select name from v$database;

select name from v$database

ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;

alter database mount

ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs.

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace.

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550
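
For step 10, a minimal sketch of adding a temporary tablespace to the clone; the tablespace name, tempfile path, and size below are assumptions:

SQL> create temporary tablespace temp01 tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100M autoextend on;
SQL> alter database default temporary tablespace temp01;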


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak
December 9, 2009

Oracle 10g – Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection, and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use Flash Recovery Area and OMF.

I Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3. Test the STANDBY database creation in a test environment first before working on the production database.

II On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd ORACLE_HOME\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database.
- On Windows:
SQL> create pfile='database\pfilePRIMARY.ora' from spfile;
(Note: prefix the path with your Oracle home path.)

- On UNIX:
SQL> create pfile='dbs/pfilePRIMARY.ora' from spfile;
(Note: prefix the path with your Oracle home path.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE, so create the SPFILE and restart the database.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='database\pfilePRIMARY.ora'
SQL> create spfile from pfile='database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: prefix the paths with your Oracle home path.)

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: prefix the paths with your Oracle home path.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system. For a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all of the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations. Create the adump, bdump, cdump, and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory; then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
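
As a rough sketch of what the resulting tnsnames.ora entries might look like (the host names primaryhost and standbyhost and port 1521 are assumptions):

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primaryhost)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standbyhost)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )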

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.

11. Start up (nomount) the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='database\pfileSTANDBY.ora'
SQL> create spfile from pfile='database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: prefix the paths with your Oracle home path.)

12. Start Redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify that the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify that the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II, step 2 to update/recreate the password file for the STANDBY database.

  • Manual Database up gradation from 920 to 1010
  • Duplicate Database With RMAN Without Connecting To Target Database
  • Features introduced in the various Oracle server releases
  • Features introduced in the various server releases
    • Schema Referesh
    • JOB SCHEDULING
    • Steps to switchover standby to primary
    • Encryption with Oracle Data Pump
    • DATAPUMP
    • Clone Database using RMAN
    • step by step standby database configuration in 10g
Page 56: Manual Database Up Gradation From 9

After a directory is created you need to grant READ and WRITE permission on the directory to other users For example to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1 you must execute the following command

1 SQLgt GRANT READWRITE ON DIRECTORY dpump_dir1 TO scott

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges Similarly the Oracle database requires permission from the operating system to read and write files in the directories Once the directory access is granted the user scott can export his database objects with command arguments

1 gtexpdp usernamepassword DIRECTORY=dpump_dir1 dumpfile=scottdmp

Comparison of command-line parameters from Original Export and Import to

Data Pump

Data Pump commands have a similar look and feel to the original Export and Import

commands but are different Below are a few examples that demonstrate some of these

differences

1) Example import of tables from scottrsquos account to jimrsquos account

Original Import

gt imp usernamepassword FILE=scottdmp FROMUSER=scott TOUSER=jim TABLES=()

Data Pump Import

gt impdp usernamepassword DIRECTORY=dpump_dir1 DUMPFILE=scottdmp

TABLES=scottemp REMAP_SCHEMA=scottjim

Note how the FROMUSERTOUSER syntax is replaced by the REMAP_SCHEMA option

2) Example export of an entire database to a dump file with all GRANTS

INDEXES and data

gt exp usernamepassword FULL=y FILE=dbadmp GRANTS=y INDEXES=y ROWS=y

gt expdp usernamepassword FULL=y INCLUDE=GRANT INCLUDE= INDEX

DIRECTORY=dpump_dir1 DUMPFILE=dbadmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job You cannot mix the two parameters in one job

Both parameters work with Data Pump Import as well and you can use different INCLUDE and

EXCLUDE options for different operations on the same dump file

3) Tuning Parameters

Unlike original Export and Import which used the BUFFER COMMIT COMPRESS

CONSISTENT DIRECT and RECORDLENGTH parameters Data Pump needs no tuning to achieve maximum performance Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner Initialization parameters should be sufficient upon installation

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import With original Export you had to run an older version of Export to produce a dump file that was compatible with an older database versionWith Data Pump you use the current Export version and simply use the VERSION parameter to specify the target database version You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g)

Example

gt expdp usernamepassword TABLES=hremployees VERSION=101

DIRECTORY=dpump_dir1 DUMPFILE=empdmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job which is much more efficient that the client-side execution of original Export and Import Now Data Pump operations can take advantage of the serverrsquos parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database)

The number of parallel processes can be changed on the fly using Data Pumprsquos interactive command-line mode You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa)

For best performance you should do the following

bull Make sure your system is well balanced across CPU memory and IO

bull Have at least one dump file for each degree of parallelism If there arenrsquot enough dump Files performance will not be optimal because multiple threads of execution will be trying to access the same dump file

bull Put files that are members of a dump file set on separate disks so that they will be written and read in parallel

bull For export operations use the U variable in the DUMPFILE parameter so multiple dump files can be automatically generated

Example

gt expdp usernamepassword DIRECTORY=dpump_dir1 JOB_NAME=hr

DUMPFILE=par_expudmp PARALLEL=4

REMAP

bull REMAP_TABLESPACE ndash This allows you to easily import a table into a different

tablespace from which it was originally exported The databases have to be 101 or later

Example

gt impdp usernamepassword REMAP_TABLESPACE=tbs_1tbs_6

DIRECTORY=dpumpdir1 DUMPFILE=employeesdmp

bull REMAP_DATAFILES ndash This is a very useful feature when you move databases between platforms that have different file naming conventions This parameter changes the source datafile name to the target datafile name in all SQL statements where the source

datafile is referenced Because the REMAP_DATAFILE value uses quotation marks itrsquos best to specify the parameter within a parameter file

Example

The parameter file payrollpar has the following content

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_fulldmp

REMAP_DATAFILE=rdquorsquoCDB1HRDATAPAYROLLtbs6dbfrsquorsquodb1hrdatapayrolltbs6dbf

You can then issue the following command

gt impdp usernamepassword PARFILE=payrollpar

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable A couple of prominent features are described hereInteractive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode Because Data Pump jobs run entirely on the server you can start an export or import job detach from it and later reconnect to the job to monitor its progress Here are some of the things you can do while in this mode

See the status of the job All of the information needed to monitor the jobrsquos execution is available

Add more dump files if there is insufficient disk space for an export file Change the default size of the dump files Stop the job (perhaps it is consuming too many resources) and later restart it (when more

resources become available) Restart the job If a job was stopped for any reason (system failure power outage) you

can attach to the job and then restart it

Increase or decrease the number of active worker processes for the job (Enterprise Edition only)

Attach to a job from a remote site (such as from home) to monitor status

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk This is very useful if yoursquore moving data between databases like data marts to data warehouses and disk space is not readily available Note that if you are moving large volumes of data Network mode is probably going to be slower than file mode Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance) This is useful when there is a need to export data from a standby database

Generating SQLFILES

In original Import the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script With Data Pump itrsquos a lot easier to get a workable DDL script When you run Data Pump Import and specify the SQLFILE parameter a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types not just tables and indexes Although this output file is ready for execution the DDL statements are not actually executed so the target system will not be changed

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output For example if you want to create a database that contains all the tables and indexes of the source database but that does not include the same constraints grantsand other metadata you would issue a command as follows

gtimpdp usernamepassword DIRECTORY=dpumpdir1 DUMPFILE=expfulldmp

SQLFILE=dpump_dir2expfullsql INCLUDE=TABLEINDEX

The SQL file named expfullsql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired

Comment

Clone Database using RMAN

Filed under Clone database using RMAN by Deepak mdash Leave a comment

December 10 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1Take full backup using Rman

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination coracleora92RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQLgt ho rman

Recovery Manager Release 92010 ndash Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1create servicepassword fileand put entries in tnsnamesora and lsnrctlora files Create all the folders neeeded for a database

2edit the pfile and add following commands

Db_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

Log_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

3startup the listner using lsnrctl cmd and then startup the clone db in nomount using pfile

SQLgt conn as sysdba

Connected to an idle instance

SQLgt startup pfile=rsquoCoracleadminclonepfileinitcloneorarsquo nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4connect rman

5rmangtconnect target syssystest(TARGET DB)

6 rmangtconnect auxiliary syssys

7 rmangtduplicate target database to lsquoclonersquo(CLONE DBNAME)

SQLgt ho rman

RMANgt connect target syssystest

connected to target database TEST (DBID=1972233550)

RMANgt connect auxiliary syssys

connected to auxiliary database CLONE (not mounted)

RMANgt duplicate target database to lsquoclonersquo

Scripts will be runninghellip

SQLgt select name from v$database

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQLgt ho rman

SQLgt alter database mount

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8it will run for a while and exit from rman and open the database using reset logs

SQLgt alter database open resetlogs

Database altered

9 check for dbid

10create temporary tablespace

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

Comment

step by step standby database configuration in 10g

Filed under Dataguard - creation of standby database in 10g by Deepak mdash Leave a comment December 9 2009

Oracle 10g ndash Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX serversand maintenance tips on the databases in a Data Guard Environment

Oracle 10g Data Guard is a great tool to ensure high availability data protection and disaster recovery for enterprise data I have been working on Data GuardSTANDBY databases using both Grid control and SQL command line for a couple of years and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago I maintain it daily and it works well I would like to share my experience with the other DBAs

In this example the database version is 10203 The PRIMARY database and STANDBY database are located on different machines at different sites The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY I use Flash Recovery Area and OMF

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1 Enable forced logging on your PRIMARY databaseSQLgt ALTER DATABASE FORCE LOGGING

2 Create a password file if it doesnrsquot exist1) To check if a password file already exists run the following command SQLgt select from v$pwfile_users

2) If it doesnrsquot exist use the following command to create one- On Windows $cd ORACLE_HOMEdatabase$orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with the password for the SYS user)

- On UNIX$Cd $ORACLE_HOMEdbs$Orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with your actual password for the SYS user)

3 Configure a STANDBY Redo log1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files To find out the size of your online redo log filesSQLgt select bytes from v$log

BYTESmdashmdashmdash-524288005242880052428800

2) Use the following command to determine your current log file groupsSQLgt select group member from v$logfile

3) Create STANDBY Redo log groupsMy PRIMARY database had 3 log file groups originally and I created 3 STANDBY redo log groups using the following commandsSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M

4) To verify the results of the STANDBY redo log groups creation run the following querySQLgtselect from v$STANDBY_log

4 Enable Archiving on PRIMARY If your PRIMARY database is not already in Archive Log mode enable the archive log modeSQLgtshutdown immediateSQLgtstartup mountSQLgtalter database archivelogSQLgtalter database openSQLgtarchive log list

5 Set PRIMARY Database Initialization ParametersCreate a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters

1) Create pfile from spfile for the PRIMARY database- On WindowsSQLgtcreate pfile=rsquodatabasepfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgtcreate pfile=rsquodbspfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

2) Edit pfilePRIMARYora to add the new PRIMARY and STANDBY role parameters (Here the file paths are from a windows system For UNIX system specify the path accordingly)

db_name=PRIMARYdb_unique_name=PRIMARYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaPRIMARYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=STANDBY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30remote_login_passwordfile=rsquoEXCLUSIVErsquoFAL_SERVER=STANDBYFAL_CLIENT=PRIMARYSTANDBY_FILE_MANAGEMENT=AUTO Specify the location of the STANDBY DB datafiles followed by the PRIMARY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYDATAFILErsquoEoracleproduct1020oradataPRIMARYDATAFILErsquo

Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location LOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoEoracleproduct1020oradataPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquo

6 Create spfile from pfile and restart PRIMARY database using the new spfileData Guard must use SPFILE Create the SPFILE and restart database- On windowsSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodatabasepfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodbspfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodbspfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

III On the STANDBY Database Site

1 Create a copy of PRIMARY database data files on the STANDBY ServerOn PRIMARY DBSQLgtshutdown immediate

On STANDBY Server (While the PRIMARY database is shut down)1) Create directory for data files for example on windows Eoracleproduct1020oradataSTANDBYDATAFILE On UNIX create the directory accordingly

2) Copy the data files and temp files over

3) Create directory (multiplexing) for online logs for example on Windows Eoracleproduct1020oradataSTANDBYONLINELOG and FOracleflash_recovery_areaSTANDBYONLINELOGOn UNIX create the directories accordingly

4) Copy the online logs over

2 Create a Control File for the STANDBY databaseOn PRIMARY DB create a control file for the STANDBY to useSQLgtstartup mountSQLgtalter database create STANDBY controlfile as lsquoSTANDBYctlSQLgtALTER DATABASE OPEN

3 Copy the PRIMARY DB pfile to STANDBY server and renameedit the file

1) Copy pfilePRIMARYora from PRIMARY server to STANDBY server to database folder on Windows or dbs folder on UNIX under the Oracle home path

2) Rename it to pfileSTANDBYora and modify the file as follows (Here the file paths are from a windows system For UNIX system specify the path accordingly)

audit_file_dest=rsquoEoracleproduct1020adminSTANDBYadumprsquobackground_dump_dest=rsquoEoracleproduct1020adminSTANDBYbdumprsquocore_dump_dest=rsquoEoracleproduct1020adminSTANDBYcdumprsquouser_dump_dest=rsquoEoracleproduct1020adminSTANDBYudumprsquocompatible=rsquo102030primecontrol_files=rsquoEORACLEPRODUCT1020ORADATASTANDBYCONTROLFILESTANDBYCTLrsquoFORACLEFLASH_RECOVERY_AREASTANDBYCONTROLFILESTANDBYCTLrsquodb_name=rsquoPRIMARYrsquodb_unique_name=STANDBYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaSTANDBYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=PRIMARY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30FAL_SERVER=PRIMARYFAL_CLIENT=STANDBYremote_login_passwordfile=rsquoEXCLUSIVErsquo Specify the location of the PRIMARY DB datafiles followed by the STANDBY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARYDATAFILErsquorsquoEoracleproduct1020oradataSTANDBYDATAFILErsquo Specify the location of the PRIMARY DB online redo log files followed by the STANDBY locationLOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARY

ONLINELOGrsquorsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquoSTANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4 On STANDBY server create all required directories for dump and archived log destinationCreate directories adump bdump cdump udump and archived log destinations for the STANDBY database

5 Copy the STANDBY control file lsquoSTANDBYctlrsquo from PRIMARY to STANDBY destinations

6 Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBYoraOn Windows copy it to database folder and on UNIX copy it to dbs directory And then rename the password file

7 For Windows create a Windows-based service (optional)$oradim ndashNEW ndashSID STANDBY ndashSTARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On PRIMARY system use Oracle Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop$lsnrctl start

2) On STANDBY server use Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop $lsnrctl start

9 Create Oracle Net service names1) On PRIMARY system use Oracle Net Manager to create network service names for PRIMARY and STANDBY Check tnsping to both services$tnsping PRIMARY$tnsping STANDBY

2) On STANDBY system use Oracle Net Manager to create network service names for PRIMARY and STANDBY Check tnsping to both services$tnsping PRIMARY$tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID

11 Start up nomount the STANDBY database and generate a spfile- On Windows SQLgtstartup nomount pfile=rsquodatabasepfileSTANDBYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfileSTANDBYorarsquondash Restart the STANDBY database using the newly created SPFILESQLgtshutdown immediateSQLgtstartup mount

- On UNIX SQLgtstartup nomount pfile=rsquodbspfileSTANDBYorarsquoSQLgtcreate spfile from pfile=rsquodbspfileSTANDBYorarsquondash Restart the STANDBY database using the newly created SPFILESQLgtshutdown immediateSQLgtstartup mount(Note- specify your Oracle home path to replace lsquorsquo)

12 Start Redo apply1) On the STANDBY database to start redo applySQLgtalter database recover managed STANDBY database disconnect from session

If you ever need to stop log apply servicesSQLgt alter database recover managed STANDBY database cancel

13 Verify the STANDBY database is performing properly1) On STANDBY perform a querySQLgtselect sequence first_time next_time from v$archived_log

2) On PRIMARY force a logfile switchSQLgtalter system switch logfile

3) On STANDBY verify the archived redo log files were appliedSQLgtselect sequence applied from v$archived_log order by sequence

14 If you want the redo data to be applied as it is received without waiting for the current STANDBY redo log file to be archived enable the real-time apply

To start real-time applySQLgt alter database recover managed STANDBY database using current logfile disconnect

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled weekly Hot Whole database backup against my PRIMARY database that also backs up and delete the archived logs on PRIMARY

For the STANDBY database I run RMAN to backup and delete the archive logs once per week $rman target STANDBYRMANgtbackup archivelog all delete input

To delete the archivelog backup files on the STANDBY server I run the following once a monthRMANgtdelete backupset

3 Password managementThe password for the SYS user must be identical on every system for the redo data transmission to succeed If you change the password for SYS on PRIMARY database you will have to update the password file for STANDBY database accordingly otherwise the logs wonrsquot be shipped to the STANDBY server

Refer to section II2 step 2 to updaterecreate password file for the STANDBY Sdatabase

  • Manual Database up gradation from 920 to 1010
  • Duplicate Database With RMAN Without Connecting To Target Database
  • Features introduced in the various Oracle server releases
  • Features introduced in the various server releases
    • Schema Referesh
    • JOB SCHEDULING
    • Steps to switchover standby to primary
    • Encryption with Oracle Data Pump
    • DATAPUMP
    • Clone Database using RMAN
    • step by step standby database configuration in 10g
Page 57: Manual Database Up Gradation From 9

gt exp usernamepassword FULL=y FILE=dbadmp GRANTS=y INDEXES=y ROWS=y

gt expdp usernamepassword FULL=y INCLUDE=GRANT INCLUDE= INDEX

DIRECTORY=dpump_dir1 DUMPFILE=dbadmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job You cannot mix the two parameters in one job

Both parameters work with Data Pump Import as well and you can use different INCLUDE and

EXCLUDE options for different operations on the same dump file

3) Tuning Parameters

Unlike original Export and Import which used the BUFFER COMMIT COMPRESS

CONSISTENT DIRECT and RECORDLENGTH parameters Data Pump needs no tuning to achieve maximum performance Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner Initialization parameters should be sufficient upon installation

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import With original Export you had to run an older version of Export to produce a dump file that was compatible with an older database versionWith Data Pump you use the current Export version and simply use the VERSION parameter to specify the target database version You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g)

Example

gt expdp usernamepassword TABLES=hremployees VERSION=101

DIRECTORY=dpump_dir1 DUMPFILE=empdmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export

Note that Data Pump Import cannot read dump files produced by original Export

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job which is much more efficient that the client-side execution of original Export and Import Now Data Pump operations can take advantage of the serverrsquos parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database)

The number of parallel processes can be changed on the fly using Data Pumprsquos interactive command-line mode You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa)

For best performance you should do the following

bull Make sure your system is well balanced across CPU memory and IO

bull Have at least one dump file for each degree of parallelism If there arenrsquot enough dump Files performance will not be optimal because multiple threads of execution will be trying to access the same dump file

bull Put files that are members of a dump file set on separate disks so that they will be written and read in parallel

bull For export operations use the U variable in the DUMPFILE parameter so multiple dump files can be automatically generated

Example

gt expdp usernamepassword DIRECTORY=dpump_dir1 JOB_NAME=hr

DUMPFILE=par_expudmp PARALLEL=4

REMAP

bull REMAP_TABLESPACE ndash This allows you to easily import a table into a different

tablespace from which it was originally exported The databases have to be 101 or later

Example

gt impdp usernamepassword REMAP_TABLESPACE=tbs_1tbs_6

DIRECTORY=dpumpdir1 DUMPFILE=employeesdmp

bull REMAP_DATAFILES ndash This is a very useful feature when you move databases between platforms that have different file naming conventions This parameter changes the source datafile name to the target datafile name in all SQL statements where the source

datafile is referenced Because the REMAP_DATAFILE value uses quotation marks itrsquos best to specify the parameter within a parameter file

Example

The parameter file payroll.par has the following content:

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_full.dmp

REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (a short example session follows the list):

• See the status of the job. All of the information needed to monitor the job's execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job (Enterprise Edition only).

• Attach to a job from a remote site (such as from home) to monitor status.
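A minimal example session, assuming an export job named hr that was started earlier:

> expdp username/password ATTACH=hr

Export> STATUS

Export> ADD_FILE=hr_extra.dmp

Export> STOP_JOB=IMMEDIATE

> expdp username/password ATTACH=hr

Export> START_JOB

Export> EXIT_CLIENT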

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful when you are moving data between databases, such as from data marts to a data warehouse, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export also gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance); this is useful when there is a need to export data from a standby database.
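A sketch of both directions, assuming a database link named source_db already exists on the instance where the job runs (the link and file names are illustrative):

> expdp username/password NETWORK_LINK=source_db TABLES=hr.employees DIRECTORY=dpump_dir1 DUMPFILE=emp_remote.dmp

> impdp username/password NETWORK_LINK=source_db TABLES=hr.employees DIRECTORY=dpump_dir1

The first command writes a dump file locally from the remote instance's data; the second loads the table straight across the link with no dump file at all.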

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file that contained the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script: when you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but not the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.


Clone Database using RMAN

Filed under: Clone database using RMAN by Deepak

December 10, 2009

Clone database using RMAN

Target db: test

Clone db: clone

In the target database:

1. Take a full backup using RMAN

SQL> archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\oracle\ora92\RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMAN> connect target

connected to target database TEST (DBID=1972233550)

RMAN> show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default

CONFIGURE BACKUP OPTIMIZATION OFF; # default

CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default

CONFIGURE CONTROLFILE AUTOBACKUP ON;

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default

CONFIGURE MAXSETSIZE TO UNLIMITED; # default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete

SQL> select name from v$database

NAME

---------

TEST

SQL> select dbid from v$database

DBID

----------

1972233550

In the clone database:

1. Create the service and the password file, and put entries in the tnsnames.ora and listener.ora files. Create all the folders needed for the database (a sketch follows).
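A sketch of step 1 on Windows, with the SID, password, host, and paths assumed purely for illustration:

$ oradim -NEW -SID clone -STARTMODE manual

$ orapwd file=C:\oracle\ora92\database\PWDclone.ora password=sys_password

# tnsnames.ora entry (illustrative)
clone =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = clone))
  )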

2. Edit the pfile and add the following parameters:

db_file_name_convert='target db oradata path','clone db oradata path'

log_file_name_convert='target db oradata path','clone db oradata path'
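For instance, if the target datafiles live under C:\oracle\oradata\test and the clone's will live under C:\oracle\oradata\clone (paths assumed for illustration), the initclone.ora entries would look like:

db_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')

log_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')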

3. Start the listener using the lsnrctl command, and then start the clone DB in nomount using the pfile.

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQL> ho lsnrctl status

SQL> ho lsnrctl stop

SQL> ho lsnrctl start

4. Connect to RMAN.

5. RMAN> connect target sys/sys@test (TARGET DB)

6. RMAN> connect auxiliary sys/sys

7. RMAN> duplicate target database to 'clone' (CLONE DB NAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running...

SQL> select name from v$database

select name from v$database

ERROR at line 1:

ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount

alter database mount

ERROR at line 1:

ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs.

SQL> alter database open resetlogs

Database altered

9. Check the DBID.

10. Create a temporary tablespace (a sample statement follows).
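A minimal statement for step 10, with the tablespace name, tempfile path, and size assumed for illustration:

SQL> create temporary tablespace temp1 tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100M;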

SQL> select name from v$database

NAME

---------

CLONE

SQL> select dbid from v$database

DBID

----------

1972233550


step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak

December 9, 2009

Oracle 10g - Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips for the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection, and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and the STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use the Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3. Test the STANDBY database creation in a test environment first before working on the production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd ORACLE_HOME\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database.
- On Windows:
SQL> create pfile='<Oracle home path>\database\pfilePRIMARY.ora' from spfile;
(Note: replace <Oracle home path> with your actual Oracle home path.)

- On UNIX:
SQL> create pfile='<Oracle home path>/dbs/pfilePRIMARY.ora' from spfile;
(Note: replace <Oracle home path> with your actual Oracle home path.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='<Oracle home path>\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='<Oracle home path>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup
(Note: replace <Oracle home path> with your actual Oracle home path.)

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='<Oracle home path>/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='<Oracle home path>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup
(Note: replace <Oracle home path> with your actual Oracle home path.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over

2. Create a control file for the STANDBY database. On PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump, and udump directories and the archived log destinations for the STANDBY database.
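On Windows this is just a handful of directory creations; the paths below simply mirror the pfile entries shown above:

$ mkdir E:\oracle\product\10.2.0\admin\STANDBY\adump
$ mkdir E:\oracle\product\10.2.0\admin\STANDBY\bdump
$ mkdir E:\oracle\product\10.2.0\admin\STANDBY\cdump
$ mkdir E:\oracle\product\10.2.0\admin\STANDBY\udump
$ mkdir F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG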

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory; then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.

1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
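If you prefer to edit tnsnames.ora by hand instead of using Net Manager, the entries on each server would look roughly like this (the host names are assumptions):

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = STANDBY))
  )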

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID
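For example (the Oracle home path is assumed):

- On Windows:
set ORACLE_HOME=E:\oracle\product\10.2.0\db_1
set ORACLE_SID=STANDBY

- On UNIX:
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export ORACLE_SID=STANDBY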

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<Oracle home path>\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='<Oracle home path>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='<Oracle home path>/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='<Oracle home path>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate
SQL> startup mount
(Note: replace <Oracle home path> with your actual Oracle home path.)

12. Start redo apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify that the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2 to update/recreate the password file for the STANDBY database.

  • Manual Database up gradation from 920 to 1010
  • Duplicate Database With RMAN Without Connecting To Target Database
  • Features introduced in the various Oracle server releases
  • Features introduced in the various server releases
    • Schema Referesh
    • JOB SCHEDULING
    • Steps to switchover standby to primary
    • Encryption with Oracle Data Pump
    • DATAPUMP
    • Clone Database using RMAN
    • step by step standby database configuration in 10g
Page 58: Manual Database Up Gradation From 9

Data Pump works great with default parameters but once you are comfortable with Data

Pump there are new capabilities that you will want to explore

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job which is much more efficient that the client-side execution of original Export and Import Now Data Pump operations can take advantage of the serverrsquos parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database)

The number of parallel processes can be changed on the fly using Data Pumprsquos interactive command-line mode You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa)

For best performance you should do the following

bull Make sure your system is well balanced across CPU memory and IO

bull Have at least one dump file for each degree of parallelism If there arenrsquot enough dump Files performance will not be optimal because multiple threads of execution will be trying to access the same dump file

bull Put files that are members of a dump file set on separate disks so that they will be written and read in parallel

bull For export operations use the U variable in the DUMPFILE parameter so multiple dump files can be automatically generated

Example

gt expdp usernamepassword DIRECTORY=dpump_dir1 JOB_NAME=hr

DUMPFILE=par_expudmp PARALLEL=4

REMAP

bull REMAP_TABLESPACE ndash This allows you to easily import a table into a different

tablespace from which it was originally exported The databases have to be 101 or later

Example

gt impdp usernamepassword REMAP_TABLESPACE=tbs_1tbs_6

DIRECTORY=dpumpdir1 DUMPFILE=employeesdmp

bull REMAP_DATAFILES ndash This is a very useful feature when you move databases between platforms that have different file naming conventions This parameter changes the source datafile name to the target datafile name in all SQL statements where the source

datafile is referenced Because the REMAP_DATAFILE value uses quotation marks itrsquos best to specify the parameter within a parameter file

Example

The parameter file payrollpar has the following content

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_fulldmp

REMAP_DATAFILE=rdquorsquoCDB1HRDATAPAYROLLtbs6dbfrsquorsquodb1hrdatapayrolltbs6dbf

You can then issue the following command

gt impdp usernamepassword PARFILE=payrollpar

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable A couple of prominent features are described hereInteractive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode Because Data Pump jobs run entirely on the server you can start an export or import job detach from it and later reconnect to the job to monitor its progress Here are some of the things you can do while in this mode

See the status of the job All of the information needed to monitor the jobrsquos execution is available

Add more dump files if there is insufficient disk space for an export file Change the default size of the dump files Stop the job (perhaps it is consuming too many resources) and later restart it (when more

resources become available) Restart the job If a job was stopped for any reason (system failure power outage) you

can attach to the job and then restart it

Increase or decrease the number of active worker processes for the job (Enterprise Edition only)

Attach to a job from a remote site (such as from home) to monitor status

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk This is very useful if yoursquore moving data between databases like data marts to data warehouses and disk space is not readily available Note that if you are moving large volumes of data Network mode is probably going to be slower than file mode Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance) This is useful when there is a need to export data from a standby database

Generating SQLFILES

In original Import the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script With Data Pump itrsquos a lot easier to get a workable DDL script When you run Data Pump Import and specify the SQLFILE parameter a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types not just tables and indexes Although this output file is ready for execution the DDL statements are not actually executed so the target system will not be changed

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output For example if you want to create a database that contains all the tables and indexes of the source database but that does not include the same constraints grantsand other metadata you would issue a command as follows

gtimpdp usernamepassword DIRECTORY=dpumpdir1 DUMPFILE=expfulldmp

SQLFILE=dpump_dir2expfullsql INCLUDE=TABLEINDEX

The SQL file named expfullsql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired

Comment

Clone Database using RMAN

Filed under Clone database using RMAN by Deepak mdash Leave a comment

December 10 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1Take full backup using Rman

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination coracleora92RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQLgt ho rman

Recovery Manager Release 92010 ndash Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1create servicepassword fileand put entries in tnsnamesora and lsnrctlora files Create all the folders neeeded for a database

2edit the pfile and add following commands

Db_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

Log_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

3startup the listner using lsnrctl cmd and then startup the clone db in nomount using pfile

SQLgt conn as sysdba

Connected to an idle instance

SQLgt startup pfile=rsquoCoracleadminclonepfileinitcloneorarsquo nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4connect rman

5rmangtconnect target syssystest(TARGET DB)

6 rmangtconnect auxiliary syssys

7 rmangtduplicate target database to lsquoclonersquo(CLONE DBNAME)

SQLgt ho rman

RMANgt connect target syssystest

connected to target database TEST (DBID=1972233550)

RMANgt connect auxiliary syssys

connected to auxiliary database CLONE (not mounted)

RMANgt duplicate target database to lsquoclonersquo

Scripts will be runninghellip

SQLgt select name from v$database

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQLgt ho rman

SQLgt alter database mount

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8it will run for a while and exit from rman and open the database using reset logs

SQLgt alter database open resetlogs

Database altered

9 check for dbid

10create temporary tablespace

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

Comment

step by step standby database configuration in 10g

Filed under Dataguard - creation of standby database in 10g by Deepak mdash Leave a comment December 9 2009

Oracle 10g ndash Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX serversand maintenance tips on the databases in a Data Guard Environment

Oracle 10g Data Guard is a great tool to ensure high availability data protection and disaster recovery for enterprise data I have been working on Data GuardSTANDBY databases using both Grid control and SQL command line for a couple of years and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago I maintain it daily and it works well I would like to share my experience with the other DBAs

In this example the database version is 10203 The PRIMARY database and STANDBY database are located on different machines at different sites The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY I use Flash Recovery Area and OMF

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1 Enable forced logging on your PRIMARY databaseSQLgt ALTER DATABASE FORCE LOGGING

2 Create a password file if it doesnrsquot exist1) To check if a password file already exists run the following command SQLgt select from v$pwfile_users

2) If it doesnrsquot exist use the following command to create one- On Windows $cd ORACLE_HOMEdatabase$orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with the password for the SYS user)

- On UNIX$Cd $ORACLE_HOMEdbs$Orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with your actual password for the SYS user)

3 Configure a STANDBY Redo log1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files To find out the size of your online redo log filesSQLgt select bytes from v$log

BYTESmdashmdashmdash-524288005242880052428800

2) Use the following command to determine your current log file groupsSQLgt select group member from v$logfile

3) Create STANDBY Redo log groupsMy PRIMARY database had 3 log file groups originally and I created 3 STANDBY redo log groups using the following commandsSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M

4) To verify the results of the STANDBY redo log groups creation run the following querySQLgtselect from v$STANDBY_log

4 Enable Archiving on PRIMARY If your PRIMARY database is not already in Archive Log mode enable the archive log modeSQLgtshutdown immediateSQLgtstartup mountSQLgtalter database archivelogSQLgtalter database openSQLgtarchive log list

5 Set PRIMARY Database Initialization ParametersCreate a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters

1) Create pfile from spfile for the PRIMARY database- On WindowsSQLgtcreate pfile=rsquodatabasepfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgtcreate pfile=rsquodbspfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

2) Edit pfilePRIMARYora to add the new PRIMARY and STANDBY role parameters (Here the file paths are from a windows system For UNIX system specify the path accordingly)

db_name=PRIMARYdb_unique_name=PRIMARYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaPRIMARYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=STANDBY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30remote_login_passwordfile=rsquoEXCLUSIVErsquoFAL_SERVER=STANDBYFAL_CLIENT=PRIMARYSTANDBY_FILE_MANAGEMENT=AUTO Specify the location of the STANDBY DB datafiles followed by the PRIMARY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYDATAFILErsquoEoracleproduct1020oradataPRIMARYDATAFILErsquo

Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location LOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoEoracleproduct1020oradataPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquo

6 Create spfile from pfile and restart PRIMARY database using the new spfileData Guard must use SPFILE Create the SPFILE and restart database- On windowsSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodatabasepfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodbspfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodbspfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

III On the STANDBY Database Site

1 Create a copy of PRIMARY database data files on the STANDBY ServerOn PRIMARY DBSQLgtshutdown immediate

On STANDBY Server (While the PRIMARY database is shut down)1) Create directory for data files for example on windows Eoracleproduct1020oradataSTANDBYDATAFILE On UNIX create the directory accordingly

2) Copy the data files and temp files over

3) Create directory (multiplexing) for online logs for example on Windows Eoracleproduct1020oradataSTANDBYONLINELOG and FOracleflash_recovery_areaSTANDBYONLINELOGOn UNIX create the directories accordingly

4) Copy the online logs over

2 Create a Control File for the STANDBY databaseOn PRIMARY DB create a control file for the STANDBY to useSQLgtstartup mountSQLgtalter database create STANDBY controlfile as lsquoSTANDBYctlSQLgtALTER DATABASE OPEN

3 Copy the PRIMARY DB pfile to STANDBY server and renameedit the file

1) Copy pfilePRIMARYora from PRIMARY server to STANDBY server to database folder on Windows or dbs folder on UNIX under the Oracle home path

2) Rename it to pfileSTANDBYora and modify the file as follows (Here the file paths are from a windows system For UNIX system specify the path accordingly)

audit_file_dest=rsquoEoracleproduct1020adminSTANDBYadumprsquobackground_dump_dest=rsquoEoracleproduct1020adminSTANDBYbdumprsquocore_dump_dest=rsquoEoracleproduct1020adminSTANDBYcdumprsquouser_dump_dest=rsquoEoracleproduct1020adminSTANDBYudumprsquocompatible=rsquo102030primecontrol_files=rsquoEORACLEPRODUCT1020ORADATASTANDBYCONTROLFILESTANDBYCTLrsquoFORACLEFLASH_RECOVERY_AREASTANDBYCONTROLFILESTANDBYCTLrsquodb_name=rsquoPRIMARYrsquodb_unique_name=STANDBYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaSTANDBYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=PRIMARY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30FAL_SERVER=PRIMARYFAL_CLIENT=STANDBYremote_login_passwordfile=rsquoEXCLUSIVErsquo Specify the location of the PRIMARY DB datafiles followed by the STANDBY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARYDATAFILErsquorsquoEoracleproduct1020oradataSTANDBYDATAFILErsquo Specify the location of the PRIMARY DB online redo log files followed by the STANDBY locationLOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARY

ONLINELOGrsquorsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquoSTANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4 On STANDBY server create all required directories for dump and archived log destinationCreate directories adump bdump cdump udump and archived log destinations for the STANDBY database

5 Copy the STANDBY control file lsquoSTANDBYctlrsquo from PRIMARY to STANDBY destinations

6 Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBYoraOn Windows copy it to database folder and on UNIX copy it to dbs directory And then rename the password file

7 For Windows create a Windows-based service (optional)$oradim ndashNEW ndashSID STANDBY ndashSTARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On PRIMARY system use Oracle Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop$lsnrctl start

2) On STANDBY server use Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop $lsnrctl start

9 Create Oracle Net service names1) On PRIMARY system use Oracle Net Manager to create network service names for PRIMARY and STANDBY Check tnsping to both services$tnsping PRIMARY$tnsping STANDBY

2) On STANDBY system use Oracle Net Manager to create network service names for PRIMARY and STANDBY Check tnsping to both services$tnsping PRIMARY$tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID

11 Start up nomount the STANDBY database and generate a spfile- On Windows SQLgtstartup nomount pfile=rsquodatabasepfileSTANDBYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfileSTANDBYorarsquondash Restart the STANDBY database using the newly created SPFILESQLgtshutdown immediateSQLgtstartup mount

- On UNIX SQLgtstartup nomount pfile=rsquodbspfileSTANDBYorarsquoSQLgtcreate spfile from pfile=rsquodbspfileSTANDBYorarsquondash Restart the STANDBY database using the newly created SPFILESQLgtshutdown immediateSQLgtstartup mount(Note- specify your Oracle home path to replace lsquorsquo)

12 Start Redo apply1) On the STANDBY database to start redo applySQLgtalter database recover managed STANDBY database disconnect from session

If you ever need to stop log apply servicesSQLgt alter database recover managed STANDBY database cancel

13 Verify the STANDBY database is performing properly1) On STANDBY perform a querySQLgtselect sequence first_time next_time from v$archived_log

2) On PRIMARY force a logfile switchSQLgtalter system switch logfile

3) On STANDBY verify the archived redo log files were appliedSQLgtselect sequence applied from v$archived_log order by sequence

14 If you want the redo data to be applied as it is received without waiting for the current STANDBY redo log file to be archived enable the real-time apply

To start real-time applySQLgt alter database recover managed STANDBY database using current logfile disconnect

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled weekly Hot Whole database backup against my PRIMARY database that also backs up and delete the archived logs on PRIMARY

For the STANDBY database I run RMAN to backup and delete the archive logs once per week $rman target STANDBYRMANgtbackup archivelog all delete input

To delete the archivelog backup files on the STANDBY server I run the following once a monthRMANgtdelete backupset

3 Password managementThe password for the SYS user must be identical on every system for the redo data transmission to succeed If you change the password for SYS on PRIMARY database you will have to update the password file for STANDBY database accordingly otherwise the logs wonrsquot be shipped to the STANDBY server

Refer to section II2 step 2 to updaterecreate password file for the STANDBY Sdatabase

  • Manual Database up gradation from 920 to 1010
  • Duplicate Database With RMAN Without Connecting To Target Database
  • Features introduced in the various Oracle server releases
  • Features introduced in the various server releases
    • Schema Referesh
    • JOB SCHEDULING
    • Steps to switchover standby to primary
    • Encryption with Oracle Data Pump
    • DATAPUMP
    • Clone Database using RMAN
    • step by step standby database configuration in 10g
Page 59: Manual Database Up Gradation From 9

DIRECTORY=dpumpdir1 DUMPFILE=employeesdmp

bull REMAP_DATAFILES ndash This is a very useful feature when you move databases between platforms that have different file naming conventions This parameter changes the source datafile name to the target datafile name in all SQL statements where the source

datafile is referenced Because the REMAP_DATAFILE value uses quotation marks itrsquos best to specify the parameter within a parameter file

Example

The parameter file payrollpar has the following content

DIRECTORY=dpump_dir1

FULL=Y

DUMPFILE=db_fulldmp

REMAP_DATAFILE=rdquorsquoCDB1HRDATAPAYROLLtbs6dbfrsquorsquodb1hrdatapayrolltbs6dbf

You can then issue the following command

gt impdp usernamepassword PARFILE=payrollpar

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable A couple of prominent features are described hereInteractive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode Because Data Pump jobs run entirely on the server you can start an export or import job detach from it and later reconnect to the job to monitor its progress Here are some of the things you can do while in this mode

See the status of the job All of the information needed to monitor the jobrsquos execution is available

Add more dump files if there is insufficient disk space for an export file Change the default size of the dump files Stop the job (perhaps it is consuming too many resources) and later restart it (when more

resources become available) Restart the job If a job was stopped for any reason (system failure power outage) you

can attach to the job and then restart it

Increase or decrease the number of active worker processes for the job (Enterprise Edition only)

Attach to a job from a remote site (such as from home) to monitor status

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk This is very useful if yoursquore moving data between databases like data marts to data warehouses and disk space is not readily available Note that if you are moving large volumes of data Network mode is probably going to be slower than file mode Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance Network export gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance) This is useful when there is a need to export data from a standby database

Generating SQLFILES

In original Import the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script With Data Pump itrsquos a lot easier to get a workable DDL script When you run Data Pump Import and specify the SQLFILE parameter a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types not just tables and indexes Although this output file is ready for execution the DDL statements are not actually executed so the target system will not be changed

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output For example if you want to create a database that contains all the tables and indexes of the source database but that does not include the same constraints grantsand other metadata you would issue a command as follows

gtimpdp usernamepassword DIRECTORY=dpumpdir1 DUMPFILE=expfulldmp

SQLFILE=dpump_dir2expfullsql INCLUDE=TABLEINDEX

The SQL file named expfullsql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired

Comment

Clone Database using RMAN

Filed under Clone database using RMAN by Deepak mdash Leave a comment

December 10 2009

Clone database using Rman

Target db test

Clone db clone

In target database

1Take full backup using Rman

SQLgt archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination coracleora92RDBMS

Oldest online log sequence 14

Next log sequence to archive 16

Current log sequence 16

SQLgt ho rman

Recovery Manager Release 92010 ndash Production

Copyright (c) 1995 2002 Oracle Corporation All rights reserved

RMANgt connect target

connected to target database TEST (DBID=1972233550)

RMANgt show all

using target database controlfile instead of recovery catalog

RMAN configuration parameters are

CONFIGURE RETENTION POLICY TO REDUNDANCY 1 default

CONFIGURE BACKUP OPTIMIZATION OFF default

CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1create servicepassword fileand put entries in tnsnamesora and lsnrctlora files Create all the folders neeeded for a database

2edit the pfile and add following commands

Db_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

Log_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

3startup the listner using lsnrctl cmd and then startup the clone db in nomount using pfile

SQLgt conn as sysdba

Connected to an idle instance

SQLgt startup pfile=rsquoCoracleadminclonepfileinitcloneorarsquo nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4connect rman

5rmangtconnect target syssystest(TARGET DB)

6 rmangtconnect auxiliary syssys

7 rmangtduplicate target database to lsquoclonersquo(CLONE DBNAME)

SQLgt ho rman

RMANgt connect target syssystest

connected to target database TEST (DBID=1972233550)

RMANgt connect auxiliary syssys

connected to auxiliary database CLONE (not mounted)

RMANgt duplicate target database to lsquoclonersquo

Scripts will be runninghellip

SQLgt select name from v$database

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQLgt ho rman

SQLgt alter database mount

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8it will run for a while and exit from rman and open the database using reset logs

SQLgt alter database open resetlogs

Database altered

9 check for dbid

10create temporary tablespace

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

Comment

step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g by Deepak — Leave a comment, December 9, 2009

Oracle 10g – Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips for the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and the STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use a Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server, and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases, and that the Oracle home paths are identical.

3. Test the STANDBY database creation in a test environment first, before working on the production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check whether a password file already exists, run:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one.
- On Windows:
$cd %ORACLE_HOME%\database
$orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$cd $ORACLE_HOME/dbs
$orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

     BYTES
----------
  52428800
  52428800
  52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, so I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log group creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archivelog mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

5. Set PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database.
- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;
- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;
(Note: replace <ORACLE_HOME> with your Oracle home path.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;
- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup;
(Note: replace <ORACLE_HOME> with your Oracle home path.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On PRIMARY:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.
2) Copy the data files and temp files over.
3) Create the directories (multiplexing) for the online logs, for example on Windows E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.
4) Copy the online logs over.

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> alter database open;

3. Copy the PRIMARY DB pfile to the STANDBY server, then rename and edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify it as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from PRIMARY to the STANDBY destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory; then rename the password file.

7. For Windows, create a Windows-based service (optional):
$oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY, then restart the listener: $lsnrctl stop, $lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY, then restart the listener (a hand-edited listener.ora sketch is shown below as an alternative): $lsnrctl stop, $lsnrctl start
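If you prefer to edit listener.ora by hand instead of using Net Manager, a static registration entry on the STANDBY server might look roughly like this (the host name, port and Oracle home are placeholders, not values from the original text):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = STANDBY)
      (ORACLE_HOME = E:\oracle\product\10.2.0\db_1)
      (SID_NAME = STANDBY)
    )
  )
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
  )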

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services: $tnsping PRIMARY, $tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services: $tnsping PRIMARY, $tnsping STANDBY (a sample tnsnames.ora sketch follows below)
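For reference, hand-written tnsnames.ora entries on both servers might look roughly like this (the host names and port are placeholders for your environment):

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )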

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID, for example as shown below.
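A quick illustration (the Oracle home paths are placeholders):

On Windows:
C:\> set ORACLE_HOME=E:\oracle\product\10.2.0\db_1
C:\> set ORACLE_SID=STANDBY

On UNIX:
$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STANDBY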

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate;
SQL> startup mount;
(Note: replace <ORACLE_HOME> with your Oracle home path.)

12. Start Redo apply.
1) On the STANDBY database, start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify that the STANDBY database is performing properly.
1) On STANDBY, run a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify that the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly, otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.


Increase or decrease the number of active worker processes for the job (Enterprise Edition only)

Attach to a job from a remote site (such as from home) to monitor status

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link) without creating a dump file on disk. This is very useful if you're moving data between databases, such as from data marts to a data warehouse, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export also gives you the ability to export read-only databases (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance); this is useful when there is a need to export data from a standby database.
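As a rough illustration of network mode (the directory object, database link name and file names here are placeholders, not values from the original text), an export pulled over a database link could look like:

>expdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=source_db_link DUMPFILE=netexp.dmp LOGFILE=netexp.log

The dump file is written on the instance where the job runs, while the data and metadata are read from the remote database identified by the link.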

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file containing the SQL commands necessary to recreate tables and indexes, which you could then edit to get a workable DDL script. With Data Pump it's a lot easier to get a workable DDL script: when you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used to tailor the SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but not the same constraints, grants, and other metadata, you would issue a command as follows:

>impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include the SQL DDL that could be executed in another database to create the tables and indexes as desired.


CONFIGURE DEFAULT DEVICE TYPE TO DISK default

CONFIGURE CONTROLFILE AUTOBACKUP ON

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO lsquoFrsquo default

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 default

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1 default

CONFIGURE MAXSETSIZE TO UNLIMITED default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO lsquoCORACLEORA92DATABASESNCFTESTORArsquo default

RMANgt backup database plus archivelog

Starting backup at 23-DEC-08

current log archived

allocated channel ORA_DISK_1

channel ORA_DISK_1 sid=17 devtype=DISK

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=14 recid=1 stamp=674240935

input archive log thread=1 sequence=15 recid=2 stamp=674240997

input archive log thread=1 sequence=16 recid=3 stamp=674242208

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE4K307L0_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000003

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1create servicepassword fileand put entries in tnsnamesora and lsnrctlora files Create all the folders neeeded for a database

2edit the pfile and add following commands

Db_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

Log_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

3startup the listner using lsnrctl cmd and then startup the clone db in nomount using pfile

SQLgt conn as sysdba

Connected to an idle instance

SQLgt startup pfile=rsquoCoracleadminclonepfileinitcloneorarsquo nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4connect rman

5rmangtconnect target syssystest(TARGET DB)

6 rmangtconnect auxiliary syssys

7 rmangtduplicate target database to lsquoclonersquo(CLONE DBNAME)

SQL> ho rman

RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to 'clone'

Scripts will be running...

SQL> select name from v$database;
select name from v$database
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
ERROR at line 1:
ORA-01100: database already mounted

8. The duplicate will run for a while; once it finishes, exit from RMAN and open the database with RESETLOGS.

SQL> alter database open resetlogs;

Database altered.

9. Check the DBID.

10. Create a temporary tablespace (see the example after the verification output below).

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550
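To complete step 10, a temporary tablespace can then be added on the clone; the tablespace name, file path, and size below are illustrative only:

SQL> create temporary tablespace temp1 tempfile 'C:\oracle\oradata\clone\temp01.dbf' size 100M;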

step by step standby database configuration in 10g

Filed under: Dataguard - creation of standby database in 10g, by Deepak, December 9, 2009

Oracle 10g - Manual Creation of a Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips for the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection, and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and the STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use a Flash Recovery Area and OMF.

I. Before you get started

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.

2. Install the Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and that the Oracle home paths are identical.

3. Test the STANDBY database creation in a test environment first before working on the production database.

II. On the PRIMARY Database Side

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one.
- On Windows:
$ cd ORACLE_HOME\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

5. Set the PRIMARY database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters.

1) Create a pfile from the spfile for the PRIMARY database.
- On Windows:
SQL> create pfile='\database\pfilePRIMARY.ora' from spfile;
(Note: prefix the file path with your Oracle home path.)

- On UNIX:
SQL> create pfile='/dbs/pfilePRIMARY.ora' from spfile;
(Note: prefix the file path with your Oracle home path.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'

# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate
SQL> startup nomount pfile='\database\pfilePRIMARY.ora'
SQL> create spfile from pfile='\database\pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: prefix the file paths with your Oracle home path.)

- On UNIX:
SQL> shutdown immediate
SQL> startup nomount pfile='/dbs/pfilePRIMARY.ora'
SQL> create spfile from pfile='/dbs/pfilePRIMARY.ora';
-- Restart the PRIMARY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup
(Note: prefix the file paths with your Oracle home path.)

III. On the STANDBY Database Site

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create directories (multiplexing) for the online logs, for example on Windows: E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a control file for the STANDBY database. On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump, and udump directories and the archived log destination for the STANDBY database (see the example below).
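On Windows, for instance, the directories could be created as follows; the paths match the pfile above, so adjust the drive letters and locations to your own layout:

mkdir E:\oracle\product\10.2.0\admin\STANDBY\adump
mkdir E:\oracle\product\10.2.0\admin\STANDBY\bdump
mkdir E:\oracle\product\10.2.0\admin\STANDBY\cdump
mkdir E:\oracle\product\10.2.0\admin\STANDBY\udump
mkdir F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG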

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY server to the STANDBY control file destinations.

6. Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory, then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
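If you prefer editing tnsnames.ora directly rather than using Net Manager, entries along these lines could be used on both servers; the host names and port are placeholders to adapt to your environment:

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )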

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID.
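For example (Windows first, then UNIX); the Oracle home locations shown are placeholders for your actual installation paths:

C:\> set ORACLE_HOME=E:\oracle\product\10.2.0\db_1
C:\> set ORACLE_SID=STANDBY

$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STANDBY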

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='\database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='/dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: prefix the file paths with your Oracle home path.)

12. Start Redo Apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify that the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify that the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;
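A quick supplementary check (not part of the original write-up) is to look only at the highest applied log sequence on the STANDBY and compare it with the latest sequence generated on the PRIMARY:

SQL> select max(sequence#) from v$archived_log where applied = 'YES';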

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY (see the sketch below).
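One possible shape for such a weekly job is the following RMAN command; this is an illustrative sketch, not the author's actual backup script:

RMAN> backup database plus archivelog delete input;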

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.
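Recreating the STANDBY password file again uses orapwd, for example on UNIX; as before, xxxxxxxx stands for the SYS password:

$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdSTANDBY.ora password=xxxxxxxx force=y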

  • Manual Database up gradation from 920 to 1010
  • Duplicate Database With RMAN Without Connecting To Target Database
  • Features introduced in the various Oracle server releases
  • Features introduced in the various server releases
    • Schema Referesh
    • JOB SCHEDULING
    • Steps to switchover standby to primary
    • Encryption with Oracle Data Pump
    • DATAPUMP
    • Clone Database using RMAN
    • step by step standby database configuration in 10g
Page 63: Manual Database Up Gradation From 9

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1

channel ORA_DISK_1 starting full datafile backupset

channel ORA_DISK_1 specifying datafile(s) in backupset

input datafile fno=00001ORACLEORADATATESTSYSTEM01DBF

input datafile fno=00002ORACLEORADATATESTUNDOTBS01DBF

input datafile fno=00005ORACLEORADATATESTEXAMPLE01DBF

input datafile fno=00010ORACLEORADATATESTXDB01DBF

input datafile fno=00006ORACLEORADATATESTINDX01DBF

input datafile fno=00009ORACLEORADATATESTUSERS01DBF

input datafile fno=00003ORACLEORADATATESTCWMLITE01DBF

input datafile fno=00004ORACLEORADATATESTDRSYS01DBF

input datafile fno=00007ORACLEORADATATESTODM01DBF

input datafile fno=00008ORACLEORADATATESTTOOLS01DBF

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE5K307L5_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000056

Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

current log archived

using channel ORA_DISK_1

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1create servicepassword fileand put entries in tnsnamesora and lsnrctlora files Create all the folders neeeded for a database

2edit the pfile and add following commands

Db_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

Log_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

3startup the listner using lsnrctl cmd and then startup the clone db in nomount using pfile

SQLgt conn as sysdba

Connected to an idle instance

SQLgt startup pfile=rsquoCoracleadminclonepfileinitcloneorarsquo nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4connect rman

5rmangtconnect target syssystest(TARGET DB)

6 rmangtconnect auxiliary syssys

7 rmangtduplicate target database to lsquoclonersquo(CLONE DBNAME)

SQLgt ho rman

RMANgt connect target syssystest

connected to target database TEST (DBID=1972233550)

RMANgt connect auxiliary syssys

connected to auxiliary database CLONE (not mounted)

RMANgt duplicate target database to lsquoclonersquo

Scripts will be runninghellip

SQLgt select name from v$database

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQLgt ho rman

SQLgt alter database mount

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8it will run for a while and exit from rman and open the database using reset logs

SQLgt alter database open resetlogs

Database altered

9 check for dbid

10create temporary tablespace

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

Comment

step by step standby database configuration in 10g

Filed under Dataguard - creation of standby database in 10g by Deepak mdash Leave a comment December 9 2009

Oracle 10g ndash Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX serversand maintenance tips on the databases in a Data Guard Environment

Oracle 10g Data Guard is a great tool to ensure high availability data protection and disaster recovery for enterprise data I have been working on Data GuardSTANDBY databases using both Grid control and SQL command line for a couple of years and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago I maintain it daily and it works well I would like to share my experience with the other DBAs

In this example the database version is 10203 The PRIMARY database and STANDBY database are located on different machines at different sites The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY I use Flash Recovery Area and OMF

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1 Enable forced logging on your PRIMARY databaseSQLgt ALTER DATABASE FORCE LOGGING

2 Create a password file if it doesnrsquot exist1) To check if a password file already exists run the following command SQLgt select from v$pwfile_users

2) If it doesnrsquot exist use the following command to create one- On Windows $cd ORACLE_HOMEdatabase$orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with the password for the SYS user)

- On UNIX$Cd $ORACLE_HOMEdbs$Orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with your actual password for the SYS user)

3 Configure a STANDBY Redo log1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files To find out the size of your online redo log filesSQLgt select bytes from v$log

BYTESmdashmdashmdash-524288005242880052428800

2) Use the following command to determine your current log file groupsSQLgt select group member from v$logfile

3) Create STANDBY Redo log groupsMy PRIMARY database had 3 log file groups originally and I created 3 STANDBY redo log groups using the following commandsSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M

4) To verify the results of the STANDBY redo log groups creation run the following querySQLgtselect from v$STANDBY_log

4 Enable Archiving on PRIMARY If your PRIMARY database is not already in Archive Log mode enable the archive log modeSQLgtshutdown immediateSQLgtstartup mountSQLgtalter database archivelogSQLgtalter database openSQLgtarchive log list

5 Set PRIMARY Database Initialization ParametersCreate a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters

1) Create pfile from spfile for the PRIMARY database- On WindowsSQLgtcreate pfile=rsquodatabasepfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgtcreate pfile=rsquodbspfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

2) Edit pfilePRIMARYora to add the new PRIMARY and STANDBY role parameters (Here the file paths are from a windows system For UNIX system specify the path accordingly)

db_name=PRIMARYdb_unique_name=PRIMARYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaPRIMARYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=STANDBY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30remote_login_passwordfile=rsquoEXCLUSIVErsquoFAL_SERVER=STANDBYFAL_CLIENT=PRIMARYSTANDBY_FILE_MANAGEMENT=AUTO Specify the location of the STANDBY DB datafiles followed by the PRIMARY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYDATAFILErsquoEoracleproduct1020oradataPRIMARYDATAFILErsquo

Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location LOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoEoracleproduct1020oradataPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquo

6 Create spfile from pfile and restart PRIMARY database using the new spfileData Guard must use SPFILE Create the SPFILE and restart database- On windowsSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodatabasepfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodbspfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodbspfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

III On the STANDBY Database Site

1 Create a copy of PRIMARY database data files on the STANDBY ServerOn PRIMARY DBSQLgtshutdown immediate

On STANDBY Server (While the PRIMARY database is shut down)1) Create directory for data files for example on windows Eoracleproduct1020oradataSTANDBYDATAFILE On UNIX create the directory accordingly

2) Copy the data files and temp files over

3) Create directory (multiplexing) for online logs for example on Windows Eoracleproduct1020oradataSTANDBYONLINELOG and FOracleflash_recovery_areaSTANDBYONLINELOGOn UNIX create the directories accordingly

4) Copy the online logs over

2 Create a Control File for the STANDBY databaseOn PRIMARY DB create a control file for the STANDBY to useSQLgtstartup mountSQLgtalter database create STANDBY controlfile as lsquoSTANDBYctlSQLgtALTER DATABASE OPEN

3 Copy the PRIMARY DB pfile to STANDBY server and renameedit the file

1) Copy pfilePRIMARYora from PRIMARY server to STANDBY server to database folder on Windows or dbs folder on UNIX under the Oracle home path

2) Rename it to pfileSTANDBYora and modify the file as follows (Here the file paths are from a windows system For UNIX system specify the path accordingly)

audit_file_dest=rsquoEoracleproduct1020adminSTANDBYadumprsquobackground_dump_dest=rsquoEoracleproduct1020adminSTANDBYbdumprsquocore_dump_dest=rsquoEoracleproduct1020adminSTANDBYcdumprsquouser_dump_dest=rsquoEoracleproduct1020adminSTANDBYudumprsquocompatible=rsquo102030primecontrol_files=rsquoEORACLEPRODUCT1020ORADATASTANDBYCONTROLFILESTANDBYCTLrsquoFORACLEFLASH_RECOVERY_AREASTANDBYCONTROLFILESTANDBYCTLrsquodb_name=rsquoPRIMARYrsquodb_unique_name=STANDBYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaSTANDBYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=PRIMARY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30FAL_SERVER=PRIMARYFAL_CLIENT=STANDBYremote_login_passwordfile=rsquoEXCLUSIVErsquo Specify the location of the PRIMARY DB datafiles followed by the STANDBY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARYDATAFILErsquorsquoEoracleproduct1020oradataSTANDBYDATAFILErsquo Specify the location of the PRIMARY DB online redo log files followed by the STANDBY locationLOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARY

ONLINELOGrsquorsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquoSTANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4 On STANDBY server create all required directories for dump and archived log destinationCreate directories adump bdump cdump udump and archived log destinations for the STANDBY database

5 Copy the STANDBY control file lsquoSTANDBYctlrsquo from PRIMARY to STANDBY destinations

6 Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBYoraOn Windows copy it to database folder and on UNIX copy it to dbs directory And then rename the password file

7 For Windows create a Windows-based service (optional)$oradim ndashNEW ndashSID STANDBY ndashSTARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On PRIMARY system use Oracle Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop$lsnrctl start

2) On STANDBY server use Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop $lsnrctl start

9 Create Oracle Net service names1) On PRIMARY system use Oracle Net Manager to create network service names for PRIMARY and STANDBY Check tnsping to both services$tnsping PRIMARY$tnsping STANDBY

2) On STANDBY system use Oracle Net Manager to create network service names for PRIMARY and STANDBY Check tnsping to both services$tnsping PRIMARY$tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID

11 Start up nomount the STANDBY database and generate a spfile- On Windows SQLgtstartup nomount pfile=rsquodatabasepfileSTANDBYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfileSTANDBYorarsquondash Restart the STANDBY database using the newly created SPFILESQLgtshutdown immediateSQLgtstartup mount

- On UNIX SQLgtstartup nomount pfile=rsquodbspfileSTANDBYorarsquoSQLgtcreate spfile from pfile=rsquodbspfileSTANDBYorarsquondash Restart the STANDBY database using the newly created SPFILESQLgtshutdown immediateSQLgtstartup mount(Note- specify your Oracle home path to replace lsquorsquo)

12 Start Redo apply1) On the STANDBY database to start redo applySQLgtalter database recover managed STANDBY database disconnect from session

If you ever need to stop log apply servicesSQLgt alter database recover managed STANDBY database cancel

13 Verify the STANDBY database is performing properly1) On STANDBY perform a querySQLgtselect sequence first_time next_time from v$archived_log

2) On PRIMARY force a logfile switchSQLgtalter system switch logfile

3) On STANDBY verify the archived redo log files were appliedSQLgtselect sequence applied from v$archived_log order by sequence

14 If you want the redo data to be applied as it is received without waiting for the current STANDBY redo log file to be archived enable the real-time apply

To start real-time applySQLgt alter database recover managed STANDBY database using current logfile disconnect

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled weekly Hot Whole database backup against my PRIMARY database that also backs up and delete the archived logs on PRIMARY

For the STANDBY database I run RMAN to backup and delete the archive logs once per week $rman target STANDBYRMANgtbackup archivelog all delete input

To delete the archivelog backup files on the STANDBY server I run the following once a monthRMANgtdelete backupset

3 Password managementThe password for the SYS user must be identical on every system for the redo data transmission to succeed If you change the password for SYS on PRIMARY database you will have to update the password file for STANDBY database accordingly otherwise the logs wonrsquot be shipped to the STANDBY server

Refer to section II2 step 2 to updaterecreate password file for the STANDBY Sdatabase

  • Manual Database up gradation from 920 to 1010
  • Duplicate Database With RMAN Without Connecting To Target Database
  • Features introduced in the various Oracle server releases
  • Features introduced in the various server releases
    • Schema Referesh
    • JOB SCHEDULING
    • Steps to switchover standby to primary
    • Encryption with Oracle Data Pump
    • DATAPUMP
    • Clone Database using RMAN
    • step by step standby database configuration in 10g
Page 64: Manual Database Up Gradation From 9

channel ORA_DISK_1 starting archive log backupset

channel ORA_DISK_1 specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270

channel ORA_DISK_1 starting piece 1 at 23-DEC-08

channel ORA_DISK_1 finished piece 1 at 23-DEC-08

piece handle=CORACLEORA92DATABASE6K307MU_1_1 comment=NONE

channel ORA_DISK_1 backup set complete elapsed time 000002

Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08

piece handle=CORACLEORA92DATABASEC-1972233550-20081223-00 comment=NONE

Finished Control File and SPFILE Autobackup at 23-DEC-08

RMANgt exit

Recovery Manager complete

SQLgt select name from v$database

NAME

mdashmdashmdash

TEST

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

In clone database

1create servicepassword fileand put entries in tnsnamesora and lsnrctlora files Create all the folders neeeded for a database

2edit the pfile and add following commands

Db_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

Log_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

3startup the listner using lsnrctl cmd and then startup the clone db in nomount using pfile

SQLgt conn as sysdba

Connected to an idle instance

SQLgt startup pfile=rsquoCoracleadminclonepfileinitcloneorarsquo nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4connect rman

5rmangtconnect target syssystest(TARGET DB)

6 rmangtconnect auxiliary syssys

7 rmangtduplicate target database to lsquoclonersquo(CLONE DBNAME)

SQLgt ho rman

RMANgt connect target syssystest

connected to target database TEST (DBID=1972233550)

RMANgt connect auxiliary syssys

connected to auxiliary database CLONE (not mounted)

RMANgt duplicate target database to lsquoclonersquo

Scripts will be runninghellip

SQLgt select name from v$database

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQLgt ho rman

SQLgt alter database mount

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8it will run for a while and exit from rman and open the database using reset logs

SQLgt alter database open resetlogs

Database altered

9 check for dbid

10create temporary tablespace

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

Comment

step by step standby database configuration in 10g

Filed under Dataguard - creation of standby database in 10g by Deepak mdash Leave a comment December 9 2009

Oracle 10g ndash Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX serversand maintenance tips on the databases in a Data Guard Environment

Oracle 10g Data Guard is a great tool to ensure high availability data protection and disaster recovery for enterprise data I have been working on Data GuardSTANDBY databases using both Grid control and SQL command line for a couple of years and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago I maintain it daily and it works well I would like to share my experience with the other DBAs

In this example the database version is 10203 The PRIMARY database and STANDBY database are located on different machines at different sites The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY I use Flash Recovery Area and OMF

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1 Enable forced logging on your PRIMARY databaseSQLgt ALTER DATABASE FORCE LOGGING

2 Create a password file if it doesnrsquot exist1) To check if a password file already exists run the following command SQLgt select from v$pwfile_users

2) If it doesnrsquot exist use the following command to create one- On Windows $cd ORACLE_HOMEdatabase$orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with the password for the SYS user)

- On UNIX$Cd $ORACLE_HOMEdbs$Orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with your actual password for the SYS user)

3 Configure a STANDBY Redo log1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files To find out the size of your online redo log filesSQLgt select bytes from v$log

BYTESmdashmdashmdash-524288005242880052428800

2) Use the following command to determine your current log file groupsSQLgt select group member from v$logfile

3) Create STANDBY Redo log groupsMy PRIMARY database had 3 log file groups originally and I created 3 STANDBY redo log groups using the following commandsSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M

4) To verify the results of the STANDBY redo log groups creation run the following querySQLgtselect from v$STANDBY_log

4 Enable Archiving on PRIMARY If your PRIMARY database is not already in Archive Log mode enable the archive log modeSQLgtshutdown immediateSQLgtstartup mountSQLgtalter database archivelogSQLgtalter database openSQLgtarchive log list

5 Set PRIMARY Database Initialization ParametersCreate a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters

1) Create pfile from spfile for the PRIMARY database- On WindowsSQLgtcreate pfile=rsquodatabasepfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgtcreate pfile=rsquodbspfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

2) Edit pfilePRIMARYora to add the new PRIMARY and STANDBY role parameters (Here the file paths are from a windows system For UNIX system specify the path accordingly)

db_name=PRIMARYdb_unique_name=PRIMARYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaPRIMARYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=STANDBY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30remote_login_passwordfile=rsquoEXCLUSIVErsquoFAL_SERVER=STANDBYFAL_CLIENT=PRIMARYSTANDBY_FILE_MANAGEMENT=AUTO Specify the location of the STANDBY DB datafiles followed by the PRIMARY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYDATAFILErsquoEoracleproduct1020oradataPRIMARYDATAFILErsquo

Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location LOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoEoracleproduct1020oradataPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquo

6 Create spfile from pfile and restart PRIMARY database using the new spfileData Guard must use SPFILE Create the SPFILE and restart database- On windowsSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodatabasepfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodbspfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodbspfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

III On the STANDBY Database Site

1 Create a copy of PRIMARY database data files on the STANDBY ServerOn PRIMARY DBSQLgtshutdown immediate

On STANDBY Server (While the PRIMARY database is shut down)1) Create directory for data files for example on windows Eoracleproduct1020oradataSTANDBYDATAFILE On UNIX create the directory accordingly

2) Copy the data files and temp files over

3) Create directory (multiplexing) for online logs for example on Windows Eoracleproduct1020oradataSTANDBYONLINELOG and FOracleflash_recovery_areaSTANDBYONLINELOGOn UNIX create the directories accordingly

4) Copy the online logs over

2 Create a Control File for the STANDBY databaseOn PRIMARY DB create a control file for the STANDBY to useSQLgtstartup mountSQLgtalter database create STANDBY controlfile as lsquoSTANDBYctlSQLgtALTER DATABASE OPEN

3 Copy the PRIMARY DB pfile to STANDBY server and renameedit the file

1) Copy pfilePRIMARYora from PRIMARY server to STANDBY server to database folder on Windows or dbs folder on UNIX under the Oracle home path

2) Rename it to pfileSTANDBYora and modify the file as follows (Here the file paths are from a windows system For UNIX system specify the path accordingly)

audit_file_dest=rsquoEoracleproduct1020adminSTANDBYadumprsquobackground_dump_dest=rsquoEoracleproduct1020adminSTANDBYbdumprsquocore_dump_dest=rsquoEoracleproduct1020adminSTANDBYcdumprsquouser_dump_dest=rsquoEoracleproduct1020adminSTANDBYudumprsquocompatible=rsquo102030primecontrol_files=rsquoEORACLEPRODUCT1020ORADATASTANDBYCONTROLFILESTANDBYCTLrsquoFORACLEFLASH_RECOVERY_AREASTANDBYCONTROLFILESTANDBYCTLrsquodb_name=rsquoPRIMARYrsquodb_unique_name=STANDBYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaSTANDBYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=PRIMARY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30FAL_SERVER=PRIMARYFAL_CLIENT=STANDBYremote_login_passwordfile=rsquoEXCLUSIVErsquo Specify the location of the PRIMARY DB datafiles followed by the STANDBY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARYDATAFILErsquorsquoEoracleproduct1020oradataSTANDBYDATAFILErsquo Specify the location of the PRIMARY DB online redo log files followed by the STANDBY locationLOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARY

ONLINELOGrsquorsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquoSTANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4 On STANDBY server create all required directories for dump and archived log destinationCreate directories adump bdump cdump udump and archived log destinations for the STANDBY database

5 Copy the STANDBY control file lsquoSTANDBYctlrsquo from PRIMARY to STANDBY destinations

6 Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBYoraOn Windows copy it to database folder and on UNIX copy it to dbs directory And then rename the password file

7 For Windows create a Windows-based service (optional)$oradim ndashNEW ndashSID STANDBY ndashSTARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On PRIMARY system use Oracle Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop$lsnrctl start

2) On STANDBY server use Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop $lsnrctl start

9 Create Oracle Net service names1) On PRIMARY system use Oracle Net Manager to create network service names for PRIMARY and STANDBY Check tnsping to both services$tnsping PRIMARY$tnsping STANDBY

2) On STANDBY system use Oracle Net Manager to create network service names for PRIMARY and STANDBY Check tnsping to both services$tnsping PRIMARY$tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID

11 Start up nomount the STANDBY database and generate a spfile- On Windows SQLgtstartup nomount pfile=rsquodatabasepfileSTANDBYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfileSTANDBYorarsquondash Restart the STANDBY database using the newly created SPFILESQLgtshutdown immediateSQLgtstartup mount

- On UNIX SQLgtstartup nomount pfile=rsquodbspfileSTANDBYorarsquoSQLgtcreate spfile from pfile=rsquodbspfileSTANDBYorarsquondash Restart the STANDBY database using the newly created SPFILESQLgtshutdown immediateSQLgtstartup mount(Note- specify your Oracle home path to replace lsquorsquo)

12 Start Redo apply1) On the STANDBY database to start redo applySQLgtalter database recover managed STANDBY database disconnect from session

If you ever need to stop log apply servicesSQLgt alter database recover managed STANDBY database cancel

13 Verify the STANDBY database is performing properly1) On STANDBY perform a querySQLgtselect sequence first_time next_time from v$archived_log

2) On PRIMARY force a logfile switchSQLgtalter system switch logfile

3) On STANDBY verify the archived redo log files were appliedSQLgtselect sequence applied from v$archived_log order by sequence

14 If you want the redo data to be applied as it is received without waiting for the current STANDBY redo log file to be archived enable the real-time apply

To start real-time applySQLgt alter database recover managed STANDBY database using current logfile disconnect

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled weekly Hot Whole database backup against my PRIMARY database that also backs up and delete the archived logs on PRIMARY

For the STANDBY database I run RMAN to backup and delete the archive logs once per week $rman target STANDBYRMANgtbackup archivelog all delete input

To delete the archivelog backup files on the STANDBY server I run the following once a monthRMANgtdelete backupset

3 Password managementThe password for the SYS user must be identical on every system for the redo data transmission to succeed If you change the password for SYS on PRIMARY database you will have to update the password file for STANDBY database accordingly otherwise the logs wonrsquot be shipped to the STANDBY server

Refer to section II2 step 2 to updaterecreate password file for the STANDBY Sdatabase

  • Manual Database up gradation from 920 to 1010
  • Duplicate Database With RMAN Without Connecting To Target Database
  • Features introduced in the various Oracle server releases
  • Features introduced in the various server releases
    • Schema Referesh
    • JOB SCHEDULING
    • Steps to switchover standby to primary
    • Encryption with Oracle Data Pump
    • DATAPUMP
    • Clone Database using RMAN
    • step by step standby database configuration in 10g
Page 65: Manual Database Up Gradation From 9

1create servicepassword fileand put entries in tnsnamesora and lsnrctlora files Create all the folders neeeded for a database

2edit the pfile and add following commands

Db_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

Log_file_name_convert=rsquotarget db oradata pathrsquorsquoclone db oradata pathrsquo

3startup the listner using lsnrctl cmd and then startup the clone db in nomount using pfile

SQLgt conn as sysdba

Connected to an idle instance

SQLgt startup pfile=rsquoCoracleadminclonepfileinitcloneorarsquo nomount

ORACLE instance started

Total System Global Area 135338868 bytes

Fixed Size 453492 bytes

Variable Size 109051904 bytes

Database Buffers 25165824 bytes

Redo Buffers 667648 bytes

SQLgt ho lsnrctl status

SQLgt ho lsnrctl stop

SQLgt ho lsnrctl start

4connect rman

5rmangtconnect target syssystest(TARGET DB)

6 rmangtconnect auxiliary syssys

7 rmangtduplicate target database to lsquoclonersquo(CLONE DBNAME)

SQLgt ho rman

RMANgt connect target syssystest

connected to target database TEST (DBID=1972233550)

RMANgt connect auxiliary syssys

connected to auxiliary database CLONE (not mounted)

RMANgt duplicate target database to lsquoclonersquo

Scripts will be runninghellip

SQLgt select name from v$database

select name from v$database

ERROR at line 1

ORA-01507 database not mounted

SQLgt ho rman

SQLgt alter database mount

alter database mount

ERROR at line 1

ORA-01100 database already mounted

8it will run for a while and exit from rman and open the database using reset logs

SQLgt alter database open resetlogs

Database altered

9 check for dbid

10create temporary tablespace

SQLgt select name from v$database

NAME

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

Comment

step by step standby database configuration in 10g

Filed under Dataguard - creation of standby database in 10g by Deepak mdash Leave a comment December 9 2009

Oracle 10g ndash Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX serversand maintenance tips on the databases in a Data Guard Environment

Oracle 10g Data Guard is a great tool to ensure high availability data protection and disaster recovery for enterprise data I have been working on Data GuardSTANDBY databases using both Grid control and SQL command line for a couple of years and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago I maintain it daily and it works well I would like to share my experience with the other DBAs

In this example the database version is 10203 The PRIMARY database and STANDBY database are located on different machines at different sites The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY I use Flash Recovery Area and OMF

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1 Enable forced logging on your PRIMARY databaseSQLgt ALTER DATABASE FORCE LOGGING

2 Create a password file if it doesnrsquot exist1) To check if a password file already exists run the following command SQLgt select from v$pwfile_users

2) If it doesnrsquot exist use the following command to create one- On Windows $cd ORACLE_HOMEdatabase$orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with the password for the SYS user)

- On UNIX$Cd $ORACLE_HOMEdbs$Orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with your actual password for the SYS user)

3 Configure a STANDBY Redo log1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files To find out the size of your online redo log filesSQLgt select bytes from v$log

BYTESmdashmdashmdash-524288005242880052428800

2) Use the following command to determine your current log file groupsSQLgt select group member from v$logfile

3) Create STANDBY Redo log groupsMy PRIMARY database had 3 log file groups originally and I created 3 STANDBY redo log groups using the following commandsSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M

4) To verify the results of the STANDBY redo log groups creation run the following querySQLgtselect from v$STANDBY_log

4 Enable Archiving on PRIMARY If your PRIMARY database is not already in Archive Log mode enable the archive log modeSQLgtshutdown immediateSQLgtstartup mountSQLgtalter database archivelogSQLgtalter database openSQLgtarchive log list

5 Set PRIMARY Database Initialization ParametersCreate a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters

1) Create pfile from spfile for the PRIMARY database- On WindowsSQLgtcreate pfile=rsquodatabasepfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgtcreate pfile=rsquodbspfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

2) Edit pfilePRIMARYora to add the new PRIMARY and STANDBY role parameters (Here the file paths are from a windows system For UNIX system specify the path accordingly)

db_name=PRIMARYdb_unique_name=PRIMARYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaPRIMARYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=STANDBY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30remote_login_passwordfile=rsquoEXCLUSIVErsquoFAL_SERVER=STANDBYFAL_CLIENT=PRIMARYSTANDBY_FILE_MANAGEMENT=AUTO Specify the location of the STANDBY DB datafiles followed by the PRIMARY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYDATAFILErsquoEoracleproduct1020oradataPRIMARYDATAFILErsquo

Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location LOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoEoracleproduct1020oradataPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquo

6 Create spfile from pfile and restart PRIMARY database using the new spfileData Guard must use SPFILE Create the SPFILE and restart database- On windowsSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodatabasepfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodbspfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodbspfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

III On the STANDBY Database Site

1 Create a copy of PRIMARY database data files on the STANDBY ServerOn PRIMARY DBSQLgtshutdown immediate

On STANDBY Server (While the PRIMARY database is shut down)1) Create directory for data files for example on windows Eoracleproduct1020oradataSTANDBYDATAFILE On UNIX create the directory accordingly

2) Copy the data files and temp files over

3) Create directory (multiplexing) for online logs for example on Windows Eoracleproduct1020oradataSTANDBYONLINELOG and FOracleflash_recovery_areaSTANDBYONLINELOGOn UNIX create the directories accordingly

4) Copy the online logs over

2 Create a Control File for the STANDBY databaseOn PRIMARY DB create a control file for the STANDBY to useSQLgtstartup mountSQLgtalter database create STANDBY controlfile as lsquoSTANDBYctlSQLgtALTER DATABASE OPEN

3 Copy the PRIMARY DB pfile to STANDBY server and renameedit the file

1) Copy pfilePRIMARYora from PRIMARY server to STANDBY server to database folder on Windows or dbs folder on UNIX under the Oracle home path

2) Rename it to pfileSTANDBYora and modify the file as follows (Here the file paths are from a windows system For UNIX system specify the path accordingly)

audit_file_dest=rsquoEoracleproduct1020adminSTANDBYadumprsquobackground_dump_dest=rsquoEoracleproduct1020adminSTANDBYbdumprsquocore_dump_dest=rsquoEoracleproduct1020adminSTANDBYcdumprsquouser_dump_dest=rsquoEoracleproduct1020adminSTANDBYudumprsquocompatible=rsquo102030primecontrol_files=rsquoEORACLEPRODUCT1020ORADATASTANDBYCONTROLFILESTANDBYCTLrsquoFORACLEFLASH_RECOVERY_AREASTANDBYCONTROLFILESTANDBYCTLrsquodb_name=rsquoPRIMARYrsquodb_unique_name=STANDBYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaSTANDBYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=PRIMARY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30FAL_SERVER=PRIMARYFAL_CLIENT=STANDBYremote_login_passwordfile=rsquoEXCLUSIVErsquo Specify the location of the PRIMARY DB datafiles followed by the STANDBY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARYDATAFILErsquorsquoEoracleproduct1020oradataSTANDBYDATAFILErsquo Specify the location of the PRIMARY DB online redo log files followed by the STANDBY locationLOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARY

ONLINELOGrsquorsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquoSTANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump and udump directories and the archived log destination for the STANDBY database.
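As a sketch, assuming the directory layout used elsewhere in this example, the directories could be created like this:

On Windows:
md E:\oracle\product\10.2.0\admin\STANDBY\adump
md E:\oracle\product\10.2.0\admin\STANDBY\bdump
md E:\oracle\product\10.2.0\admin\STANDBY\cdump
md E:\oracle\product\10.2.0\admin\STANDBY\udump
md F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG

On UNIX (base path is illustrative):
$ mkdir -p /u01/app/oracle/admin/STANDBY/adump /u01/app/oracle/admin/STANDBY/bdump /u01/app/oracle/admin/STANDBY/cdump /u01/app/oracle/admin/STANDBY/udump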

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY server to the STANDBY control file destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora. On Windows copy it to the database folder, and on UNIX copy it to the dbs directory; then rename the password file.
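For example, on UNIX this could look like the following (the host name is illustrative; run the rename on the STANDBY server):

$ scp $ORACLE_HOME/dbs/pwdPRIMARY.ora oracle@standbyhost:$ORACLE_HOME/dbs/
$ mv $ORACLE_HOME/dbs/pwdPRIMARY.ora $ORACLE_HOME/dbs/pwdSTANDBY.ora

On Windows, copy pwdPRIMARY.ora into %ORACLE_HOME%\database on the STANDBY server and rename it to pwdSTANDBY.ora.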

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start
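If you prefer editing listener.ora directly instead of using Net Manager, an entry along the following lines registers the STANDBY instance statically; the host name, port and Oracle home shown here are placeholders, not values from the original article:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = STANDBY)
      (ORACLE_HOME = E:\oracle\product\10.2.0\db_1)
      (SID_NAME = STANDBY)
    )
  )
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standbyhost)(PORT = 1521))
  )

Reload the listener afterwards with $ lsnrctl reload (or stop/start as above).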

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
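As an alternative to Net Manager, the two service names can also be defined directly in tnsnames.ora on both servers; the host names below are placeholders:

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primaryhost)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standbyhost)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )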

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID
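For example (the Oracle home path shown is illustrative):

- On Windows:
C:\> set ORACLE_HOME=E:\oracle\product\10.2.0\db_1
C:\> set ORACLE_SID=STANDBY

- On UNIX:
$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STANDBY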

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<Oracle_home>\database\pfileSTANDBY.ora'
SQL> create spfile from pfile='<Oracle_home>\database\pfileSTANDBY.ora'
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount

- On UNIX:
SQL> startup nomount pfile='<Oracle_home>/dbs/pfileSTANDBY.ora'
SQL> create spfile from pfile='<Oracle_home>/dbs/pfileSTANDBY.ora'
-- Restart the STANDBY database using the newly created SPFILE
SQL> shutdown immediate
SQL> startup mount
(Note: replace <Oracle_home> with your Oracle home path.)

12. Start Redo Apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify that the STANDBY database is performing properly.
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;

3) On STANDBY, verify that the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY.
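The exact backup script is not shown in the article; a minimal RMAN sketch that backs up the database together with the archived logs and then deletes the logs that were backed up would be:

$ rman target /
RMAN> backup database plus archivelog delete input;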

For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.
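A minimal sketch of recreating the STANDBY password file after a SYS password change, mirroring the orapwd commands from section II.2 (replace xxxxxxxx with the new SYS password):

- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdSTANDBY.ora password=xxxxxxxx force=y

- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdSTANDBY.ora password=xxxxxxxx force=y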


connected to target database TEST (DBID=1972233550)

RMAN> connect auxiliary sys/sys

connected to auxiliary database CLONE (not mounted)

RMAN> duplicate target database to 'clone';

Scripts will be running...

SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman

SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8. It will run for a while; then exit from RMAN and open the database using resetlogs:

SQL> alter database open resetlogs;

Database altered

9. Check for the DBID:

10. Create a temporary tablespace (a sketch follows the query output below).

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550
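Step 10 above mentions creating a temporary tablespace but does not show the statement; a minimal sketch (the tablespace name, tempfile path and size are illustrative) would be:

SQL> create temporary tablespace temp1
     tempfile 'E:\oracle\product\10.2.0\oradata\CLONE\temp01.dbf' size 500m autoextend on;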


  • Manual Database up gradation from 920 to 1010
  • Duplicate Database With RMAN Without Connecting To Target Database
  • Features introduced in the various Oracle server releases
  • Features introduced in the various server releases
    • Schema Referesh
    • JOB SCHEDULING
    • Steps to switchover standby to primary
    • Encryption with Oracle Data Pump
    • DATAPUMP
    • Clone Database using RMAN
    • step by step standby database configuration in 10g
Page 67: Manual Database Up Gradation From 9

mdashmdashmdash

CLONE

SQLgt select dbid from v$database

DBID

mdashmdashmdash-

1972233550

Comment

step by step standby database configuration in 10g

Filed under Dataguard - creation of standby database in 10g by Deepak mdash Leave a comment December 9 2009

Oracle 10g ndash Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX serversand maintenance tips on the databases in a Data Guard Environment

Oracle 10g Data Guard is a great tool to ensure high availability data protection and disaster recovery for enterprise data I have been working on Data GuardSTANDBY databases using both Grid control and SQL command line for a couple of years and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago I maintain it daily and it works well I would like to share my experience with the other DBAs

In this example the database version is 10203 The PRIMARY database and STANDBY database are located on different machines at different sites The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY I use Flash Recovery Area and OMF

I Before you get started

1 Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same

2 Install Oracle database software without the starter database on the STANDBY server and patch it if necessary Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases and Oracle home paths are identical

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1 Enable forced logging on your PRIMARY databaseSQLgt ALTER DATABASE FORCE LOGGING

2 Create a password file if it doesnrsquot exist1) To check if a password file already exists run the following command SQLgt select from v$pwfile_users

2) If it doesnrsquot exist use the following command to create one- On Windows $cd ORACLE_HOMEdatabase$orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with the password for the SYS user)

- On UNIX$Cd $ORACLE_HOMEdbs$Orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with your actual password for the SYS user)

3 Configure a STANDBY Redo log1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files To find out the size of your online redo log filesSQLgt select bytes from v$log

BYTESmdashmdashmdash-524288005242880052428800

2) Use the following command to determine your current log file groupsSQLgt select group member from v$logfile

3) Create STANDBY Redo log groupsMy PRIMARY database had 3 log file groups originally and I created 3 STANDBY redo log groups using the following commandsSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M

4) To verify the results of the STANDBY redo log groups creation run the following querySQLgtselect from v$STANDBY_log

4 Enable Archiving on PRIMARY If your PRIMARY database is not already in Archive Log mode enable the archive log modeSQLgtshutdown immediateSQLgtstartup mountSQLgtalter database archivelogSQLgtalter database openSQLgtarchive log list

5 Set PRIMARY Database Initialization ParametersCreate a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters

1) Create pfile from spfile for the PRIMARY database- On WindowsSQLgtcreate pfile=rsquodatabasepfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgtcreate pfile=rsquodbspfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

2) Edit pfilePRIMARYora to add the new PRIMARY and STANDBY role parameters (Here the file paths are from a windows system For UNIX system specify the path accordingly)

db_name=PRIMARYdb_unique_name=PRIMARYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaPRIMARYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=STANDBY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30remote_login_passwordfile=rsquoEXCLUSIVErsquoFAL_SERVER=STANDBYFAL_CLIENT=PRIMARYSTANDBY_FILE_MANAGEMENT=AUTO Specify the location of the STANDBY DB datafiles followed by the PRIMARY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYDATAFILErsquoEoracleproduct1020oradataPRIMARYDATAFILErsquo

Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location LOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoEoracleproduct1020oradataPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquo

6 Create spfile from pfile and restart PRIMARY database using the new spfileData Guard must use SPFILE Create the SPFILE and restart database- On windowsSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodatabasepfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodbspfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodbspfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

III On the STANDBY Database Site

1 Create a copy of PRIMARY database data files on the STANDBY ServerOn PRIMARY DBSQLgtshutdown immediate

On STANDBY Server (While the PRIMARY database is shut down)1) Create directory for data files for example on windows Eoracleproduct1020oradataSTANDBYDATAFILE On UNIX create the directory accordingly

2) Copy the data files and temp files over

3) Create directory (multiplexing) for online logs for example on Windows Eoracleproduct1020oradataSTANDBYONLINELOG and FOracleflash_recovery_areaSTANDBYONLINELOGOn UNIX create the directories accordingly

4) Copy the online logs over

2 Create a Control File for the STANDBY databaseOn PRIMARY DB create a control file for the STANDBY to useSQLgtstartup mountSQLgtalter database create STANDBY controlfile as lsquoSTANDBYctlSQLgtALTER DATABASE OPEN

3 Copy the PRIMARY DB pfile to STANDBY server and renameedit the file

1) Copy pfilePRIMARYora from PRIMARY server to STANDBY server to database folder on Windows or dbs folder on UNIX under the Oracle home path

2) Rename it to pfileSTANDBYora and modify the file as follows (Here the file paths are from a windows system For UNIX system specify the path accordingly)

audit_file_dest=rsquoEoracleproduct1020adminSTANDBYadumprsquobackground_dump_dest=rsquoEoracleproduct1020adminSTANDBYbdumprsquocore_dump_dest=rsquoEoracleproduct1020adminSTANDBYcdumprsquouser_dump_dest=rsquoEoracleproduct1020adminSTANDBYudumprsquocompatible=rsquo102030primecontrol_files=rsquoEORACLEPRODUCT1020ORADATASTANDBYCONTROLFILESTANDBYCTLrsquoFORACLEFLASH_RECOVERY_AREASTANDBYCONTROLFILESTANDBYCTLrsquodb_name=rsquoPRIMARYrsquodb_unique_name=STANDBYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaSTANDBYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=PRIMARY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30FAL_SERVER=PRIMARYFAL_CLIENT=STANDBYremote_login_passwordfile=rsquoEXCLUSIVErsquo Specify the location of the PRIMARY DB datafiles followed by the STANDBY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARYDATAFILErsquorsquoEoracleproduct1020oradataSTANDBYDATAFILErsquo Specify the location of the PRIMARY DB online redo log files followed by the STANDBY locationLOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARY

ONLINELOGrsquorsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquoSTANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4 On STANDBY server create all required directories for dump and archived log destinationCreate directories adump bdump cdump udump and archived log destinations for the STANDBY database

5 Copy the STANDBY control file lsquoSTANDBYctlrsquo from PRIMARY to STANDBY destinations

6 Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBYoraOn Windows copy it to database folder and on UNIX copy it to dbs directory And then rename the password file

7 For Windows create a Windows-based service (optional)$oradim ndashNEW ndashSID STANDBY ndashSTARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On PRIMARY system use Oracle Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop$lsnrctl start

2) On STANDBY server use Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop $lsnrctl start

9 Create Oracle Net service names1) On PRIMARY system use Oracle Net Manager to create network service names for PRIMARY and STANDBY Check tnsping to both services$tnsping PRIMARY$tnsping STANDBY

2) On STANDBY system use Oracle Net Manager to create network service names for PRIMARY and STANDBY Check tnsping to both services$tnsping PRIMARY$tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID

11 Start up nomount the STANDBY database and generate a spfile- On Windows SQLgtstartup nomount pfile=rsquodatabasepfileSTANDBYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfileSTANDBYorarsquondash Restart the STANDBY database using the newly created SPFILESQLgtshutdown immediateSQLgtstartup mount

- On UNIX SQLgtstartup nomount pfile=rsquodbspfileSTANDBYorarsquoSQLgtcreate spfile from pfile=rsquodbspfileSTANDBYorarsquondash Restart the STANDBY database using the newly created SPFILESQLgtshutdown immediateSQLgtstartup mount(Note- specify your Oracle home path to replace lsquorsquo)

12 Start Redo apply1) On the STANDBY database to start redo applySQLgtalter database recover managed STANDBY database disconnect from session

If you ever need to stop log apply servicesSQLgt alter database recover managed STANDBY database cancel

13 Verify the STANDBY database is performing properly1) On STANDBY perform a querySQLgtselect sequence first_time next_time from v$archived_log

2) On PRIMARY force a logfile switchSQLgtalter system switch logfile

3) On STANDBY verify the archived redo log files were appliedSQLgtselect sequence applied from v$archived_log order by sequence

14 If you want the redo data to be applied as it is received without waiting for the current STANDBY redo log file to be archived enable the real-time apply

To start real-time applySQLgt alter database recover managed STANDBY database using current logfile disconnect

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled weekly Hot Whole database backup against my PRIMARY database that also backs up and delete the archived logs on PRIMARY

For the STANDBY database I run RMAN to backup and delete the archive logs once per week $rman target STANDBYRMANgtbackup archivelog all delete input

To delete the archivelog backup files on the STANDBY server I run the following once a monthRMANgtdelete backupset

3 Password managementThe password for the SYS user must be identical on every system for the redo data transmission to succeed If you change the password for SYS on PRIMARY database you will have to update the password file for STANDBY database accordingly otherwise the logs wonrsquot be shipped to the STANDBY server

Refer to section II2 step 2 to updaterecreate password file for the STANDBY Sdatabase

  • Manual Database up gradation from 920 to 1010
  • Duplicate Database With RMAN Without Connecting To Target Database
  • Features introduced in the various Oracle server releases
  • Features introduced in the various server releases
    • Schema Referesh
    • JOB SCHEDULING
    • Steps to switchover standby to primary
    • Encryption with Oracle Data Pump
    • DATAPUMP
    • Clone Database using RMAN
    • step by step standby database configuration in 10g
Page 68: Manual Database Up Gradation From 9

3 Test the STANDBY Database creation on a test environment first before working on the Production database

II On the PRIMARY Database Side

1 Enable forced logging on your PRIMARY databaseSQLgt ALTER DATABASE FORCE LOGGING

2 Create a password file if it doesnrsquot exist1) To check if a password file already exists run the following command SQLgt select from v$pwfile_users

2) If it doesnrsquot exist use the following command to create one- On Windows $cd ORACLE_HOMEdatabase$orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with the password for the SYS user)

- On UNIX$Cd $ORACLE_HOMEdbs$Orapwd file=pwdPRIMARYora password=xxxxxxxx force=y(Note Replace xxxxxxxxx with your actual password for the SYS user)

3 Configure a STANDBY Redo log1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files To find out the size of your online redo log filesSQLgt select bytes from v$log

BYTESmdashmdashmdash-524288005242880052428800

2) Use the following command to determine your current log file groupsSQLgt select group member from v$logfile

3) Create STANDBY Redo log groupsMy PRIMARY database had 3 log file groups originally and I created 3 STANDBY redo log groups using the following commandsSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50MSQLgtALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M

4) To verify the results of the STANDBY redo log groups creation run the following querySQLgtselect from v$STANDBY_log

4 Enable Archiving on PRIMARY If your PRIMARY database is not already in Archive Log mode enable the archive log modeSQLgtshutdown immediateSQLgtstartup mountSQLgtalter database archivelogSQLgtalter database openSQLgtarchive log list

5 Set PRIMARY Database Initialization ParametersCreate a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters

1) Create pfile from spfile for the PRIMARY database- On WindowsSQLgtcreate pfile=rsquodatabasepfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgtcreate pfile=rsquodbspfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

2) Edit pfilePRIMARYora to add the new PRIMARY and STANDBY role parameters (Here the file paths are from a windows system For UNIX system specify the path accordingly)

db_name=PRIMARYdb_unique_name=PRIMARYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaPRIMARYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=STANDBY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30remote_login_passwordfile=rsquoEXCLUSIVErsquoFAL_SERVER=STANDBYFAL_CLIENT=PRIMARYSTANDBY_FILE_MANAGEMENT=AUTO Specify the location of the STANDBY DB datafiles followed by the PRIMARY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYDATAFILErsquoEoracleproduct1020oradataPRIMARYDATAFILErsquo

Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location LOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoEoracleproduct1020oradataPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquo

6 Create spfile from pfile and restart PRIMARY database using the new spfileData Guard must use SPFILE Create the SPFILE and restart database- On windowsSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodatabasepfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodbspfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodbspfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

III On the STANDBY Database Site

1 Create a copy of PRIMARY database data files on the STANDBY ServerOn PRIMARY DBSQLgtshutdown immediate

On STANDBY Server (While the PRIMARY database is shut down)1) Create directory for data files for example on windows Eoracleproduct1020oradataSTANDBYDATAFILE On UNIX create the directory accordingly

2) Copy the data files and temp files over

3) Create directory (multiplexing) for online logs for example on Windows Eoracleproduct1020oradataSTANDBYONLINELOG and FOracleflash_recovery_areaSTANDBYONLINELOGOn UNIX create the directories accordingly

4) Copy the online logs over

2 Create a Control File for the STANDBY databaseOn PRIMARY DB create a control file for the STANDBY to useSQLgtstartup mountSQLgtalter database create STANDBY controlfile as lsquoSTANDBYctlSQLgtALTER DATABASE OPEN

3 Copy the PRIMARY DB pfile to STANDBY server and renameedit the file

1) Copy pfilePRIMARYora from PRIMARY server to STANDBY server to database folder on Windows or dbs folder on UNIX under the Oracle home path

2) Rename it to pfileSTANDBYora and modify the file as follows (Here the file paths are from a windows system For UNIX system specify the path accordingly)

audit_file_dest=rsquoEoracleproduct1020adminSTANDBYadumprsquobackground_dump_dest=rsquoEoracleproduct1020adminSTANDBYbdumprsquocore_dump_dest=rsquoEoracleproduct1020adminSTANDBYcdumprsquouser_dump_dest=rsquoEoracleproduct1020adminSTANDBYudumprsquocompatible=rsquo102030primecontrol_files=rsquoEORACLEPRODUCT1020ORADATASTANDBYCONTROLFILESTANDBYCTLrsquoFORACLEFLASH_RECOVERY_AREASTANDBYCONTROLFILESTANDBYCTLrsquodb_name=rsquoPRIMARYrsquodb_unique_name=STANDBYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaSTANDBYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=PRIMARY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30FAL_SERVER=PRIMARYFAL_CLIENT=STANDBYremote_login_passwordfile=rsquoEXCLUSIVErsquo Specify the location of the PRIMARY DB datafiles followed by the STANDBY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARYDATAFILErsquorsquoEoracleproduct1020oradataSTANDBYDATAFILErsquo Specify the location of the PRIMARY DB online redo log files followed by the STANDBY locationLOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARY

ONLINELOGrsquorsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquoSTANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4 On STANDBY server create all required directories for dump and archived log destinationCreate directories adump bdump cdump udump and archived log destinations for the STANDBY database

5 Copy the STANDBY control file lsquoSTANDBYctlrsquo from PRIMARY to STANDBY destinations

6 Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBYoraOn Windows copy it to database folder and on UNIX copy it to dbs directory And then rename the password file

7 For Windows create a Windows-based service (optional)$oradim ndashNEW ndashSID STANDBY ndashSTARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On PRIMARY system use Oracle Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop$lsnrctl start

2) On STANDBY server use Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop $lsnrctl start

9 Create Oracle Net service names1) On PRIMARY system use Oracle Net Manager to create network service names for PRIMARY and STANDBY Check tnsping to both services$tnsping PRIMARY$tnsping STANDBY

2) On STANDBY system use Oracle Net Manager to create network service names for PRIMARY and STANDBY Check tnsping to both services$tnsping PRIMARY$tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID

11 Start up nomount the STANDBY database and generate a spfile- On Windows SQLgtstartup nomount pfile=rsquodatabasepfileSTANDBYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfileSTANDBYorarsquondash Restart the STANDBY database using the newly created SPFILESQLgtshutdown immediateSQLgtstartup mount

- On UNIX SQLgtstartup nomount pfile=rsquodbspfileSTANDBYorarsquoSQLgtcreate spfile from pfile=rsquodbspfileSTANDBYorarsquondash Restart the STANDBY database using the newly created SPFILESQLgtshutdown immediateSQLgtstartup mount(Note- specify your Oracle home path to replace lsquorsquo)

12 Start Redo apply1) On the STANDBY database to start redo applySQLgtalter database recover managed STANDBY database disconnect from session

If you ever need to stop log apply servicesSQLgt alter database recover managed STANDBY database cancel

13 Verify the STANDBY database is performing properly1) On STANDBY perform a querySQLgtselect sequence first_time next_time from v$archived_log

2) On PRIMARY force a logfile switchSQLgtalter system switch logfile

3) On STANDBY verify the archived redo log files were appliedSQLgtselect sequence applied from v$archived_log order by sequence

14 If you want the redo data to be applied as it is received without waiting for the current STANDBY redo log file to be archived enable the real-time apply

To start real-time applySQLgt alter database recover managed STANDBY database using current logfile disconnect

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled weekly Hot Whole database backup against my PRIMARY database that also backs up and delete the archived logs on PRIMARY

For the STANDBY database I run RMAN to backup and delete the archive logs once per week $rman target STANDBYRMANgtbackup archivelog all delete input

To delete the archivelog backup files on the STANDBY server I run the following once a monthRMANgtdelete backupset

3 Password managementThe password for the SYS user must be identical on every system for the redo data transmission to succeed If you change the password for SYS on PRIMARY database you will have to update the password file for STANDBY database accordingly otherwise the logs wonrsquot be shipped to the STANDBY server

Refer to section II2 step 2 to updaterecreate password file for the STANDBY Sdatabase

  • Manual Database up gradation from 920 to 1010
  • Duplicate Database With RMAN Without Connecting To Target Database
  • Features introduced in the various Oracle server releases
  • Features introduced in the various server releases
    • Schema Referesh
    • JOB SCHEDULING
    • Steps to switchover standby to primary
    • Encryption with Oracle Data Pump
    • DATAPUMP
    • Clone Database using RMAN
    • step by step standby database configuration in 10g
Page 69: Manual Database Up Gradation From 9

4 Enable Archiving on PRIMARY If your PRIMARY database is not already in Archive Log mode enable the archive log modeSQLgtshutdown immediateSQLgtstartup mountSQLgtalter database archivelogSQLgtalter database openSQLgtarchive log list

5 Set PRIMARY Database Initialization ParametersCreate a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new PRIMARY role parameters

1) Create pfile from spfile for the PRIMARY database- On WindowsSQLgtcreate pfile=rsquodatabasepfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgtcreate pfile=rsquodbspfilePRIMARYorarsquo from spfile(Note- specify your Oracle home path to replace lsquorsquo)

2) Edit pfilePRIMARYora to add the new PRIMARY and STANDBY role parameters (Here the file paths are from a windows system For UNIX system specify the path accordingly)

db_name=PRIMARYdb_unique_name=PRIMARYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaPRIMARYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=STANDBY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30remote_login_passwordfile=rsquoEXCLUSIVErsquoFAL_SERVER=STANDBYFAL_CLIENT=PRIMARYSTANDBY_FILE_MANAGEMENT=AUTO Specify the location of the STANDBY DB datafiles followed by the PRIMARY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYDATAFILErsquoEoracleproduct1020oradataPRIMARYDATAFILErsquo

Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location LOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoEoracleproduct1020oradataPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquo

6 Create spfile from pfile and restart PRIMARY database using the new spfileData Guard must use SPFILE Create the SPFILE and restart database- On windowsSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodatabasepfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodbspfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodbspfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

III On the STANDBY Database Site

1 Create a copy of PRIMARY database data files on the STANDBY ServerOn PRIMARY DBSQLgtshutdown immediate

On STANDBY Server (While the PRIMARY database is shut down)1) Create directory for data files for example on windows Eoracleproduct1020oradataSTANDBYDATAFILE On UNIX create the directory accordingly

2) Copy the data files and temp files over

3) Create directory (multiplexing) for online logs for example on Windows Eoracleproduct1020oradataSTANDBYONLINELOG and FOracleflash_recovery_areaSTANDBYONLINELOGOn UNIX create the directories accordingly

4) Copy the online logs over

2 Create a Control File for the STANDBY databaseOn PRIMARY DB create a control file for the STANDBY to useSQLgtstartup mountSQLgtalter database create STANDBY controlfile as lsquoSTANDBYctlSQLgtALTER DATABASE OPEN

3 Copy the PRIMARY DB pfile to STANDBY server and renameedit the file

1) Copy pfilePRIMARYora from PRIMARY server to STANDBY server to database folder on Windows or dbs folder on UNIX under the Oracle home path

2) Rename it to pfileSTANDBYora and modify the file as follows (Here the file paths are from a windows system For UNIX system specify the path accordingly)

audit_file_dest=rsquoEoracleproduct1020adminSTANDBYadumprsquobackground_dump_dest=rsquoEoracleproduct1020adminSTANDBYbdumprsquocore_dump_dest=rsquoEoracleproduct1020adminSTANDBYcdumprsquouser_dump_dest=rsquoEoracleproduct1020adminSTANDBYudumprsquocompatible=rsquo102030primecontrol_files=rsquoEORACLEPRODUCT1020ORADATASTANDBYCONTROLFILESTANDBYCTLrsquoFORACLEFLASH_RECOVERY_AREASTANDBYCONTROLFILESTANDBYCTLrsquodb_name=rsquoPRIMARYrsquodb_unique_name=STANDBYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaSTANDBYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=PRIMARY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30FAL_SERVER=PRIMARYFAL_CLIENT=STANDBYremote_login_passwordfile=rsquoEXCLUSIVErsquo Specify the location of the PRIMARY DB datafiles followed by the STANDBY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARYDATAFILErsquorsquoEoracleproduct1020oradataSTANDBYDATAFILErsquo Specify the location of the PRIMARY DB online redo log files followed by the STANDBY locationLOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARY

ONLINELOGrsquorsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquoSTANDBY_FILE_MANAGEMENT=AUTO

(Note Not all the parameter entries are listed here)

4 On STANDBY server create all required directories for dump and archived log destinationCreate directories adump bdump cdump udump and archived log destinations for the STANDBY database

5 Copy the STANDBY control file lsquoSTANDBYctlrsquo from PRIMARY to STANDBY destinations

6 Copy the PRIMARY password file to STANDBY and rename it to pwdSTANDBYoraOn Windows copy it to database folder and on UNIX copy it to dbs directory And then rename the password file

7 For Windows create a Windows-based service (optional)$oradim ndashNEW ndashSID STANDBY ndashSTARTMODE manual

8 Configure listeners for the PRIMARY and STANDBY databases

1) On PRIMARY system use Oracle Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop$lsnrctl start

2) On STANDBY server use Net Manager to configure a listener for PRIMARY and STANDBY Then restart the listener$lsnrctl stop $lsnrctl start

9 Create Oracle Net service names1) On PRIMARY system use Oracle Net Manager to create network service names for PRIMARY and STANDBY Check tnsping to both services$tnsping PRIMARY$tnsping STANDBY

2) On STANDBY system use Oracle Net Manager to create network service names for PRIMARY and STANDBY Check tnsping to both services$tnsping PRIMARY$tnsping STANDBY

10 On STANDBY server setup the environment variables to point to the STANDBY database

Set up ORACLE_HOME and ORACLE_SID

11 Start up nomount the STANDBY database and generate a spfile- On Windows SQLgtstartup nomount pfile=rsquodatabasepfileSTANDBYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfileSTANDBYorarsquondash Restart the STANDBY database using the newly created SPFILESQLgtshutdown immediateSQLgtstartup mount

- On UNIX SQLgtstartup nomount pfile=rsquodbspfileSTANDBYorarsquoSQLgtcreate spfile from pfile=rsquodbspfileSTANDBYorarsquondash Restart the STANDBY database using the newly created SPFILESQLgtshutdown immediateSQLgtstartup mount(Note- specify your Oracle home path to replace lsquorsquo)

12 Start Redo apply1) On the STANDBY database to start redo applySQLgtalter database recover managed STANDBY database disconnect from session

If you ever need to stop log apply servicesSQLgt alter database recover managed STANDBY database cancel

13 Verify the STANDBY database is performing properly1) On STANDBY perform a querySQLgtselect sequence first_time next_time from v$archived_log

2) On PRIMARY force a logfile switchSQLgtalter system switch logfile

3) On STANDBY verify the archived redo log files were appliedSQLgtselect sequence applied from v$archived_log order by sequence

14 If you want the redo data to be applied as it is received without waiting for the current STANDBY redo log file to be archived enable the real-time apply

To start real-time applySQLgt alter database recover managed STANDBY database using current logfile disconnect

15 To create multiple STANDBY databases repeat this procedure

IV Maintenance

1 Check the alert log files of PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment

2 Cleanup the archive logs on PRIMARY and STANDBY servers

I scheduled weekly Hot Whole database backup against my PRIMARY database that also backs up and delete the archived logs on PRIMARY

For the STANDBY database I run RMAN to backup and delete the archive logs once per week $rman target STANDBYRMANgtbackup archivelog all delete input

To delete the archivelog backup files on the STANDBY server I run the following once a monthRMANgtdelete backupset

3 Password managementThe password for the SYS user must be identical on every system for the redo data transmission to succeed If you change the password for SYS on PRIMARY database you will have to update the password file for STANDBY database accordingly otherwise the logs wonrsquot be shipped to the STANDBY server

Refer to section II2 step 2 to updaterecreate password file for the STANDBY Sdatabase

  • Manual Database up gradation from 920 to 1010
  • Duplicate Database With RMAN Without Connecting To Target Database
  • Features introduced in the various Oracle server releases
  • Features introduced in the various server releases
    • Schema Referesh
    • JOB SCHEDULING
    • Steps to switchover standby to primary
    • Encryption with Oracle Data Pump
    • DATAPUMP
    • Clone Database using RMAN
    • step by step standby database configuration in 10g
Page 70: Manual Database Up Gradation From 9

Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location LOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoEoracleproduct1020oradataPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquo

6 Create spfile from pfile and restart PRIMARY database using the new spfileData Guard must use SPFILE Create the SPFILE and restart database- On windowsSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodatabasepfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodatabasepfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

- On UNIXSQLgt shutdown immediateSQLgt startup nomount pfile=rsquodbspfilePRIMARYorarsquoSQLgtcreate spfile from pfile=rsquodbspfilePRIMARYorarsquondash Restart the PRIMARY database using the newly created SPFILESQLgtshutdown immediateSQLgtStartup(Note- specify your Oracle home path to replace lsquorsquo)

III On the STANDBY Database Site

1 Create a copy of PRIMARY database data files on the STANDBY ServerOn PRIMARY DBSQLgtshutdown immediate

On STANDBY Server (While the PRIMARY database is shut down)1) Create directory for data files for example on windows Eoracleproduct1020oradataSTANDBYDATAFILE On UNIX create the directory accordingly

2) Copy the data files and temp files over

3) Create directory (multiplexing) for online logs for example on Windows Eoracleproduct1020oradataSTANDBYONLINELOG and FOracleflash_recovery_areaSTANDBYONLINELOGOn UNIX create the directories accordingly

4) Copy the online logs over

2 Create a Control File for the STANDBY databaseOn PRIMARY DB create a control file for the STANDBY to useSQLgtstartup mountSQLgtalter database create STANDBY controlfile as lsquoSTANDBYctlSQLgtALTER DATABASE OPEN

3 Copy the PRIMARY DB pfile to STANDBY server and renameedit the file

1) Copy pfilePRIMARYora from PRIMARY server to STANDBY server to database folder on Windows or dbs folder on UNIX under the Oracle home path

2) Rename it to pfileSTANDBYora and modify the file as follows (Here the file paths are from a windows system For UNIX system specify the path accordingly)

audit_file_dest=rsquoEoracleproduct1020adminSTANDBYadumprsquobackground_dump_dest=rsquoEoracleproduct1020adminSTANDBYbdumprsquocore_dump_dest=rsquoEoracleproduct1020adminSTANDBYcdumprsquouser_dump_dest=rsquoEoracleproduct1020adminSTANDBYudumprsquocompatible=rsquo102030primecontrol_files=rsquoEORACLEPRODUCT1020ORADATASTANDBYCONTROLFILESTANDBYCTLrsquoFORACLEFLASH_RECOVERY_AREASTANDBYCONTROLFILESTANDBYCTLrsquodb_name=rsquoPRIMARYrsquodb_unique_name=STANDBYLOG_ARCHIVE_CONFIG=rsquoDG_CONFIG=(PRIMARYSTANDBY)rsquoLOG_ARCHIVE_DEST_1=lsquoLOCATION=FOracleflash_recovery_areaSTANDBYARCHIVELOGVALID_FOR=(ALL_LOGFILESALL_ROLES)DB_UNIQUE_NAME=STANDBYrsquoLOG_ARCHIVE_DEST_2=lsquoSERVICE=PRIMARY LGWR ASYNCVALID_FOR=(ONLINE_LOGFILESPRIMARY_ROLE)DB_UNIQUE_NAME=PRIMARYrsquoLOG_ARCHIVE_DEST_STATE_1=ENABLELOG_ARCHIVE_DEST_STATE_2=ENABLELOG_ARCHIVE_FORMAT=t_s_rarcLOG_ARCHIVE_MAX_PROCESSES=30FAL_SERVER=PRIMARYFAL_CLIENT=STANDBYremote_login_passwordfile=rsquoEXCLUSIVErsquo Specify the location of the PRIMARY DB datafiles followed by the STANDBY locationDB_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARYDATAFILErsquorsquoEoracleproduct1020oradataSTANDBYDATAFILErsquo Specify the location of the PRIMARY DB online redo log files followed by the STANDBY locationLOG_FILE_NAME_CONVERT=rsquoEoracleproduct1020oradataPRIMARY

ONLINELOGrsquorsquoEoracleproduct1020oradataSTANDBYONLINELOGrsquorsquoFOracleflash_recovery_areaPRIMARYONLINELOGrsquorsquoFOracleflash_recovery_areaSTANDBYONLINELOGrsquoSTANDBY_FILE_MANAGEMENT=AUTO

(Note: not all of the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations.
Create the adump, bdump, cdump, and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY server to the STANDBY control file destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora.
On Windows copy it to the database folder, and on UNIX copy it to the dbs directory; then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system, use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server, use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start
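If you prefer editing listener.ora directly instead of using Net Manager, the STANDBY-side entry might look roughly like the sketch below (the host name, port, and Oracle home path are assumptions, not values from the original write-up):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = STANDBY)
      (ORACLE_HOME = E:\oracle\product\10.2.0\db_1)
    )
  )
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
  )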

9. Create Oracle Net service names.
1) On the PRIMARY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system, use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
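The corresponding tnsnames.ora entries (the same on both servers) might look like this sketch; the host names and port are assumptions:

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )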

10. On the STANDBY server, set up the environment variables to point to the STANDBY database.

Set up ORACLE_HOME and ORACLE_SID
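For example (the Oracle home paths are assumptions; adjust them to your installation):
- On Windows:
C:\> set ORACLE_HOME=E:\oracle\product\10.2.0\db_1
C:\> set ORACLE_SID=STANDBY
- On UNIX (bash/ksh):
$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STANDBY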

11. Start up nomount the STANDBY database and generate an spfile.
- On Windows:
SQL> startup nomount pfile='database\pfileSTANDBY.ora';
SQL> create spfile from pfile='database\pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;

- On UNIX:
SQL> startup nomount pfile='dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='dbs/pfileSTANDBY.ora';
-- Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;
(Note: prefix the pfile locations above with your Oracle home path.)

12. Start Redo Apply.
1) On the STANDBY database, start redo apply:
SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;
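A quick way to confirm that the managed recovery process (MRP) is actually running on the STANDBY (an optional check, not part of the original procedure) is:
SQL> select process, status, sequence# from v$managed_standby;
-- Look for an MRP0 row; a status of APPLYING_LOG or WAIT_FOR_LOG means redo apply is active.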

13. Verify the STANDBY database is performing properly.
1) On the STANDBY, run a query:
SQL> select sequence#, first_time, next_time from v$archived_log;

2) On the PRIMARY, force a log file switch:
SQL> alter system switch logfile;

3) On the STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;
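Real-time apply requires standby redo log files on the STANDBY database. If they have not been created yet, they can be added with statements along these lines (a sketch; the group numbers, file path, and size are assumptions, and the size should match your online redo logs):
SQL> alter database add standby logfile group 4 ('E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG\standby_redo04.log') size 50m;
SQL> alter database add standby logfile group 5 ('E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG\standby_redo05.log') size 50m;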

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor database operations in a Data Guard environment.
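For example, the STANDBY alert log lives under the background_dump_dest set earlier and is named alert_<SID>.log; on a UNIX standby it could be watched with something like the following (the path is an assumption based on a typical layout):
$ tail -f /u01/app/oracle/admin/STANDBY/bdump/alert_STANDBY.log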

2. Clean up the archived logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on the PRIMARY.

For the STANDBY database, I run RMAN to back up and delete the archived logs once per week:
$ rman target STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;
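These two STANDBY-side tasks can also be placed in a small RMAN command file and scheduled (a sketch; the connect string, file name, and the 30-day retention window are assumptions, not the original author's schedule):
$ rman target / cmdfile=standby_arch_maint.rcv
-- standby_arch_maint.rcv contains:
backup archivelog all delete input;
delete noprompt backupset completed before 'sysdate-30';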

3. Password management.
The password for the SYS user must be identical on every system for redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server.

Refer to section II.2, step 2, to update/recreate the password file for the STANDBY database.
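If the password file does need to be recreated after a SYS password change, a minimal sketch is (the password is a placeholder; the file name follows the Windows pwd<SID>.ora convention used above, while on UNIX it is normally orapw<SID> under $ORACLE_HOME/dbs):
$ orapwd file=pwdSTANDBY.ora password=<sys_password> entries=5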
