Erez Alsheich - Grid Control
TRANSCRIPT
RAC, ASM and Linux Forum, October 12, 2010
Erez Alsheich, CEO
Managing Your RAC with
OEM – Grid Control
• Leading database service provider
• Merged with ___ in July 2010
• All major databases
• Gold Partners
• OEM – Grid Control expertise and leadership
• Currently looking for talented DBAs: [email protected]
- Introduction
Some of our (300+) Customers
About OEM – Grid Control
Agenda
RAC Administration
RAC Monitoring
RAC Performance Diagnostics & Tuning
RAC Configuration Management
About OEM – Grid Control
OEM & RAC
• Entire stack: cluster, hosts, database,
database instances, ASM instances,
listeners
• All aspects: Administration,
Monitoring,
Performance diagnostics,
Performance tuning,
Configuration management
RAC Administration
Cluster Administration
Manage Resources
Resource Information
Resource Advanced Settings
Manage Resource Types
Cluster - Interconnects
Topology & Status
Dashboards – Cluster Database
Dashboards – Host
Dashboards – ASM
RAC Monitoring
Monitoring Highlights
• OOTB metrics for all target types
• OOTB thresholds and sample frequencies
• Over-Time graphs for each metric!!
• Automatic Corrective Actions
• User Defined Monitoring metrics
• Built-in Email notification
• SNMP traps to Manager-of-Managers (MoM)
• Built-in connectivity with leading MoM solutions
Database OOTB Metrics
Host OOTB Metrics
Cluster OOTB Metrics
ASM OOTB Metrics
Metric Data over Time
- Diagnostics
- Trend Analysis
- Capacity Planning
- Comparison
Host Performance Dashboard
ASM Performance Dashboard
Database Performance Dashboard
ADDM – Automatic Diagnostics
Automatic Tuning Advisor
RAC Configuration Management
Compare Configurations
Comparison Results - Summary
Comparison Results - HW
Comparison Report - OS
Change Tracking
Thanks
RAC, ASM and Linux Forum, October 12, 2010
Avi Apelbaum, DBA & System Engineer
Real-Life Experience with
RAC 11gR2
Technique 1 : Creating a new cluster
Step 1 : If your DB is 10.2.0.1 or below, first
upgrade it to 10.2.0.4.
Step 2 : Take (of course) a full backup of the DB
(RMAN or storage snapshot); a sketch follows this slide.
The following steps apply if you are using
the same machine:
Step 3 : Back up the spfile (if not in ASM) / init.ora.
Step 4 : Take note of the current services
(preferred nodes, TAF policies, etc.).
Step 5 : Uninstall the RDBMS software (both ASM and
DB homes if separated) and the cluster software.
Step 6 : Uninstall the Clusterware and clean up the
machine (see MetaLink note 239998.1).
Step 7 : Install 11gR2 Grid Infrastructure.
Step 8 : Install the 10.2.0.1 RDBMS software and
upgrade it to 10.2.0.4 (or the version of
your DB).
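A minimal sketch of the Step 2 backup using RMAN (assuming the database is in archivelog mode and a default backup destination is already configured):

rman target / <<'EOF'
# full database backup plus archived logs, controlfile and spfile
backup database plus archivelog;
backup current controlfile;
backup spfile;
EOF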
Technique 1 : Creating a new cluster
Step 9 : Copy the backed-up spfile/init.ora to its new
place.
Step 10 : Add the DB to the new cluster by using
"srvctl add database", and then add the
instances by using "srvctl add instance".
Step 11 : Add services by using "srvctl add service"
(a sketch follows this slide).
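A minimal sketch of Steps 10-11, assuming a two-node cluster (node1/node2), a database named ORCL registered from its 10.2 Oracle home, and one service named APPSVC (all names and paths are hypothetical; use the srvctl from the database's own home for a pre-11.2 database):

# register the existing database and its instances with the new cluster
srvctl add database -d ORCL -o /u01/app/oracle/product/10.2.0/db_1
srvctl add instance -d ORCL -i ORCL1 -n node1
srvctl add instance -d ORCL -i ORCL2 -n node2
# recreate the services noted in Step 4 (preferred/available instances, TAF policy)
srvctl add service -d ORCL -s APPSVC -r ORCL1 -a ORCL2 -P BASIC
srvctl start database -d ORCL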
Technique 1 : Creating a new cluster
If you choose to do it on a new machine
you have 3 possibilities:
• After shutting down the DB, unmap the LUNs from the old
machines and map them to the new machine
(it has to be the same OS).
If using Linux, run the command "oracleasm scandisks"
as the root user and then "oracleasm listdisks".
Otherwise you can use the command
"kfod disks=all dscvgroup=TRUE" (a sketch follows this slide).
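A minimal sketch of rescanning the ASM disks on the new Linux machine once the LUNs are mapped (run as root; assumes ASMLib is installed, kfod is run from the Grid Infrastructure home):

# make the re-mapped LUNs visible to ASMLib and list them
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks
# alternatively, discover the ASM disks and their diskgroups with kfod
kfod disks=all dscvgroup=TRUE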
Technique 1 : Creating a new cluster
• Export the data and then import it into a newly
created database (a sketch follows this slide).
• Use Transportable Database to move it to a new
machine. In this case the DB can be moved
between platforms (see the Oracle documentation
for limitations).
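A minimal sketch of the export/import option using Data Pump (the directory object DPDIR and the dump file name are hypothetical; run the export on the source, then the import on the newly created target database):

# on the source database: full export
expdp system full=y directory=DPDIR dumpfile=full.dmp logfile=full_exp.log
# on the target database: full import
impdp system full=y directory=DPDIR dumpfile=full.dmp logfile=full_imp.log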
Technique 2 :
Upgrading the existing cluster
This technique is well documented by Oracle, but I chose
to build a new cluster instead, for the following
reasons/issues:
• When beginning the upgrade we had only 1 votedisk.
After running rootupgrade.sh on the first node, this
node changed/upgraded the only votedisk available,
so the second node's upgrade (of course) failed
(a sketch of checking the votedisks follows this slide).
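A hedged sketch of checking, and if needed multiplexing, the voting disks on the old 10.2 clusterware before starting the upgrade, so a single votedisk is not the only copy (the raw device paths are hypothetical; on 10.2 adding votedisks is normally done with the CRS stack down and -force):

# how many voting disks does the current clusterware have?
crsctl query css votedisk
# add two more copies before the upgrade (hypothetical raw device paths)
crsctl add css votedisk /dev/raw/raw4 -force
crsctl add css votedisk /dev/raw/raw5 -force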
Technique 2 :
Upgrading the existing cluster
• After a second retry, which succeeded, we restarted the
cluster at the final step, but it failed to start because,
for some unknown reason, the interconnect and public
interface configuration had been changed in such a way
that the cluster could no longer start, and it could not be
brought to a state where reconfiguration (using oifcfg)
was possible; a sketch of the oifcfg commands follows.
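A minimal sketch of inspecting and resetting the stored interface definitions with oifcfg (the interface names and subnets are hypothetical):

# show the interface configuration the clusterware has stored
oifcfg getif
# redefine the public and private (interconnect) interfaces
oifcfg setif -global eth0/192.168.1.0:public
oifcfg setif -global eth1/10.0.0.0:cluster_interconnect
# remove a stale definition if needed
oifcfg delif -global eth2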
Moving ASM to Extended RAC
Extended ASM is essentially a diskgroup with normal or high redundancy in
which each Failure Group is on a separate storage machine
in a different location.
I used the following main steps to migrate our 11gR2 ASM
to extended RAC:
• Step 1 : Map new volumes from both storage
machines to all the cluster nodes. The same number
and size of volumes should be used on both storages.
• Step 2 : Create new diskgroup/s with normal
redundancy, where each failgroup is on a different
storage machine (a sketch follows this slide).
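A minimal sketch of Step 2, run against the ASM instance (the diskgroup name and disk paths are hypothetical):

sqlplus / as sysasm <<'EOF'
-- one failure group per storage machine, so ASM mirrors extents across the two sites
CREATE DISKGROUP DATA_EXT NORMAL REDUNDANCY
  FAILGROUP storage1 DISK '/dev/oracleasm/disks/S1_DISK1', '/dev/oracleasm/disks/S1_DISK2'
  FAILGROUP storage2 DISK '/dev/oracleasm/disks/S2_DISK1', '/dev/oracleasm/disks/S2_DISK2';
EOF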
Moving ASM to Extended RAC
• Step 2a : Create a normal redundancy diskgroup with
at least 3 disks for the votedisks and OCR.
• Step 3 : Move the votedisks to the new DG ("crsctl replace
votedisk +<NEW DG NAME>").
• Step 4 : Move the OCR (ocrconfig).
• Step 5 : Move the controlfiles to the new DGs
(a sketch follows this slide).
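A hedged sketch of Steps 3-5, assuming the new quorum diskgroup is named OCRVOTE_EXT, the old one OCRVOTE, and the new data diskgroup DATA_EXT (all names and the old controlfile path are hypothetical; run crsctl/ocrconfig as root, and restore the controlfile into the new diskgroup with RMAN):

# Step 3: move the voting disks
crsctl replace votedisk +OCRVOTE_EXT
# Step 4: move the OCR
ocrconfig -add +OCRVOTE_EXT
ocrconfig -delete +OCRVOTE
# Step 5: point the database at the new diskgroup and restore the controlfile there
sqlplus / as sysdba <<'EOF'
alter system set control_files='+DATA_EXT' scope=spfile sid='*';
shutdown immediate
startup nomount
EOF
rman target / <<'EOF'
restore controlfile from '+DATA/orcl/controlfile/current.261.1';
alter database mount;
EOF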
Moving ASM to Extended RAC
• Step 6 : When the DB is in mount state, copy the datafiles to
the new DG by using the command: "backup as copy
database format '+<NEW DATA DG>'".
• Step 7 : After successful completion of the copy,
perform the following to update the control file with the
copied datafiles: "switch database to copy" and then
"alter database open" (a sketch follows this slide).
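A minimal sketch of Steps 6-7 (the diskgroup name DATA_EXT is hypothetical; the database must be mounted, not open):

rman target / <<'EOF'
# copy every datafile into the new extended diskgroup
backup as copy database format '+DATA_EXT';
# make the copies the live datafiles and open the database
switch database to copy;
alter database open;
EOF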
Moving ASM to extended RAC
• Step 8 : Create a new temp TBS, or add a new file to
the current one and then drop the old file from that
temp TBS (alter database tempfile '<path to file>' drop;);
a sketch follows this slide.
• DONE.
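A minimal sketch of Step 8 (the tablespace name TEMP, the new diskgroup name, and the old tempfile path are hypothetical):

sqlplus / as sysdba <<'EOF'
-- add a tempfile in the new diskgroup, then drop the old one
alter tablespace TEMP add tempfile '+DATA_EXT' size 4g;
alter database tempfile '+DATA/orcl/tempfile/temp.264.1' drop;
EOF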