Oracle 11g RAC Implementation Guide



Configuration Details

Configuration Summary: Oracle Real Application Clusters (RAC) on Red Hat Enterprise Linux 5 Update 2 using ASM
Server Platform: HP ProLiant BL680c G5
Storage Model: HP StorageWorks MSA2012fc Storage Array
Oracle Software: Oracle Database 11g Release 1 (11.1.0.6) for Linux x86
Linux Distribution: Red Hat Enterprise Linux 5 Update 2 x86

Linux Distribution Details

OS: RHEL 5 Update 2 x86
Kernel: kernel-2.6.18-

Additional Packages Needed From Distribution:

binutils-2.17.50.0.6-5.el5
compat-libstdc++-33-3.2.3-61
compat-libstdc++-33-3.2.3-61
elfutils-libelf-devel-0.125-3.el5
gdb-6.5-25.el5
glibc-2.5-18
glibc-2.5-18
glibc-common-2.5-18
glibc-devel-2.5-18
glibc-devel-2.5-18
libXp-1.0.0-8.1.el5
libXtst-1.0.1-3.1
libaio-0.3.106-3.2
libaio-devel-0.3.106-3.2
libstdc++-4.1.2-14.el5
libstdc++-4.1.2-14.el5
libstdc++-devel-4.1.2-14.el5
make-3.81-1.1
sysstat-7.0.0-3.el5
unixODBC-2.2.11-7.1
unixODBC-devel-2.2.11-7.1
util-linux-2.13-0.44.el5
xorg-x11-deprecated-libs-6.8.2-1.EL.33.0.1


Other Packages

HPDMmultipath-4.3.0 RHEL5.2 rpm (This can be downloaded from http://h20000.www2.hp.com)

device-mapper-multipath-0.4.7-12.el5 (This package is obtained from OS distribution)

Filesystem / Mount Options / Details

ASM: no mount options; the Automatic Storage Management Library Driver (ASMLib) is used for the datafiles.
OCR and voting disks: located directly on block devices, for this configuration with datafiles on ASM storage.


Background

The major components of an Oracle RAC 11g Release 1 configuration are described below. Nodes in the cluster are typically separate servers (hosts).

Hardware

At the hardware level, each node in a RAC cluster shares three things:

1. Access to shared disk storage
2. Connection to a private network
3. Access to a public network

Shared Disk Storage

Oracle RAC relies on a shared-disk architecture. The database files, online redo logs, and control files for the database must be accessible to each node in the cluster. The shared disks also store the Oracle Cluster Registry (OCR) and voting disk (discussed later). There are a variety of ways to configure shared storage, including direct-attached disks (typically SCSI over copper or fiber), storage area networks (SAN), and network-attached storage (NAS).

Private Network

Each cluster node is connected to all other nodes via a private high-speed network, also known as the cluster interconnect or high-speed interconnect (HSI). Oracle Cache Fusion allows data stored in the cache of one Oracle instance to be accessed by any other instance by transferring it across the private network. It also preserves data integrity and cache coherency by transmitting locking and other synchronization information across cluster nodes.

Public Network

To maintain high availability, each cluster node is assigned a virtual IP address (VIP). In the event of a node failure, the failed node's VIP can be reassigned to a surviving node so that applications can continue accessing the database through the same IP address.
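Once the Clusterware stack is up (after the Clusterware installation later in this guide), VIP placement can be checked from any node. A minimal check, assuming the node names rac1 and rac2 used throughout this document:

$ srvctl status nodeapps -n rac1     # reports the VIP, GSD, ONS, and listener status for rac1
$ srvctl status nodeapps -n rac2
# /sbin/ifconfig -a                  # the VIP appears as a secondary interface (for example eth0:1) on whichever node currently hosts it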


Part I: Installing Linux

Install and Configure Linux. We need three IP addresses for each server: one for the private network, one for the public network, and one for the virtual IP address. Use the operating system's network configuration tools to assign the private and public network addresses.
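As an illustration, the public and private addresses can be assigned through the interface configuration files. A minimal sketch for rac1, assuming eth0 is the public interface and eth1 the private interconnect (device names and netmasks are assumptions; adjust to your environment, and note that the VIP is not configured here because it is managed by Oracle Clusterware):

# vi /etc/sysconfig/network-scripts/ifcfg-eth0    # public interface on rac1
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.11.5
NETMASK=255.255.255.0
ONBOOT=yes

# vi /etc/sysconfig/network-scripts/ifcfg-eth1    # private interconnect on rac1
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.100.5
NETMASK=255.255.255.0
ONBOOT=yes

# service network restart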

Server Model: 2 x HP ProLiant BL680c G5 server series
Processors: 2 x Intel Xeon CPU, 3.0 GHz
Memory: 24 GB RAM
On-Board Storage: 140 GB Ultra320 SCSI, 15k rpm
Network/Interconnect: dual-port Broadcom NetXtreme II BCM5708 Gigabit Ethernet
HBA: 2 x QLogic QMH2462 4 Gbps mezzanine for HP blade servers
Multipath: Device Mapper Multipath version 0.4.7
Storage Model: MSA2012fc Storage Array
Storage Details: 12 x 300 GB 15k rpm FC drives (RAID 10)

Configure Name Resolution

DNS is configured to resolve the following names; in addition, a local /etc/hosts file is configured on all nodes.

# vi /etc/hosts

127.0.0.1       localhost.localdomain localhost
192.168.11.5    rac1.asl.net        rac1
192.168.11.6    rac2.asl.net        rac2
192.168.100.5   rac1-priv.asl.net   rac1-priv
192.168.100.6   rac2-priv.asl.net   rac2-priv
192.168.10.5    rac1-vip.asl.net    rac1-vip
192.168.10.6    rac2-vip.asl.net    rac2-vip

Save and exit.

# service network restart

Configure NTP on all nodes so that their clocks are synchronized to within a second.

Configure NTP on the rac1 node (acting as the time server for the cluster):

# vi /etc/ntp.conf
# On the server use the local clock; on clients comment this out and use "server <serverIP>" instead
server 127.127.1.0   # local clock

Save and exit.

# service ntpd restart
# chkconfig ntpd on

Configure NTP on the rac2 node:

# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 192.168.11.5   ### add this line on the second server ###

Save and exit.

# service ntpd restart
# chkconfig ntpd on
# ntpdate -u 192.168.11.5
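To confirm that rac2 is actually synchronizing from rac1, a quick check (ntpq ships with the distribution's ntp package; output format varies slightly by version):

# ntpq -p     # on rac2, the 192.168.11.5 peer should show a non-zero "reach" value once synchronized
# date        # run on both nodes and compare; the clocks should agree to within a second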


Shut down unnecessary services on all nodes:

# chkconfig --level 35 sendmail off
# chkconfig --level 35 pcmcia off
# chkconfig --level 35 cups off
# chkconfig --level 35 hpoj off

Part II: Configure Linux for Oracle

Create the Oracle Groups and User Account

Create the Linux groups and user account that will be used to install and maintain the Oracle 11g Release 1 software. The user account will be called 'oracle' and the groups will be 'oinstall' and 'dba'. Execute the following commands as root, on one cluster node first. The user ID and group IDs must be the same on all cluster nodes, so use the output of the 'id oracle' command to create the same groups and user account on the remaining cluster nodes.

# groupadd -g 501 oinstall
# groupadd -g 502 dba
# useradd -m -u 501 -g oinstall -G dba oracle
# id oracle
uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba)

Set the password on the oracle account:

# passwd oracle
Changing password for user oracle.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Configure Kernel Parameters

Log in as root and configure the Linux kernel parameters on each node.

The following settings are applied, per configuration file (comments explain the values):

File: /etc/sysctl.conf
kernel.msgmni=2878
kernel.shmmax=4185235456            # set to a value half the size of physical memory
net.ipv4.ip_local_port_range=1024 65000
net.core.wmem_max=262144
net.core.wmem_default=262144
net.core.rmem_max=4194304           # rmem_max can be tuned based on workload to balance performance vs. lowmem usage
net.core.rmem_default=262144        # rmem_default can be tuned based on workload to balance performance vs. lowmem usage
kernel.sysrq=1
kernel.shmall=3279547
kernel.shmmni=4096
kernel.sem=250 32000 100 142
kernel.msgmnb=65536
fs.file-max=327679
fs.aio-max-nr=3145728
kernel.msgmax=8192

File: /etc/sysconfig/oracleasm
ORACLEASM_SCANORDER=dm

File: /etc/security/limits.conf
oracle soft nofile 131072           # depending on the size of the database, these limits may need to be larger
oracle hard nproc 131072
oracle hard core unlimited
oracle soft nproc 131072


oracle hard nofile 131072
oracle hard memlock 50000000
oracle soft memlock 50000000        # set memlock greater than or equal to the SGA size so Oracle can use hugepages if configured
oracle soft core unlimited

File: /etc/multipath.conf
device { vendor "HP" product "MSA2[02]*" }
defaults { polling_interval 10 no_path_retry 12 }

File: /etc/modprobe.conf
options hangcheck_timer hangcheck_tick=1 hangcheck_margin=10 hangcheck_reboot=1
# These hangcheck-timer settings are for the default CSS misscount value and need to be tuned based on the CSS misscount value and the Oracle RAC version.

# vi /etc/sysctl.conf
kernel.shmall=3279547
kernel.shmmax=4185235456
kernel.shmmni=4096
kernel.sem=250 32000 100 142
fs.file-max=327679
net.ipv4.ip_local_port_range=1024 65000
net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=1048536
net.core.wmem_max=1048536

Save and exit.

# sysctl -p

Setting Shell Limits for the oracle User

Oracle recommends setting limits on the number of processes and the number of open files each Linux account may use. To make these changes, edit the following files as root:

# vi /etc/security/limits.conf
oracle soft nproc 131072
oracle hard nproc 131072
oracle soft nofile 131072
oracle hard nofile 131072
oracle hard core unlimited
oracle hard memlock 50000000
oracle soft memlock 50000000

Save and exit.

# vi /etc/pam.d/login
session required /lib/security/pam_limits.so
session required pam_limits.so

Save and exit.

Configure the Hangcheck Timer

All RHEL releases:

# vi /etc/rc.d/rc.local
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

Save and exit.
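To confirm that the module is loaded with the intended parameters (after a reboot, or after running the modprobe line by hand), a quick sanity check:

# lsmod | grep hangcheck       # the hangcheck_timer module should be listed
# dmesg | grep -i hangcheck    # the kernel logs the tick and margin values the module was loaded with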


Setup user equivalence on all RAC nodes: the oracle user must be able to log in via ssh to the node the session is started from, as well as to every other node in the cluster, without being prompted for a password.

Two-node setup: log on as the oracle user on all RAC nodes. Do not supply a passphrase at any of the prompts below.

On Node rac1:

# su - oracle
$ chmod 700 ~/.ssh
$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_dsa):

Created directory '/home/oracle/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_dsa.

Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.

The key fingerprint is:

6f:d5:81:eb:25:26:51:cd:53:45:2b:af:01:51:e3:b1 [email protected]

$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_rsa.

Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.

The key fingerprint is:

d6:86:09:f5:2e:ec:dc:35:c8:d8:7c:22:f9:80:3d:c3 [email protected]

$ cd ~/.ssh/

$ cat id_rsa.pub >> authorized_keys

$ cat id_dsa.pub >> authorized_keys

$ cat id_rsa.pub >> /tmp/authorized_keys.tmp

$ cat id_dsa.pub >> /tmp/authorized_keys.tmp

On Node rac2:

# su - oracle
$ chmod 700 ~/.ssh
$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_dsa):

Created directory '/home/oracle/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_dsa.


Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.

The key fingerprint is:

6f:d5:81:eb:25:26:51:cd:53:45:2b:af:01:51:e3:b1 [email protected]

$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_rsa.

Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.

The key fingerprint is:

d6:86:09:f5:2e:ec:dc:35:c8:d8:7c:22:f9:80:3d:c3 [email protected]

$ cd ~/.ssh/

$ cat id_rsa.pub >> authorized_keys

$ cat id_dsa.pub >> authorized_keys

$ cat id_rsa.pub >> /tmp/authorized_keys.tmp

$ cat id_dsa.pub >> /tmp/authorized_keys.tmp

On Node rac1:

$ scp rac2:/tmp/authorized_keys.tmp .

authorized_keys.tmp 100% 852 0.8KB/s 00:00

Then:

$ cat authorized_keys.tmp >> authorized_keys

$ chmod 644 authorized_keys

On Node rac2:

$ scp rac1:/tmp/authorized_keys.tmp .

authorized_keys.tmp 100% 852 0.8KB/s 00:00

Then:

$ cat authorized_keys.tmp >> authorized_keys

$ chmod 644 authorized_keys

Now collect the host-key fingerprints of all interfaces and nodes of this RAC setup by connecting to each with ssh and accepting the key when prompted. Exit after each successful logon to avoid confusion.

On Node rac1:

$ ssh rac1.asl.net
$ ssh rac1-priv.asl.net
$ ssh rac2.asl.net
$ ssh rac2-priv.asl.net
$ ssh rac1-vip.asl.net
$ ssh rac2-vip.asl.net
$ ssh rac1
$ ssh rac2
$ ssh rac1-priv
$ ssh rac2-priv
$ ssh rac1-vip
$ ssh rac2-vip


On Node rac2:

$ ssh rac1.asl.net
$ ssh rac1-priv.asl.net
$ ssh rac2.asl.net
$ ssh rac2-priv.asl.net
$ ssh rac1-vip.asl.net
$ ssh rac2-vip.asl.net
$ ssh rac1
$ ssh rac2
$ ssh rac1-priv
$ ssh rac2-priv
$ ssh rac1-vip
$ ssh rac2-vip

Multipath

Multipathing is used to detect multiple paths to devices for failover, performance, and redundancy.

Installing HPDM Multipath Tools on both nodes:

1. Log in as root to the host system.

2. Copy the installation tar package to a temporary directory (for example, /tmp/HPDMmultipath).

3. To unbundle the package, enter the following commands:

# cd /tmp/HPDMmultipath
# tar -xvzf HPDMmultipath-4.3.0.tar.gz
# cd HPDMmultipath-4.3.0

4. Verify that the directory contains the INSTALL.sh shell script, the README.txt file, and the SRPMS and docs directories.

5. To install or upgrade the HPDM Multipath tools software on the server, enter the following command:

# ./INSTALL.sh

6. Follow the on-screen instructions to complete the installation.

Configuring QLogic HBA Parameters

To configure the QLogic HBA parameters, complete the following steps:

1. For the QLogic 2xxx family of HBAs, edit the /etc/modprobe.conf file on the RHEL 5 hosts with the following values:

# vi /etc/modprobe.conf
options qla2xxx qlport_down_retry=10 ql2xfailover=0

Save and exit.

2. Rebuild the initrd by executing the following script:

# /opt/hp/src/hp_qla2x00src/make_initrd

3. Reboot the host.

Part III: Prepare the Shared Disks

Both Oracle Clusterware and Oracle RAC require access to disks that are shared by every node in the cluster. The HP storage is configured with RAID 1+0, and eight volumes (2 for the OCR, 3 for the voting disks, and 3 for ASM) are mapped to both nodes, rac1 and rac2. Once the LUNs are presented, the scsi_id command queries each SCSI device and generates an ID when run with the -g and -s options.

Log in as root on both nodes.

On node rac1:

# hp_rescan -a     # rescans the HBAs to find and add new LUNs
# scsi_id -g -s /block/sda

3600a0b80001327510000009b4362163e

On node rac2:

# hp_rescan -a     # rescans the HBAs to find and add new LUNs
# scsi_id -g -s /block/sda

3600a0b80001327510000009b4362163e

Alternatively, you can use the following command:


# multipath -ll     # can be used instead of scsi_id; the output will look similar to the following:

create: 3600a0b80001327d80000006d43621677 [size=12 GB][features="0"][hwhandler="0"]
 \_ round-robin 0
  \_ 2:0:0:0 sdb 8:16
  \_ 3:0:0:0 sdf 8:80
create: 3600a0b80001327510000009a436215ec [size=12 GB][features="0"][hwhandler="0"]
 \_ round-robin 0
  \_ 2:0:0:1 sdc 8:32
  \_ 3:0:0:1 sdg 8:96
create: 3600a0b80001327d800000070436216b3 [size=12 GB][features="0"][hwhandler="0"]
 \_ round-robin 0
  \_ 2:0:0:2 sdd 8:48
  \_ 3:0:0:2 sdh 8:112
create: 3600a0b80001327510000009b4362163e [size=12 GB][features="0"][hwhandler="0"]
 \_ round-robin 0
  \_ 2:0:0:3 sde 8:64
  \_ 3:0:0:3 sdi 8:128

Once the WWIDs are known, bind each one to an alias in the configuration file /etc/multipath.conf.

On Node rac1:

# vi /etc/multipath.conf
multipaths {
    multipath {
        wwid 3600a00b80001327510000009b4362163e    ### copy the scsi_id/multipath output for this LUN here ###
        alias asm1
    }
    multipath {
        wwid 3600a00b80001327510000009b4362153e
        alias asm2
    }
    multipath {
        wwid 3600a00b80001327510000009b4362133e
        alias asm3
    }
    multipath {
        wwid 3600a00b80001327510000009b4362143e
        alias ocr
    }
    multipath {
        wwid 3600a00b80001327510000009b4362163e
        alias ocrmirror
    }
    multipath {
        wwid 3600a00b80001327510000009b4362163e
        alias voting
    }
    multipath {
        wwid 3600a00b80001327510000009b4362163e
        alias votingmirror


    }
    multipath {
        wwid 3600a00b80001327510000009b4362163e
        alias votingmirror2
    }
}

Save and exit.

# vi /etc/rc.local
######## Change the ownership of the OCR, voting disk, and ASM devices ########
chown root:oinstall /dev/mapper/ocr*
chown oracle:oinstall /dev/mapper/voting*
chown oracle:oinstall /dev/mapper/asm*
######## Change the permissions of the OCR, voting disk, and ASM devices ########
chmod 640 /dev/mapper/ocr*
chmod 660 /dev/mapper/voting*
chmod 640 /dev/mapper/asm*

Save and exit.

# scp /etc/multipath.conf rac2:/etc
# scp /etc/rc.local rac2:/etc
# service multipathd restart
# ll /dev/mapper/

The listing shows the alias names bound according to the information in /etc/multipath.conf.

On Node rac2:

# service multipathd restart
# ll /dev/mapper/

The listing shows the alias names bound according to the information in /etc/multipath.conf.

Partition the Disks

On Node rac1:

Run the fdisk command to create a partition on the ASM devices only:

# fdisk /dev/mapper/asm1
# fdisk /dev/mapper/asm2
# fdisk /dev/mapper/asm3
# partprobe

Run "kpartx -a" after fdisk has completed to add all partition mappings on the newly created multipath devices:

# kpartx -a /dev/mapper/asm1
# kpartx -a /dev/mapper/asm2
# kpartx -a /dev/mapper/asm3
# kpartx -l /dev/mapper/asm1
# kpartx -l /dev/mapper/asm2
# kpartx -l /dev/mapper/asm3
# ls /dev/mapper/
# /etc/rc.local     ##### run this script to apply the ownership and permission changes #####

On Node rac2:

# partprobe


Run "kpartx -a" after fdisk has completed to add all partition mappings on the newly created multipath devices:

# kpartx -a /dev/mapper/asm1
# kpartx -a /dev/mapper/asm2
# kpartx -a /dev/mapper/asm3
# kpartx -l /dev/mapper/asm1
# kpartx -l /dev/mapper/asm2
# kpartx -l /dev/mapper/asm3
# ls /dev/mapper/
# /etc/rc.local     ##### run this script to apply the ownership and permission changes #####

Installation of ASMLib on both nodes

ASMLib 2.0 is delivered as a set of three Linux packages:

• oracleasmlib-2.0 - the ASM libraries
• oracleasm-support-2.0 - utilities needed to administer ASMLib
• oracleasm - a kernel module for the ASM library

First, determine which kernel you are using by logging in as root and running the following command:

# uname -rm

Download the oracleasm package matching your kernel version from http://www.oracle.com/technology/tech/linux/asmlib/index.html, then install the three packages:

# rpm -ivh oracleasm-2.6.18-53.1.14.el5-2.0.4-1.el5 oracleasm-support-2.0.4-1.el5 oracleasmlib-2.0.3-1.el5.x86_64.rpm

Configuring ASMLib on both nodes:

# /etc/init.d/oracleasm configure
Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Creating /dev/oracleasm mount point: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
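Before marking disks, it is worth confirming on each node that the driver is loaded and its filesystem is mounted; a quick check using the status action of the same init script:

# /etc/init.d/oracleasm status    # reports whether the oracleasm module is loaded and /dev/oracleasm is mounted
# lsmod | grep oracleasm          # the oracleasm kernel module should be listed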

Note: Mark disks for use by ASMLib by running the following commands as root, on Node rac1 only.
Tip: Enter the DISK_NAME in UPPERCASE letters.

# /etc/init.d/oracleasm createdisk VOL1 /dev/mapper/asm1p1
Marking disk "/dev/mapper/asm1p1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk VOL2 /dev/mapper/asm2p1
Marking disk "/dev/mapper/asm2p1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk VOL3 /dev/mapper/asm3p1
Marking disk "/dev/mapper/asm3p1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3

On node rac2:

Run the following command as root to scan for the configured ASMLib disks:

# /etc/init.d/oracleasm scandisks
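After the scan, the same three volumes should be visible on rac2; the listdisks command used on rac1 can be repeated here and should return the same names:

# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3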


Install the Clusterware Software

Copy and unzip the Clusterware media to /stage1, and check that the prerequisites have been met using the runcluvfy.sh utility shipped with the Clusterware.

On Node rac1:

# cd /stage1
# cp -r /media/clusterware/linux.x64_11gR1_clusterware.zip /stage1
# unzip linux.x64_11gR1_clusterware.zip
# xhost +
# su - oracle
$ cd /home/oracle
$ vi crs.env

export ORACLE_BASE=/node1
export ORACLE_HOME=/node1/crs
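Later steps invoke crs_stat and other Clusterware utilities by name, so it can be convenient to also add the Clusterware bin directory to the PATH in this file. A minimal sketch of a fuller crs.env, assuming the locations above:

export ORACLE_BASE=/node1
export ORACLE_HOME=/node1/crs
export PATH=$ORACLE_HOME/bin:$PATH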

Save and exit.

$ . crs.env
$ cd /stage1/clusterware
$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
$ ./runInstaller


On Node rac1: open a new terminal window and run the script:

# cd /home/oracle/oraInventory/
# ls -la
# ./orainstRoot.sh

On Node rac2: open a new terminal window and run the script:

# cd /home/oracle/oraInventory/
# ls -la
# ./orainstRoot.sh

Then, on Node rac1: open a new terminal window and run the script:

# cd /node1/crs
# ls -la
# ./root.sh

On Node rac2: open a new terminal window and run the script:

# cd /node1/crs
# ls -la
# ./root.sh

On Node rac1: press OK.


Wait for the configuration assistants to complete.

Then select exit.


Then source the environment variables located in the file /home/oracle/crs.env and check the status of the Clusterware:

$ . crs.env
$ crs_stat -t     ###### To check the status of the Clusterware ######
Name           Type        R/RA  F/FT  Target  State   Host
----------------------------------------------------------------------
ora.rac1.gsd   application 0/5   0/0   ONLINE  ONLINE  rac1
ora.rac1.ons   application 0/3   0/0   ONLINE  ONLINE  rac1
ora.rac1.vip   application 0/0   0/0   ONLINE  ONLINE  rac1
ora.rac2.gsd   application 0/5   0/0   ONLINE  ONLINE  rac2
ora.rac2.ons   application 0/3   0/0   ONLINE  ONLINE  rac2
ora.rac2.vip   application 0/0   0/0   ONLINE  ONLINE  rac2
$ exit

Oracle Database Software Installation

Copy and unzip the Oracle 11g database media to /stage1.

On Node rac1:

# cd /stage1
# cp -r /media/linux.x64_11gR1_database.zip /stage1
# unzip linux.x64_11gR1_database.zip
# xhost +
# su - oracle
$ cd /home/oracle
$ vi asm.env

export ORACLE_BASE=/node1
export ORACLE_HOME=/node1/asm

Save and exit.

$ . asm.env
$ cd /stage1/database
$ ./runInstaller


Select the "Enterprise Edition" option and click the "Next" button.

Choose /node1 for Oracle base, leave Name and Path in Software Location at /node1/asm.


Select Cluster Installation and enable the rac1 (already marked) and rac2 nodes.

The installer then verifies your environment. In this configuration there may not be enough swap space; this has not caused any problems, so you can safely mark that check as user-verified. You can also ignore the notice about the kernel rmem_default parameter (it is OK as configured).


At the Privileged Operating System Groups prompt, ensure you have selected the "dba" group everywhere. (In production deployments these groups should be carefully separated for security reasons.)

Select "Install database software only", since the database will be created later.


Double-check everything and then click Install.

The installation process then runs. Since storage optimizations have not been done yet, it can take up to an hour depending on your hardware.

After installation you will be asked to run post-installation scripts on both nodes.


Execute the mentioned script on all the nodes.


Click Exit.

DBCA: Creation of the Database and ASM Instance

Creating an ASM Instance and a Disk Group with DBCA

To create an ASM instance and a disk group with DBCA, perform the following steps.

DBCA starts its GUI interface.

# xhost +
# su - oracle
$ . db.env
$ cd /node1/asm/product/11.1.0/db_1/bin/
$ ./dbca
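The db.env file sourced above is not shown earlier in the guide; a minimal sketch of what it is assumed to contain, based on the database home path used in the cd command:

export ORACLE_BASE=/node1
export ORACLE_HOME=/node1/asm/product/11.1.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH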

Select Configure Automatic Storage Management and click Next.


Click "Finish" to exit from DBCA. Verify that the LISTENER and ASM instances are up and running and are properly registered with CRS.

CRS STACK STATUS AFTER THE INSTALLATION AND CONFIGURATION OF ASM


================================================================
$ crs_stat -t
Name           Type        Target  State   Host
------------------------------------------------------------
ora....SM1.asm application ONLINE  ONLINE  node1-pub
ora....UB.lsnr application ONLINE  ONLINE  node1-pub
ora....pub.gsd application ONLINE  ONLINE  node1-pub
ora....pub.ons application ONLINE  ONLINE  node1-pub
ora....pub.vip application ONLINE  ONLINE  node1-pub
ora....SM2.asm application ONLINE  ONLINE  node2-pub
ora....UB.lsnr application ONLINE  ONLINE  node2-pub
ora....pub.gsd application ONLINE  ONLINE  node2-pub
ora....pub.ons application ONLINE  ONLINE  node2-pub
ora....pub.vip application ONLINE  ONLINE  node2-pub

(Note: this sample output comes from a cluster whose node names are node1-pub and node2-pub; on this setup the Host column will show rac1 and rac2.)

A Database Configuration Assistant warning window informs you of your next steps.

Select Create a Database.


Ensure that both nodes are selected.


Choose Custom Database to have better flexibility in the database creation process.

Name your database. For the purposes of this guide, call it "erac.world". The SID prefix is automatically set to "erac", so the individual instance SIDs will be "erac1" and "erac2".


Decide whether you want to install Oracle Enterprise Manager. If yes, configure it appropriately.

For testing purposes you can use a simple password for all of the important Oracle accounts.


As you want to build an Extended RAC configuration, you must choose ASM as your storage option.

You will use a non-shared PFILE for every ASM instance. You could choose to use a shared SPFILE to have centralized ASM configuration, but then a problem arises: At which storage array should you store this critical file?


At the Create Diskgroup Window click on Change Disk Discovery Path; a new window should pop up.

At Initialization Parameters you can configure how much memory (in total) will be used by the instance. This can be altered later by changing the memory_target parameter. Customize the other database parameters to meet your needs.
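For reference, a sketch of changing the total memory after database creation, assuming a server parameter file shared by both instances (the 4G value is only an example):

$ sqlplus / as sysdba
SQL> ALTER SYSTEM SET memory_target=4G SCOPE=SPFILE SID='*';
SQL> exit

Restart the database for the new value to take effect, for example with "srvctl stop database -d erac" followed by "srvctl start database -d erac".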


There are many new security options in Oracle Database 11g (which are on by default). Accept them all.

Ensure you have selected Enable automatic maintenance tasks.


At Database Storage you can tune parameters related to REDO logs, controlfiles, and so on. Defaults are appropriate for initial testing.


Select only Create Database; if you like, you can also generate database scripts to speed up creating a new database after wiping an old one (e.g. for new experiments).

You will be presented with a summary showing which options are going to be installed. The database creation process can take a while depending on the options selected and the storage used. Finally, you will get a quick summary of the created database.


After startup, the database is available through its two instances (erac1 and erac2), one on each node.
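A minimal post-creation check from either node, assuming the database name erac chosen above and the db.env file described earlier (the tnsnames.ora entry sketched here is an assumption; DBCA/NETCA normally generate an equivalent entry under $ORACLE_HOME/network/admin):

$ . db.env
$ srvctl status database -d erac     # should report both instances running
$ sqlplus system@erac
SQL> SELECT instance_name, host_name FROM gv$instance;

Example load-balanced tnsnames.ora entry (assumed; adjust to the generated file):

ERAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.asl.net)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.asl.net)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = erac.world)
    )
  )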