
Oracle Database 11g Release 1 Real Application Clusters

on HP-UX Installation Cookbook

This document is based on our experiences; it is not official HP/Oracle documentation. We are constantly updating this installation cookbook, so please check for the latest version of this cookbook on our HP/Oracle CTC web page at https://www.hporaclectc.com/cug/assets/11gRAChp.htm (PDF version).

If you have any comments or suggestions, please send us an email with your feedback! If you encounter issues during your installation, please also report the problem to HP and/or Oracle support.

Contents:

1. Aim of this document
2. Oracle RAC Components Overview
3. Supported Configurations with RAC11g on HP-UX
4. General System Installation Requirements
   4.1 Hardware Requirements
   4.2 Network Requirements
   4.3 Required HP-UX Patches
   4.4 Kernel Parameter Settings
5. Create the Oracle User
6. Oracle RAC 11g Cluster Preparation Steps
   6.1 RAC 11g with ASM over SLVM
   6.2 RAC 11g with RAW over SLVM
   6.3 RAC 11g with ASM over RAW (with or without SG/SGeRAC)
7. Preparation for Oracle Software Installation
   7.1 Prepare HP-UX Systems for Oracle software installation
   7.2 Check Cluster Configuration with Cluster Verification Utility
8. Install Oracle Clusterware
9. Installation & Creation of Oracle Database RAC11g
10. Implementation of SG Packages Framework for RAC
11. Tips & Tricks
12. Known Issues & Bug Fixes

Authors: Rebecca Schlecht (HP), Rainer Marekwia (Oracle)
EMEA HP/Oracle Cooperative Technology Center (CTC)
http://www.hporaclectc.com
Date: 2nd May 2008

1. Aim of this document

This document is intended to help you install Oracle Real Application Clusters 11g Release 1 on HP servers running the HP-UX operating system. This paper covers both the Integrity and the PA-RISC platform, on HP-UX 11.31 as well as 11.23. All information here is based on practical experience and should be used in conjunction with the official


Oracle on HP-UX installation guides: http://www.oracle.com/pls/db111/portal.portal_db?selected=11&frame=#hp-ux_installation_guides

This cookbook also includes material from HP Serviceguard white papers written by HP's Availability Clusters Solutions Labs (ACSL), which are available externally at http://docs.hp.com/en/ha.html (click on Serviceguard Extension for RAC) and HP-internally at http://haweb.cup.hp.com/ATC/Web/Whitepapers/default.htm. Within this cookbook, all scenarios are based on a two-node cluster, with node1 referred to as 'ksc' and node2 as 'schalke'.

In this paper, we use the following notation:

ksc# <command>         = command needs to be issued as root from node ksc
schalke$ <command>     = command needs to be issued as oracle from node schalke
ksc/schalke# <command> = command needs to be issued as root from both nodes ksc and schalke
and so on.

2. Oracle RAC Components Overview

Oracle Clusterware

Starting with RAC 10g, Oracle includes its own clusterware and package management solution with the database product. Oracle Clusterware consists of:

- Oracle Cluster Synchronization Services (CSS), which provides cluster management functionality.
- Oracle Cluster Ready Services (CRS), which supports services and workload management and helps maintain the continuous availability of the services. CRS also manages resources such as the virtual IP (VIP) address for the node and the global services daemon.
- Event Management (EVM), which publishes events generated by CRS.

This Oracle Clusterware is available on all Oracle RAC platforms and is based on the HP TruCluster product, which Oracle licensed a couple of years ago. Customers can now deploy Oracle RAC clusters without any additional third-party clusterware products such as SG/SGeRAC. However, customers might want to continue to use SG/SGeRAC for cluster management (e.g. to make the complete cluster highly available,


including third-party applications, interconnect, etc.). In this case, Oracle Clusterware interacts with SG/SGeRAC to coordinate cluster membership information.

Oracle Automatic Storage Management

Oracle Automatic Storage Management (ASM) is a feature introduced in Oracle Database 10g to simplify the storage of Oracle data. ASM virtualizes the database storage into disk groups. The DBA is able to manage a small set of disk groups, and ASM automates the placement of the database files within those disk groups. In summary, ASM provides the following functionality:

- Manages groups of disks, called disk groups.
- Provides three mirroring options for protection against disk failure: none, two-way, and three-way mirroring.
- Spreads data evenly across all available storage resources to optimize performance and utilization.
- Enables the DBA to change the storage configuration without having to take the database offline.
- Automatically rebalances files across the disk group after disks have been added or dropped.

Oracle ASM is only one implementation choice. For the complete picture, see the next chapter.

3. Supported Configurations with RAC11g R1 on HP-UX

Customers have a variety of choices with regard to the installation and set-up of Oracle Real Application Clusters 11g R1 on the HP-UX platform. The figure below illustrates the supported configurations with Oracle RAC11g on HP-UX.

First, customers need to decide on the underlying cluster software. Customers can deploy their RAC cluster with Oracle Clusterware alone. Alternatively, customers might want to continue to use HP Serviceguard and HP Serviceguard Extension for RAC (SGeRAC) for cluster management. In this case, Oracle's CSS interacts with HP SG/SGeRAC to coordinate cluster


membership information. For storage management, customers have the choice between Oracle ASM and raw devices. HP's Cluster File System for RAC has not yet been certified for RAC11g. Please note that for RAC Standard Edition installations, Oracle mandates that the Oracle data be placed under ASM control. The following table shows the storage options supported for storing Oracle Clusterware files, Oracle database files, and Oracle database recovery files. Oracle database files include data files, control files, redo log files, the server parameter file, and the password file. Oracle Clusterware files include the Oracle Cluster Registry (OCR) and the Voting disk. Oracle recovery files include archive log files.

Storage Option                                          Clusterware  Database  Recovery
Automatic Storage Management                            No           Yes       Yes
Shared raw logical volumes (requires SGeRAC)            Yes          Yes       No
Shared raw disk devices as presented to hosts           Yes          Yes       No
Shared raw partitions (only HP Integrity, no PA-RISC)   Yes          Yes       No
CFS (requires HP SG CFS for RAC)                        Yes*         Yes*      Yes*

*: HP SG CFS for RAC is not yet available for Oracle RAC 11g.

4. General System Installation Requirements

4.1 Hardware Requirements

- At least 1 GB of physical RAM. Use the following command to verify the amount of memory installed on your system:
  # /usr/contrib/bin/machinfo | grep -i Memory
  or
  # /usr/sbin/dmesg | grep "Physical:"

- Swap space equivalent to a multiple of the available RAM, as indicated here (see the sizing sketch at the end of this section):
  - If RAM is up to 2 GB, the recommended swap space is 2 times the size of RAM.
  - If RAM is between 2 GB and 8 GB, the recommended swap space is equal to the size of RAM.
  - If RAM is greater than 8 GB, the recommended swap space is 0.75 times the size of RAM.
  Use the following command to determine the amount of swap space currently configured on your system:
  # /usr/sbin/swapinfo -a

- 400 MB of disk space in the /tmp directory. To determine the amount of disk space available in /tmp, enter:
  # bdf /tmp
  If there is less than 400 MB of disk space available in /tmp, extend the file system or set the TEMP and TMPDIR environment variables when setting up the oracle user's environment. These environment variables can be used to override /tmp as the oracle user:
  $ export TEMP=/directory
  $ export TMPDIR=/directory

- The Oracle Clusterware home requires 650 MB of disk space.
- 4 GB of disk space for the Oracle software. You can determine the amount of free disk space on the system using:
  # bdf -k



- 1.2 GB of disk space for a preconfigured database that uses file system storage (optional).
- Operating system: HP-UX 11.31 Itanium, HP-UX 11.31 PA-RISC, HP-UX 11.23 Itanium, or HP-UX 11.23 PA-RISC. To determine whether you have a 64-bit configuration, enter:
  # /bin/getconf KERNEL_BITS
  To determine which version of HP-UX is installed, enter:
  # uname -a

- Async I/O is required for Oracle on raw devices and is configured on HP-UX 11.31 and 11.23 by default. You can check whether the following file exists:
  # ll /dev/async
  crw-rw-rw- 1 bin bin 101 0x000000 Jun 9 09:38 /dev/async

- If you want to use Oracle on raw devices and async I/O is not configured, then:
  - Create the /dev/async character device:
    # /sbin/mknod /dev/async c 101 0x0
    # chown oracle:dba /dev/async
    # chmod 660 /dev/async
  - Configure the async driver in the kernel using SAM => Kernel Configuration => Kernel; the driver is called 'asyncdsk'. Generate a new kernel and reboot.
  - Set the HP-UX kernel parameter max_async_ports using SAM. max_async_ports limits the maximum number of processes that can concurrently use /dev/async. Set this parameter to the sum of 'processes' from init.ora plus the number of background processes. If max_async_ports is reached, subsequent processes will use synchronous I/O.
  - Set the HP-UX kernel parameter aio_max_ops using SAM. aio_max_ops limits the maximum number of asynchronous I/O operations that can be queued at any time. Set this parameter to the default value (2048) and monitor it over time using glance.

- For PL/SQL native compilation, Pro*C/C++, Oracle Call Interface, Oracle C++ Call Interface, Oracle XML Developer's Kit (XDK):
  - HP-UX 11i v3 (11.31):
    - HP C/aC++ A.03.74
    - HP C/aC++ B.11.31.01
    - GCC compiler gcc 4.1.2
  - HP-UX 11i v2 (11.23):
    - HP/ANSI C compiler (B.11.11.16): C-ANSI-C
    - C++ (aCC) compiler (A.03.70): ACXX
    - GCC compiler gcc 3.4.5
  To determine the version, enter the following command:
  # cc -V

- To allow you to successfully relink Oracle products after installing this software, ensure that the following symbolic links have been created (HP Doc-Id KBRC00003627):
  # cd /usr/lib
  # ln -s /usr/lib/libX11.3 libX11.sl
  # ln -s /usr/lib/libXIE.2 libXIE.sl
  # ln -s /usr/lib/libXext.3 libXext.sl
  # ln -s /usr/lib/libXhp11.3 libXhp11.sl
  # ln -s /usr/lib/libXi.3 libXi.sl
  # ln -s /usr/lib/libXm.4 libXm.sl
  # ln -s /usr/lib/libXp.2 libXp.sl
  # ln -s /usr/lib/libXt.3 libXt.sl
  # ln -s /usr/lib/libXtst.2 libXtst.sl


- Ensure that each member node of the cluster is set (as closely as possible) to the same date and time. Oracle strongly recommends using the Network Time Protocol feature of most operating systems for this purpose, with all nodes using the same reference Network Time Protocol server.
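As announced in the swap space item above, the sizing rules can be scripted. This is a minimal sketch, not part of the original cookbook; MEM_MB is an assumption that you must set by hand from the machinfo or dmesg output:

  MEM_MB=4096                      # assumption: physical memory in MB, set manually
  if [ "$MEM_MB" -le 2048 ]; then
      SWAP_MB=$((MEM_MB * 2))      # RAM up to 2 GB: swap = 2 x RAM
  elif [ "$MEM_MB" -le 8192 ]; then
      SWAP_MB=$MEM_MB              # RAM between 2 GB and 8 GB: swap = RAM
  else
      SWAP_MB=$((MEM_MB * 3 / 4))  # RAM > 8 GB: swap = 0.75 x RAM
  fi
  echo "Recommended swap: ${SWAP_MB} MB"
  /usr/sbin/swapinfo -mt           # compare against the currently configured swap (MB, with totals)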

4.2 Network Requirements

You need the following IP addresses per node to build a RAC11g cluster:

- Public interface that will be used for client communication.
- Virtual IP address (VIP) that will be bound by Oracle Clusterware to the public interface. (Why have this VIP? Clients will use the VIP addresses/names to access the RAC database. If a node or interconnect fails, the affected VIP is relocated to a surviving instance, enabling fast notification of the failure to the clients connecting through that VIP; this prevents TCP/IP timeouts.)

- Private interface that will be used for inter-cluster traffic. There are four major categories of inter-cluster traffic:
  - SG-HB = Serviceguard heartbeat and communications traffic. This is supported over single or multiple subnet networks.
  - CSS-HB = Oracle CSS heartbeat traffic and communications traffic for Oracle Clusterware. CSS-HB uses a single logical connection over a single subnet network.
  - RAC-IC = RAC instance peer-to-peer traffic and communications for Global Cache Service (GCS) and Global Enqueue Service (GES), formerly Cache Fusion (CF) and Distributed Lock Manager (DLM).
  - GAB/LLT (only when using CFS/CVM) = Symantec cluster heartbeat and communications traffic. GAB/LLT communicates over a link-level protocol (DLPI) and is supported over Serviceguard heartbeat subnet networks, including primary and standby links. GAB/LLT is not supported over APA or virtual LANs (VLANs).

When configuring these networks, please consider:

- The public and private interface names associated with the network adapters for each network should be the same on all nodes, e.g. lan0 for the private interconnect and lan1 for the public interface. If this is not the case, you can use the ioinit command to map the LAN interfaces to new device instances:

  - Write down the hardware paths that you want to use:
    # lanscan
    Hardware Path  Station Address  Crd In#  Hdw State  Net-Interface NamePPA  NM ID  MAC Type  HP-DLPI Support  DLPI Mjr#
    1/0/8/1/0/6/0  0x000F203C346C   1        UP         lan1 snap1             1      ETHER     Yes              119
    1/0/10/1/0     0x00306EF48297   2        UP         lan2 snap2             2      ETHER     Yes              119

  - Create a new ASCII file with the syntax: Hardware_Path Device_Group New_Device_Instance_Number
    Example:
    # vi newio
    1/0/8/1/0/6/0 lan 8
    1/0/10/1/0 lan 9
    Please note that you have to choose a device instance number that is currently not in use.
  - Activate this configuration with the following command (the -r option will issue a reboot):
    # ioinit -f /root/newio -r
  - When the system is up again, check the new configuration:
    # lanscan


    Hardware Path  Station Address  Crd In#  Hdw State  Net-Interface NamePPA  NM ID  MAC Type  HP-DLPI Support  DLPI Mjr#
    1/0/8/1/0/6/0  0x000F203C346C   1        UP         lan8 snap8             1      ETHER     Yes              119
    1/0/10/1/0     0x00306EF48297   2        UP         lan9 snap9             2      ETHER     Yes              119

- For the public network:
  - Each network adapter must support TCP/IP.
- For the private network:
  - Oracle recommends using a subnet reserved for private networks, such as 10.0.0.0 or 192.168.0.0.
  - The private IP address and private network name must be registered in DNS or configured in the /etc/hosts file on each node.
  - The following interconnect technologies are currently supported (see also the RAC Technology Matrix for UNIX at http://www.oracle.com/technology/products/database/clustering/certify/tech_generic_unix_new.html):
    - UDP over 1Gbit Ethernet
    - IPoIB = IP protocol over InfiniBand hardware
    For our cluster, we used IP over InfiniBand (Voltaire).
  - Crossover cables are not supported for the cluster interconnect; a switch is mandatory for production implementations, even for a 2-node architecture.
  - Please note that the Oracle Clusterware heartbeat timeout default ("misscount") is 30 seconds for clusters without Serviceguard and 600 for clusters with Serviceguard. This ensures that Serviceguard will be the first to recognize any failures and to initiate cluster reformation activities. (See Oracle Metalink Note 294430.1 "CSS Timeout Computation in RAC 10g (10g Release 1 and 10g Release 2)".)

  - In order to make the private interconnect highly available, you can use either HP Serviceguard or HP Auto Port Aggregation (APA) LAN Monitor. Oracle does not provide any mechanism to make this private interconnect highly available.
  - Please check the HP white paper "Sample Configuration with SGeRAC and Oracle RAC 11gR1" at http://docs.hp.com/en/12732/SGeRAC_11g_Sample_Config.pdf for more details regarding network layout considerations.
- For the virtual IP (VIP) address:
  - It must be on the same subnet as the public interface.
  - The VIP address and VIP host name must be currently unused (the VIP can be registered in DNS, but should not be accessible by a ping command).
  - In order to make the VIP highly available, please see Oracle Metalink Note 296874.1 "Configuring the HP-UX Operating System for the Oracle 10g VIP".
- Ping all IP addresses. The public and private IP addresses should respond to ping commands; the VIP addresses should not respond (see the sketch below).
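The reachability check from the last item can be run in one pass; a minimal sketch, with hypothetical host names that you would replace with your own public, private, and VIP names (HP-UX ping takes packet size and count as positional arguments):

  for h in ksc schalke ksc-priv schalke-priv
  do
      ping $h 64 2 || echo "$h did not respond - problem for public/private addresses"
  done
  for h in ksc-vip schalke-vip
  do
      ping $h 64 2 && echo "warning: VIP $h already responds, but it must be unused"
  done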

Useful network commands:

# lanscan          # Determines the number of LAN interfaces on each node
# netstat -in      # Displays information for all network interfaces, such as IP address, state, etc.
# ifconfig lanX    # Displays the current configuration for a specific interface
(Config file: /etc/rc.config.d/netconf)

4.3 Required HP-UX Patches

HP-UX Operating System Itanium:
» HP-UX 11i Version 3 (11.31)
» HP-UX 11i Version 2 (11.23), Sept 2004 or newer base, with March 2007 patch bundle for HP-UX (11iV2-B.11.23.0703)


HP-UX Operating System PA-RISC:
» HP-UX 11i V3 (11.31) PA-RISC
» HP-UX 11i V2 (11.23) PA-RISC, Sept 2004 base or later, with March 2007 patch bundle for HP-UX (11iV2-B.11.23.0703)

HP-UX 11.31:

General Patches:
» PHKL_37296 vfs module patch
» PHKL_37452 vm cumulative patch [replaces PHKL_35900, PHKL_35936]
» PHKL_37453 esdisk cumulative patch [replaces PHKL_36249]
» PHKL_37454 esctl cumulative patch [replaces PHKL_36248]
» PHCO_37476 libc cumulative patch
» PHCO_37807 cumulative patch for bcheckrc
» PHSS_37948 linker + fdp cumulative patch
» PHSS_37954 Integrity Unwind Library
» PHNE_35894 networking commands cumulative patch

C and C++ patches for Pro*C/C++, Oracle Call Interface, Oracle C++ Call Interface, Oracle XML Developer's Kit (XDK):
» PHSS_35976 HP C/aC++ Compiler (A.06.14) Itanium

Serviceguard 11.18 patches (optional, only if you want to use Serviceguard):
» PHSS_37602 Serviceguard A.11.18.00

HP-UX 11.23

General Patches:
» PHKL_33025 file system tunables cumulative patch
» PHKL_34941 improves Oracle Clusterware restart and diagnosis
» PHCO_32426 reboot(1M) cumulative patch
» PHCO_36744 LVM patch [replaces PHCO_35524]
» PHCO_37069 libsec cumulative patch
» PHCO_37228 libc cumulative patch [replaces PHCO_36673]
» PHCO_38120 kernel configuration commands patch
» PHKL_34213 vPars CPU migration, cumulative shutdown patch
» PHKL_34989 getrusage(2) performance
» PHKL_36319 mlockall(2), shmget(2) cumulative patch [replaces PHKL_35478]
» PHKL_36853 pstat patch
» PHKL_37803 mpctl(2) options, manpage, socket count [replaces PHKL_35767]
» PHKL_37121 sleep kwakeup performance cumulative patch [replaces PHKL_35029]
» PHKL_34840 slow system calls due to cache line sharing
» PHSS_37947 linker + fdp cumulative patch [replaces PHSS_35979]
» PHNE_37395 cumulative ARPA Transport patch

Recommended patches with Serviceguard 11.18 (optional, only if you want to use Serviceguard):
» PHSS_37601 11.23 Serviceguard A.11.18.00
» PHKL_35420 overtemp shutdown / Serviceguard failover

Required if using VxVM, CVM or CFS 5.0:
» PHCO_35125, PHCO_35126, PHCO_35213, PHCO_35214, PHCO_35217, PHCO_35301, PHCO_35354, PHCO_35357, PHCO_35375, PHCO_36590, PHCO_36593, PHCO_37077, PHCO_37085, PHCO_37086, PHKL_37087

Required if using VxVM, CVM or CFS 4.1:
» PHCO_35892, PHCO_36611, PHCO_37390, PHCO_37391, PHCO_37841, PHKL_37392, PHKL_37840, PHCO_37228, PHKL_36516 [8 node support], PHNE_33723, PHNE_36531, PHNE_36532

Required for all clusters with CFS:
» PHKL_37653, PHNE_36236 [required on rx4640 and rp4440 nodes]

C and C++ patches for Pro*C/C++, Oracle Call Interface, Oracle C++ Call Interface, Oracle XML Developer's Kit (XDK):
» PHSS_33279 u2comp/be/plugin library patch Itanium
» PHSS_35974 HP C Compiler (A.06.14) Itanium
» PHSS_35975 aC++ Compiler (A.06.14) Itanium
» PHSS_37500 aC++ Runtime [replaces PHSS_35978]
» PHSS_36089 ANSI C compiler B.11.11.16 cumulative patch [replaces PHSS_35101] PA-RISC
» PHSS_36090 HP aC++ Compiler (A.03.77) [replaces PHSS_35102] PA-RISC
» PHSS_36091 +O4/PBO Compiler B.11.11.16 cumulative patch [replaces PHSS_35103] PA-RISC
» PHSS_35176 HP C Preprocessor B.11.11.16 patch PA-RISC

To ensure that the system meets these requirements, follow these steps:


- HP provides patch bundles at http://www.software.hp.com/SUPPORT_PLUS
- Individual patches can be downloaded from http://itresourcecenter.hp.com/
- To determine which operating system patches are installed, enter the following command:
  # /usr/sbin/swlist -l patch
- To determine if a specific operating system patch has been installed, enter the following command:
  # /usr/sbin/swlist -l patch <patch_number>
- To determine which operating system bundles are installed, enter the following command:
  # /usr/sbin/swlist -l bundle
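To audit a whole list of required patches in one go, the per-patch check above can be wrapped in a loop; a minimal sketch (the four patch IDs are example entries from the 11.31 list above):

  for p in PHKL_37296 PHKL_37452 PHCO_37476 PHSS_37948
  do
      /usr/sbin/swlist -l patch $p > /dev/null 2>&1 || echo "patch $p is missing"
  done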

4.4 Kernel Parameter Settings

Verify that the kernel parameters shown in the following table are set either to the formula shown, or to values greater than or equal to the recommended value shown. If the current value for any parameter is higher than the value listed in this table, do not change the value of that parameter.

Parameter                      Recommended Formula or Value
nproc                          4096
ksi_alloc_max                  (nproc*8)
executable_stack               0
max_thread_proc                1024
maxdsiz                        1073741824 (1 GB)
maxdsiz_64bit                  2147483648 (2 GB)
maxssiz                        134217728 (128 MB)
maxssiz_64bit                  1073741824 (1 GB)
maxuprc                        ((nproc*9)/10)
msgmap (not valid with 11.31)  (msgtql+2)
msgmni                         nproc
msgseg                         (nproc*4); at least 32767
msgtql                         nproc
ncsize                         (ninode+1024)
nfile                          (15*nproc+2048); for Oracle installations with a high number of data files this might not be enough; then use (number of Oracle processes)*(number of Oracle data files) + 2048
nflocks                        nproc
ninode                         (8*nproc+2048)
nkthread                       (((nproc*7)/4)+16)
semmni                         nproc
semmns                         (semmni*2)
semmnu                         (nproc-4)
semvmx                         32767
shmmax                         The size of physical memory or 1073741824 (0x40000000), whichever is greater. Note: to avoid performance degradation, the value should be greater than or equal to the size of the SGA.
shmmni                         512
shmseg                         120
vps_ceiling                    64 (up to 16384 = 16 MB for large SGA)


Please also check our HP-UX kernel configuration for Oracle databases for more details and the latest recommendations.

You can modify the kernel settings either by using the HP-UX System Management Homepage, the kcweb application (/usr/sbin/kcweb -F), or the kctune command-line utility (kmtune on PA-RISC). For the System Management Homepage, just visit http://<nodename>:2301. For kctune, you can use the commands below:
# kctune > /tmp/kctune.log       (lists all current kernel settings)
# kctune tunable>=value          (the tunable's value will be set to value, unless it is already greater)
# kctune -D > /tmp/kctune.log    (restricts output to only those parameters which have changes being held until next boot)
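As a worked example, a few of the table values can be applied in one go; note that the tunable>=value expressions should be quoted so the shell does not treat '>' as output redirection (the values below simply restate the table):

  ksc/schalke# kctune 'nproc>=4096' 'max_thread_proc>=1024' 'maxssiz>=134217728'
  ksc/schalke# kctune 'maxdsiz_64bit>=2147483648' 'semvmx>=32767'
  ksc/schalke# kctune -v nproc    (shows a single tunable in detail, including pending changes)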

5. Create the Oracle User

- Log in as the root user.
- Create the database groups on each node. The group IDs must be unique. The IDs used here are just examples; you can use any group ID not used on any of the cluster nodes.
  - The OSDBA group, typically dba:
    ksc/schalke# /usr/sbin/groupadd -g 201 dba
  - The optional ORAINVENTORY group, typically oinstall; this group owns the Oracle inventory, which is a catalog of all Oracle software installed on the system:
    ksc/schalke# /usr/sbin/groupadd -g 200 oinstall
- Create the Oracle software user on each node. The user ID must be unique. The user ID used below is just an example; you can use any ID not used on any of the cluster nodes.
  ksc/schalke# /usr/sbin/useradd -u 200 -g oinstall -G dba oracle

- Check the user:
  ksc# id oracle
  uid=203(oracle) gid=103(oinstall) groups=101(dba),104(oper)

- Create a HOME directory for the Oracle user:
  ksc/schalke# mkdir /home/oracle
  ksc/schalke# chown oracle:oinstall /home/oracle
- Change the password on each node:
  ksc/schalke# passwd oracle

- During the installation of Oracle RAC, the Oracle Universal Installer (OUI) needs to copy files to, and execute programs on, the other nodes in the cluster. To allow OUI to do that, you must configure SSH or RCP to allow the execution of programs on other nodes in the cluster without password prompts.

SSH set-up:
ksc/schalke$ mkdir ~/.ssh
ksc/schalke$ chmod 755 ~/.ssh
ksc/schalke$ /usr/bin/ssh-keygen -t rsa
Here, we leave the passphrase empty.
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.

Next, the contents of the id_rsa.pub files of both nodes ksc and schalke need to be put into a file called /home/oracle/.ssh/authorized_keys on both nodes:
ksc$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ksc$ ssh oracle@schalke cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ksc$ chmod 644 ~/.ssh/authorized_keys
ksc$ scp ~/.ssh/authorized_keys schalke:~/.ssh/authorized_keys

Next, test connectivity in each direction from all servers.


ksc$ ssh ksc ls
ksc$ ssh schalke ls
schalke$ ssh ksc ls
schalke$ ssh schalke ls
This ensures that messages like the one below do not occur when the OUI attempts to copy files. This message only appears the first time an operation on a remote node is performed, so by testing connectivity you not only ensure that remote operations work properly, you also complete the initial security key exchange:
The authenticity of host 'schalke (15.136.24.82)' can't be established.
RSA key fingerprint is 80:72:ee:bf:0e:85:92:aa:b6:c0:10:9a:33:df:81:31.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'schalke,15.136.24.82' (RSA) to the list of known hosts.

RCP set-up:

Include the following lines in the .rhosts file in oracle's home directory:

# .rhosts file in $HOME of oracle
ksc oracle
ksc.domain oracle
schalke oracle
schalke.domain oracle

Note: rcp only works if a password has been set for the oracle user. You can test whether it is working with:
ksc$ remsh schalke ll
ksc$ remsh ksc ll
schalke$ remsh schalke ll
schalke$ remsh ksc ll

6. Oracle RAC 11g Cluster Preparation Steps

The cluster configuration steps vary depending on the chosen RAC 11g cluster model. Therefore, we have split this section into respective sub-chapters. Please follow the instructions that apply to your chosen deployment model. In this section, we provide examples of command sequences that can be used to prepare the cluster. All examples demonstrate how storage is configured using the new Next Generation Mass Storage Stack introduced with HP-UX 11.31. This new I/O stack provides native multi-pathing and load balancing, as well as agile and persistent addressing. Using the agile address, HP-UX will automatically and transparently use a redundant path to a LUN in the background.

6.1 RAC 11g with ASM over SLVM

To use shared raw logical volumes, HP Serviceguard Extensions for RAC must be installed on all cluster nodes. For Oracle RAC11g on HP-UX with ASM over SLVM, please note:

- The following files can be placed in an ASM disk group: DATAFILE, CONTROLFILE, REDOLOG, ARCHIVELOG and SPFILE. You cannot put any other files, such as Oracle binaries or the two Oracle Clusterware files (OCR and Voting), into an ASM disk group. This is because they must be accessible before Oracle ASM starts.
- With this deployment option, HP Serviceguard Extension for RAC is used, which provides shared logical volumes (the Shared Logical Volume Manager is a feature of SGeRAC). Each ASM disk group member is an SLVM raw logical volume.
- ASM does not provide multi-pathing capabilities. Multi-pathing can be provided by HP-UX 11.31 or by SLVM ("pvlinks").
- SLVM enables the HP-UX devices used for OCR and Voting to have the same names on all nodes.
- ASM over SLVM configurations provide an additional protection layer by ensuring that ASM data cannot be inadvertently overwritten from nodes inside/outside the cluster.
- Online reconfiguration is possible: the VG must be activated on only one node (Single Node Online


Volume Reconfiguration, "SNOR”. See technical white paper at http://docs.hp.com/en/7389/LVM_SNOR_whitepaper.pdf)

l This set-up requires Oracle RAC with Enterprise Edition. (Oracle RAC with Standard Edition requires ASM and no 3rd party Clusterware)

l Note: Single Instance ASM is currently not supported with SG, and SGeRAC!l All of the devices (» here shared logical volumes) in an ASM disk group should be the same size

and have the same performance characteristics.l Choose the redundancy level for the ASM disk group(s). The redundancy level that you choose for

the ASM disk group determines how ASM mirrors files in the disk group and determines the number of disks and amount of disk space that you require, as follows:

¡ External redundancy: An external redundancy disk group requires a minimum of one disk device. Typically you choose this redundancy level if you have an intelligent subsystem such as an HP StorageWorks EVA or HP StorageWorks XP.

¡ Normal redundancy: In a normal redundancy disk group, ASM uses two-way mirroring by default, to increase performance and reliability. A normal redundancy disk group requires a minimum of two disk devices (or two failure groups).

¡ High redundancy: In a high redundancy disk group, ASM uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups).

The idea is that ASM provides the mirroring, striping, slicing and dicing functionality as needed and SLVM supplies the multipathing functionality not provided by ASM.

6.1.1 SLVM Configuration

Before continuing, check the following ASM-over-SLVM configuration guidelines:

- Organize the disks/LUNs to be used by ASM into LVM volume groups (VGs).
- Ensure that there are multiple paths to each disk, by using HP-UX 11.31 native multipathing or PV links.
- For each physical volume (PV), configure a logical volume (LV) using up all available space on that PV.
- The ASM logical volumes should not be striped or mirrored, should not span multiple PVs, and should not share a PV with LVs corresponding to other disk group members, as ASM provides these features and SLVM supplies only the missing functionality (chiefly multipathing).


- On each LV, set an I/O timeout equal to (number of PV links) * (PV timeout).
- Export the VG across the cluster and mark it shared.

For an ASM database configuration on top of SLVM, you need shared logical volumes for the two Oracle Clusterware files (OCR and Voting), plus shared logical volumes for Oracle ASM.

This ASM-over-SLVM configuration enables the HP-UX devices used for disk group members to have the same names on all nodes, easing ASM configuration.

In this cookbook, OCR and Voting will be put onto volume group "vg_oraclu", which resides on disk /dev/rdisk/disk30. ASM will be put onto volume group "vgasm", using disks /dev/rdisk/disk40 and /dev/rdisk/disk41.

Create a Raw Device for:             File Size:  Sample Name:    Comments:
OCR (Oracle Cluster Registry) [1/2]  256 MB      ora_ocrn_256m   Oracle lets you have redundant copies of the OCR. In this case you need two shared logical volumes (n = 1 or 2). For HA reasons, they should not be on the same set of disks.
Oracle CRS voting disk [1/3/..]      256 MB      ora_voten_256m  Oracle lets you have 3+ redundant copies of Voting. In this case you need 3+ shared logical volumes (n = 1, 3, 5, ...). For HA reasons, they should not be on the same set of disks.
ASM volume #1 .. n                   10 GB       ora_asmn_10g

- Disks need to be properly initialized before being added into volume groups. Do the following step for all the disks (LUNs) you want to configure for your RAC volume group(s) from node ksc:
  ksc# pvcreate -f /dev/rdisk/disk30
  ksc# pvcreate -f /dev/rdisk/disk40
  ksc# pvcreate -f /dev/rdisk/disk41

- Create the volume group directories with the character special file called group:
  ksc# mkdir /dev/vg_oraclu
  ksc# mknod /dev/vg_oraclu/group c 64 0x050000
  ksc# mkdir /dev/vgasm
  ksc# mknod /dev/vgasm/group c 64 0x060000
  Note: 0x050000 and 0x060000 are the minor numbers in this example. The minor number for each group file must be unique among all the volume groups on the system.

- Create the VGs (optionally using PV links) and extend the volume group:
  ksc# vgcreate /dev/vg_oraclu /dev/disk/disk30
  ksc# vgcreate /dev/vgasm /dev/disk/disk40
  ksc# vgextend /dev/vgasm /dev/disk/disk41

- Create LVs for OCR and Voting:
  ksc# lvcreate -L 260 -n ora_ocr1_256m /dev/vg_oraclu
  ksc# lvcreate -L 260 -n ora_vote1_256m /dev/vg_oraclu

- Create zero-length LVs for each of the ASM physical volumes:
  ksc# lvcreate -n ora_asm1_10g vgasm
  ksc# lvcreate -n ora_asm2_10g vgasm
- Each LV will be extended to the maximum size possible on its PV (the number of extents available on a PV can be determined via vgdisplay -v <vgname>); first, mark each LV as contiguous:
  ksc# lvchange -C y /dev/vgasm/ora_asm1_10g
  ksc# lvchange -C y /dev/vgasm/ora_asm2_10g



- Extend each LV to the full length allowed by the corresponding PV, in this case 2900 extents:
  ksc# lvextend -l 2900 /dev/vgasm/ora_asm1_10g /dev/rdisk/disk40
  ksc# lvextend -l 2900 /dev/vgasm/ora_asm2_10g /dev/rdisk/disk41

- Configure LV timeouts, based on the PV timeout and the number of physical paths. If a PV timeout has been explicitly set, its value can be displayed via pvdisplay -v. If not, pvdisplay shows a value of 'default', indicating that the timeout is determined by the underlying disk driver. For SCSI, in HP-UX 11i v3, the default timeout is 30 seconds. In the case of 2 paths to each disk, the LV timeout is 2 * 30 = 60 seconds:
  ksc# lvchange -t 60 /dev/vgasm/ora_asm1_10g
  ksc# lvchange -t 60 /dev/vgasm/ora_asm2_10g

- Null out the initial part of each LV to ensure that ASM accepts the LV as an ASM disk group member:
  ksc# dd if=/dev/zero of=/dev/vgasm/ora_asm1_10g bs=8192 count=12800
  ksc# dd if=/dev/zero of=/dev/vgasm/ora_asm2_10g bs=8192 count=12800

- Check to see whether your volume groups are properly created and available:
  ksc# strings /etc/lvmtab
  ksc# vgdisplay -v /dev/vg_oraclu
  ksc# vgdisplay -v /dev/vgasm

- Export the volume groups:
  - De-activate the volume groups:
    ksc# vgchange -a n /dev/vg_oraclu
    ksc# vgchange -a n /dev/vgasm
  - Create the volume group map files:
    ksc# vgexport -v -p -s -m vgoraclu.map /dev/vg_oraclu
    ksc# vgexport -v -p -s -m vgasm.map /dev/vgasm
  - Copy the map files to all the nodes in the cluster:
    ksc# rcp vgoraclu.map schalke:/tmp/scripts
    ksc# rcp vgasm.map schalke:/tmp/scripts

- Import the volume groups on the second node in the cluster:
  - Create the volume group directories with the character special file called group:
    schalke# mkdir /dev/vg_oraclu
    schalke# mknod /dev/vg_oraclu/group c 64 0x050000
    schalke# mkdir /dev/vgasm
    schalke# mknod /dev/vgasm/group c 64 0x060000
    Note: the minor numbers have to be the same as on the other node.
  - Import the volume groups:
    schalke# vgimport -v -s -N -m /tmp/scripts/vgoraclu.map /dev/vg_oraclu
    schalke# vgimport -v -s -N -m /tmp/scripts/vgasm.map /dev/vgasm
    Note: the -N option is for HP-UX 11.31 agile addressing.
  - Check to see whether the devices are imported:
    schalke# strings /etc/lvmtab

- Disable automatic volume group activation on all cluster nodes by setting AUTO_VG_ACTIVATE to 0 in the file /etc/lvmrc. This ensures that the shared volume group vgasm is not automatically activated at system boot time. If you need to have any other volume groups activated, you need to explicitly list them in the customized volume group activation section.

- Change the permissions of the ASM volume group vgasm to 777, and change the permissions of all raw logical volumes to 660 and the owner to oracle:dba:
  ksc/schalke# chmod 777 /dev/vgasm
  ksc/schalke# chmod 660 /dev/vgasm/r*
  ksc/schalke# chown oracle:dba /dev/vgasm/r*


- Change the permissions of vg_oraclu and its shared logical volumes:
  ksc/schalke# chmod 777 /dev/vg_oraclu
  ksc/schalke# chown root:dba /dev/vg_oraclu/rora_ocr1_256m
  ksc/schalke# chmod 640 /dev/vg_oraclu/rora_ocr1_256m
  ksc/schalke# chown oracle:dba /dev/vg_oraclu/rora_vote1_256m
  ksc/schalke# chmod 660 /dev/vg_oraclu/rora_vote1_256m

6.1.2 SG/SGeRAC Configuration

After SLVM set-up, you can now start the Serviceguard cluster configuration.

In general, you can configure your Serviceguard cluster using a lock disk or a quorum server. We describe here the cluster lock disk set-up. Since we have already configured one volume group for the RAC cluster, vg_oraclu, we use vg_oraclu for the lock volume as well.

- Activate the lock disk on the configuration node ONLY. The lock volume can only be activated on the node where the cmapplyconf command is issued, so that the lock disk can be initialized accordingly:
  ksc# vgchange -a y /dev/vg_oraclu

- Create a cluster configuration template:
  ksc# cmquerycl -n ksc -n schalke -v -C /etc/cmcluster/rac.asc

- Edit the cluster configuration file (rac.asc). Make the necessary changes to this file for your cluster. For example, change the cluster name, and adjust the heartbeat interval and node timeout to prevent unexpected failovers due to RAC traffic. Configure all shared volume groups that you are using for RAC, including the volume group that contains the Oracle Clusterware files, using the parameter OPS_VOLUME_GROUP at the bottom of the file. Also, ensure that the right LAN interfaces are configured for the SG heartbeat according to chapter 4.2 (see the example entries after this list).

- Check the cluster configuration:
  ksc# cmcheckconf -v -C rac.asc

- Create the binary configuration file and distribute the cluster configuration to all the nodes in the cluster:
  ksc# cmapplyconf -v -C rac.asc
  Note: the cluster is not started until you run cmrunnode on each node, or cmruncl.

- De-activate the lock disk on the configuration node after cmapplyconf:
  ksc# vgchange -a n /dev/vg_oraclu

- Start the cluster and view it to be sure it is up and running. See the next section for instructions on starting and stopping the cluster.
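For the edit step above, the relevant rac.asc entries might look like the following sketch; the cluster name and timing values are examples only, not recommendations (Serviceguard A.11.18 expects the intervals in microseconds):

  CLUSTER_NAME            rac_cluster
  HEARTBEAT_INTERVAL      1000000
  NODE_TIMEOUT            8000000
  ...
  OPS_VOLUME_GROUP        /dev/vg_oraclu
  OPS_VOLUME_GROUP        /dev/vgasm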

How to start up the cluster:

- Start the cluster from any node in the cluster:
  ksc# cmruncl -v
  Or, on each node:
  ksc# cmrunnode -v

- Make all RAC volume groups and cluster lock volume groups shareable and cluster-aware (not packages) from the cluster configuration node. This has to be done only once:
  ksc# vgchange -S y -c y /dev/vg_oraclu
  ksc# vgchange -S y -c y /dev/vgasm

- Then, on all the nodes, activate the volume groups in shared mode in the cluster. This has to be done each time you start the cluster:
  ksc/schalke# vgchange -a s /dev/vg_oraclu
  ksc/schalke# vgchange -a s /dev/vgasm
- Check the cluster status:


  ksc# cmviewcl -v

How to shut down the cluster (not needed here):

- Shut down the RAC instances (if up and running).
- On all the nodes, deactivate the volume groups in shared mode in the cluster:
  ksc/schalke# vgchange -a n /dev/vg_oraclu
  ksc/schalke# vgchange -a n /dev/vgasm
- Halt the cluster from any node in the cluster:
  ksc# cmhaltcl -v
- Check the cluster status:
  ksc# cmviewcl -v

6.2 RAC 11g with RAW over SLVM

6.2.1 SLVM Configuration

To use shared raw logical volumes, HP Serviceguard Extensions for RAC must be installed on all cluster nodes. For a basic database configuration with SLVM, the following shared logical volumes are required. Note that in this scenario, only one SLVM volume group is used for both Oracle Clusterware and database files. In cluster environments with more than one RAC database, it is recommended to have separate SLVM volume groups for Oracle Clusterware and for each RAC database.

Create a Raw Device for:                File Size:                             Sample Name:
OCR (Oracle Cluster Registry)           256 MB                                 ora_ocr_256m
Oracle Voting disk                      256 MB                                 ora_vote_256m
SYSTEM tablespace                       508 MB                                 <dbname>_system_508m
SYSAUX tablespace                       300 + (number of instances * 250) MB   <dbname>_sysaux_808m
One UNDO tablespace per instance        508 MB                                 <dbname>_undotbsn_508m
EXAMPLE tablespace                      168 MB                                 <dbname>_example_168m
USERS tablespace                        128 MB                                 <dbname>_users_128m
Two ONLINE redo log files per instance  128 MB                                 <dbname>_redonm_128m
First and second control file           118 MB                                 <dbname>_control[1|2]_118m
TEMP tablespace                         258 MB                                 <dbname>_temp_258m
Server parameter file (SPFILE)          5 MB                                   <dbname>_spfile_raw_5m
Password file                           5 MB                                   <dbname>_pwdfile_5m

(<dbname> should be replaced with your database name.)

Comments:
- OCR and Voting disk: you need to create these raw logical volumes only once on the cluster. If you create more than one database on the cluster, they all share the same OCR and Oracle voting disk.
- SYSAUX: new system-managed tablespace that contains performance data and combines content that was stored in different tablespaces (some of which are no longer required) in earlier releases. This is a required tablespace for which you must plan disk space.
- UNDO: one tablespace for each instance, where n is the number of the instance.
- Redo logs: n is the instance number and m the log number.


- Disks need to be properly initialized before being added into volume groups. Do the following step for all the disks (LUNs) you want to configure for your RAC volume group(s) from node ksc:
  ksc# pvcreate -f /dev/rdisk/disk40

- Create the volume group directory with the character special file called group:
  ksc# mkdir /dev/vg_rac
  ksc# mknod /dev/vg_rac/group c 64 0x060000
  Note: 0x060000 is the minor number in this example. This minor number for the group file must be unique among all the volume groups on the system.

- Create the VG and extend the volume group:
  ksc# vgcreate /dev/vg_rac /dev/disk/disk40
  ksc# vgextend /dev/vg_rac /dev/disk/disk41
  Continue with vgextend until you have included all the needed disks for the volume group(s).

- Create logical volumes as shown in the table above for the RAC database with the following command (a concrete example follows at the end of this list):
  ksc# lvcreate -i 10 -I 1024 -L 100 -n Name /dev/vg_rac
  -i: number of disks to stripe across
  -I: stripe size in kilobytes
  -L: size of the logical volume in MB

- Check to see whether your volume groups are properly created and available:
  ksc# strings /etc/lvmtab
  ksc# vgdisplay -v /dev/vg_rac

- Export the volume group:
  - De-activate the volume group:
    ksc# vgchange -a n /dev/vg_rac
  - Create the volume group map file:
    ksc# vgexport -v -p -s -m mapfile /dev/vg_rac
  - Copy the mapfile to all the nodes in the cluster:
    ksc# rcp mapfile schalke:/tmp/scripts
- Import the volume group on the second node in the cluster:
  - Create the volume group directory with the character special file called group:
    schalke# mkdir /dev/vg_rac
    schalke# mknod /dev/vg_rac/group c 64 0x060000
    Note: the minor number has to be the same as on the other node.
  - Import the volume group:
    schalke# vgimport -v -s -N -m /tmp/scripts/mapfile /dev/vg_rac
    Note: the -N option is for HP-UX 11.31 agile addressing.
  - Check to see whether the devices are imported:
    schalke# strings /etc/lvmtab

- Disable automatic volume group activation on all cluster nodes by setting AUTO_VG_ACTIVATE to 0 in the file /etc/lvmrc. This ensures that the shared volume group vg_rac is not automatically activated at system boot time. If you need to have any other volume groups activated, you need to explicitly list them in the customized volume group activation section.

- It is recommended best practice to create symbolic links for each of these raw files on all systems of your RAC cluster:
  ksc/schalke# cd /oracle/RAC/ (directory where you want to have the links)
  ksc/schalke# ln -s /dev/vg_rac/<dbname>_system_508m system
  ksc/schalke# ln -s /dev/vg_rac/<dbname>_users_128m user
  etc.


- Change the permissions of the database volume group vg_rac to 777, and change the permissions of all raw logical volumes to 660 and the owner to oracle:dba:
  ksc/schalke# chmod 777 /dev/vg_rac
  ksc/schalke# chmod 660 /dev/vg_rac/r*
  ksc/schalke# chown oracle:dba /dev/vg_rac/r*

- Change the permissions of the OCR logical volume:
  ksc/schalke# chown root:dba /dev/vg_rac/rora_ocr_256m
  ksc/schalke# chmod 640 /dev/vg_rac/rora_ocr_256m

- Optional: to enable the Database Configuration Assistant (DBCA) later to identify the appropriate raw device for each database file, you must create a raw device mapping file, as follows:
  - Set the ORACLE_BASE environment variable:
    ksc/schalke$ export ORACLE_BASE=/opt/oracle/product
  - Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:
    ksc/schalke# mkdir -p $ORACLE_BASE/oradata/<dbname>
    ksc/schalke# chown -R oracle:oinstall $ORACLE_BASE/oradata
    ksc/schalke# chmod -R 775 $ORACLE_BASE/oradata
  - Change directory to the $ORACLE_BASE/oradata/<dbname> directory.
  - Enter a command similar to the following to create a text file that you can use to create the raw device mapping file:
    ksc# find /dev/vg_rac -user oracle -name 'r<dbname>*' -print > dbname_raw.conf
  - Create the dbname_raw.conf file so that it looks similar to the following:
    system=/dev/vg_rac/r<dbname>_system_508m
    sysaux=/dev/vg_rac/r<dbname>_sysaux_808m
    example=/dev/vg_rac/r<dbname>_example_168m
    users=/dev/vg_rac/r<dbname>_users_128m
    temp=/dev/vg_rac/r<dbname>_temp_258m
    undotbs1=/dev/vg_rac/r<dbname>_undotbs1_508m
    undotbs2=/dev/vg_rac/r<dbname>_undotbs2_508m
    redo1_1=/dev/vg_rac/r<dbname>_redo11_128m
    redo1_2=/dev/vg_rac/r<dbname>_redo12_128m
    redo2_1=/dev/vg_rac/r<dbname>_redo21_128m
    redo2_2=/dev/vg_rac/r<dbname>_redo22_128m
    control1=/dev/vg_rac/r<dbname>_control1_118m
    control2=/dev/vg_rac/r<dbname>_control2_118m
    spfile=/dev/vg_rac/r<dbname>_spfile_5m
    pwdfile=/dev/vg_rac/r<dbname>_pwdfile_5m
  - When you configure the oracle user's environment later in this chapter, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file:
    ksc$ export DBCA_RAW_CONFIG=$ORACLE_BASE/oradata/<dbname>/dbname_raw.conf
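As a concrete instance of the lvcreate step above, the SYSTEM and SYSAUX volumes for a hypothetical database named 'rac' might be created as follows; this is a sketch only, with a stripe width of 2 because the example volume group contains two disks, and the sizes taken from the table:

  ksc# lvcreate -i 2 -I 1024 -L 508 -n rac_system_508m /dev/vg_rac
  ksc# lvcreate -i 2 -I 1024 -L 808 -n rac_sysaux_808m /dev/vg_rac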

6.2.2 SG/SGeRAC Configuration

After SLVM set-up, you can now start the Serviceguard cluster configuration.

In general, you can configure your Serviceguard cluster using a lock disk or a quorum server. We describe here the cluster lock disk set-up. Since we have already configured one volume group for the entire RAC cluster, vg_rac (see chapter 6.2.1), we use vg_rac for the lock volume as well.

- Activate the lock disk on the configuration node ONLY. The lock volume can only be activated on the node where the cmapplyconf command is issued, so that the lock disk can be initialized accordingly:
  ksc# vgchange -a y /dev/vg_rac
- Create a cluster configuration template:
  ksc# cmquerycl -n ksc -n schalke -v -C /etc/cmcluster/rac.asc
- Edit the cluster configuration file (rac.asc). Make the necessary changes to this file for your cluster. For example, change the cluster name,


adjust the heartbeat interval and node timeout to prevent unexpected failovers due to DLM traffic. Configure all shared volume groups that you are using for RAC, including the volume group that contains the Oracle CRS files, using the parameter OPS_VOLUME_GROUP at the bottom of the file. Also, ensure that the right LAN interfaces are configured for the SG heartbeat according to chapter 4.2.
- Check the cluster configuration:
  ksc# cmcheckconf -v -C rac.asc
- Create the binary configuration file and distribute the cluster configuration to all the nodes in the cluster:
  ksc# cmapplyconf -v -C rac.asc
  Note: the cluster is not started until you run cmrunnode on each node, or cmruncl.
- De-activate the lock disk on the configuration node after cmapplyconf:
  ksc# vgchange -a n /dev/vg_rac
- Start the cluster and view it to be sure it is up and running. See the next section for instructions on starting and stopping the cluster.

How to start up the cluster:

- Start the cluster from any node in the cluster:
  ksc# cmruncl -v
  Or, on each node:
  ksc/schalke# cmrunnode -v
- Make all RAC volume groups and cluster lock volume groups shareable and cluster-aware (not packages) from the cluster configuration node. This has to be done only once:
  ksc# vgchange -S y -c y /dev/vg_rac
- Then, on all the nodes, activate the volume group in shared mode in the cluster. This has to be done each time you start the cluster:
  ksc/schalke# vgchange -a s /dev/vg_rac
- Check the cluster status:
  ksc# cmviewcl -v

How to shut down the cluster (not needed here):

- Shut down the RAC instances (if up and running).
- On all the nodes, deactivate the volume group in shared mode in the cluster:
  ksc/schalke# vgchange -a n /dev/vg_rac
- Halt the cluster from any node in the cluster:
  ksc# cmhaltcl -v
- Check the cluster status:
  ksc# cmviewcl -v

6.3 RAC 11g with ASM over RAW

As already mentioned above, a new I/O infrastructure that enables native built-in multipathing functionality is included in HP-UX 11i v3. This feature offers users continuous I/O access to a raw disk/LUN if any of the paths fails.

Due to this critical feature, SG/SGeRAC introduced support for ASM on RAW with HP-UX 11.31. It remains unsupported on earlier versions of HP-UX.


SG/SGeRAC requires the use of the newly introduced agile addressing, which provides the critical multipathing.

For Oracle RAC11g on HP-UX with ASM, please note:

- The following files can be placed in an ASM disk group: DATAFILE, CONTROLFILE, REDOLOG, ARCHIVELOG and SPFILE. You cannot put any other files, such as Oracle binaries or the two Oracle Clusterware files (OCR and Voting), into an ASM disk group.
- You cannot use Automatic Storage Management to store the Oracle Clusterware files (OCR and Voting). This is because they must be accessible before Oracle ASM starts.
- As this deployment option does not use HP Serviceguard Extension for RAC, you cannot configure shared logical volumes (the Shared Logical Volume Manager is a feature of SGeRAC).
- For Oracle RAC Standard Edition installations, ASM is the only supported storage option for database or recovery files.
- All of the devices in an ASM disk group should be the same size and have the same performance characteristics.
- Choose the redundancy level for the ASM disk group(s). The redundancy level that you choose for an ASM disk group determines how ASM mirrors files in the disk group, and determines the number of disks and the amount of disk space that you require, as follows:
  - External redundancy: an external redundancy disk group requires a minimum of one disk device. Typically, you choose this redundancy level if you have an intelligent storage subsystem such as an HP StorageWorks EVA or HP StorageWorks XP.
  - Normal redundancy: in a normal redundancy disk group, ASM uses two-way mirroring by default, to increase performance and reliability. A normal redundancy disk group requires a minimum of two disk devices (or two failure groups).
  - High redundancy: in a high redundancy disk group, ASM uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups).

To configure raw disks for database file storage, follow these steps. You will need the following raw disks:

Raw Disk for:                        File Size:  Comments:
OCR (Oracle Cluster Registry) [1/2]  256 MB      Oracle lets you have multiple redundant copies of the OCR. In this case you need two shared raw disks (n = 1 or 2). For HA reasons, they should not be on the same set of disks.
Oracle CRS voting disk [1/3/..]      256 MB      Oracle lets you have 3+ redundant copies of Voting. In this case you need 3+ shared raw disks (n = 1, 3, 5, ...). For HA reasons, they should not be on the same set of disks.
ASM disk #1 .. n                     10 GB       Disks 1 .. n

- To make sure that the disks are available, enter the following command on every node:
  ksc/schalke# ioscan -fnNkCdisk
  Note: the -N option is for HP-UX 11.31 agile addressing.
- If the ioscan command does not display device name information for a device that you want to use, enter the following command to install the special device files for any new devices:
  ksc/schalke# insf -e
  (Please note: this command resets the permissions to root for already existing device files, e.g. ASM disks!)
- The device names for the same disk can be different on each node. A disk can be identified as the same one via its WWID. The WWID of a disk can be checked via the following command:
  ksc/schalke# scsimgr lun_map -D /dev/rdisk/disk25 | grep WWID



The System Management Homepage also shows the WWID for each disk.

- For each disk that you want to use, enter the following command on any node to verify that it is not already part of an LVM volume group:
  ksc# pvdisplay /dev/disk/disk25
  If this command displays volume group information, the disk is already part of a volume group. The disks that you choose must not be part of an LVM volume group.

- We recommend creating a special Oracle device directory and using mknod to create device paths in this special Oracle folder. This has the advantage that you get the same names for the OCR and Voting files across all nodes in the cluster. In addition, it ensures that the permissions of these Oracle device files remain untouched by 'insf -e'.

Example for one ASM file:
  # mkdir /dev/oracle
  # ll /dev/rdisk/disk25
  crw-r----- 1 bin sys 23 0x000019 Jan 16 12:16 /dev/rdisk/disk25
  # mknod /dev/oracle/asmdisk1 c 23 0x000019

Later during set-up, for the ASM instance, set the ASM_DISKSTRING parameter to /dev/oracle/* (see the snippet after this list). Now, when 'insf -e' is run, it only touches the 'standard' device special files, not these special ones.

- Modify the owner, group, and permissions on the character raw device files on all nodes:
  - ASM and Voting disks:
    ksc/schalke# chown oracle:dba /dev/oracle/*
    ksc/schalke# chmod 660 /dev/oracle/*
  - OCR:
    ksc/schalke# chown root:dba /dev/oracle/OCR
    ksc/schalke# chmod 640 /dev/oracle/OCR

Optional: ASM Failure Groups

Oracle lets you configure so-called failure groups for the ASM disk group devices. If you intend to use a


normal or high redundancy disk group, you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure. To avoid failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.

l Please note that you cannot create ASM failure groups using DBCA; you have to create them manually by connecting to one ASM instance and using the following SQL commands:
$ export ORACLE_SID=+ASM1
$ sqlplus / as sysdba
SQL> startup nomount
SQL> create diskgroup DG1 normal redundancy
  2  FAILGROUP FG1 DISK '/dev/rdisk/disk30' name disk30,
  3  '/dev/rdisk/disk31' name disk31
  4  FAILGROUP FG2 DISK '/dev/rdisk/disk40' name disk40,
  5  '/dev/rdisk/disk41' name disk41;
Diskgroup created.
SQL> shutdown immediate;
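To verify the resulting failure group assignment, a query such as the following can be used (sketch):
SQL> select group_number, name, failgroup, path from v$asm_disk;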

Useful ASM v$ views:

l V$ASM_CLIENT -- In the ASM instance: shows each database instance using an ASM disk group. In the DB instance: shows the ASM instance if the database has open ASM files.
l V$ASM_DISK -- In the ASM instance: shows disks discovered by the ASM instance, including disks which are not part of any disk group. In the DB instance: shows a row for each disk in disk groups in use by the database instance.
l V$ASM_DISKGROUP -- In the ASM instance: shows disk groups discovered by the ASM instance. In the DB instance: shows each disk group mounted by the local ASM instance.
l V$ASM_FILE -- In the ASM instance: displays all files for each ASM disk group. In the DB instance: returns no rows.

6.3.2 SG/SGeRAC Configuration

After disk set-up, you can now start the Serviceguard cluster configuration.

In general, you can configure your Serviceguard cluster with either a cluster lock disk or a quorum server. Here, we use the Quorum Server, a server program running on a separate system that is used for tie-breaking: should equally sized groups of nodes become separated from each other, the quorum server allows one group to achieve quorum and form the cluster, while the other group is denied quorum and cannot start a cluster. The quorum server software can be downloaded for free from http://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=B8467BA.

l Create a cluster configuration template:
ksc# cmquerycl -n ksc -n schalke -v -C /etc/cmcluster/rac.asc

l Edit the cluster configuration file (rac.asc).
Make the necessary changes to this file for your cluster: for example, change the cluster name, and adjust the heartbeat interval and node timeout to prevent unexpected failovers due to DLM traffic. Configure all shared volume groups that you are using for RAC, including the volume group that contains the Oracle CRS files, using the parameter OPS_VOLUME_GROUP at the bottom of the file. Also ensure that the right LAN interfaces are configured for the SG heartbeat according to chapter 4.2. In addition, configure the quorum server here; an illustrative excerpt follows below.
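An illustrative excerpt of such a rac.asc file; all names and values below are examples only and must be adapted to your environment:

CLUSTER_NAME            rac_cluster
QS_HOST                 qserver
QS_POLLING_INTERVAL     300000000
HEARTBEAT_INTERVAL      1000000
NODE_TIMEOUT            8000000
OPS_VOLUME_GROUP        /dev/vg_rac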

l Check the cluster configuration: ksc# cmcheckconf -v -C rac.asc

l Create the binary configuration file and distribute the cluster configuration to all the nodes in the cluster: ksc# cmapplyconf -v -C rac.asc



Note: the cluster is not started until you run cmrunnode on each node or cmruncl.

l Start the cluster and view it to be sure it's up and running. See the next section for instructions on starting and stopping the cluster.

How to start up the cluster:

l Start the cluster from any node in the cluster:
ksc# cmruncl -v
Or, on each node:
ksc/schalke# cmrunnode -v

l Check the cluster status:
ksc# cmviewcl -v

How to shut down the cluster (not needed here):

l Shut down the RAC instances (if up and running)
l Halt the cluster from any node in the cluster:
ksc# cmhaltcl -v
l Check the cluster status:
ksc# cmviewcl -v

7. Preparation for Oracle Software Installation

The Oracle RAC11g installation requires you to perform a two-phase process in which you run the Oracle Universal Installer (OUI) twice. The first phase installs Oracle Clusterware and the second phase installs the Oracle Database 11g software with RAC.

If you have downloaded the software, you might have the following files:

l hpia64_11gR1_clusterware.zip      Oracle Clusterware
l hpia64_11gR1_database_1of2.zip    Oracle Database Software
l hpia64_11gR1_database_2of2.zip    Oracle Database Software

You can unpack the software with the following command as root user (and analogously for the two database files):
ksc# /usr/local/bin/unzip hpia64_11gR1_clusterware.zip

7.1 Prepare HP-UX Systems for Oracle software installation

l On HP-UX, most processes use a time-sharing scheduling policy. Time sharing can have detrimental effects on Oracle performance by descheduling an Oracle process during critical operations, for example, when it is holding a latch. HP-UX has a modified scheduling policy, referred to as SCHED_NOAGE, that specifically addresses this issue. Unlike the normal time-sharing policy, a process scheduled using SCHED_NOAGE does not increase or decrease in priority, nor is it preempted.

This feature is suited to online transaction processing (OLTP) environments because OLTP environments can cause competition for critical resources. The use of the SCHED_NOAGE policy with Oracle Database can increase performance by 10 percent or more in OLTP environments.

The SCHED_NOAGE policy does not provide the same level of performance gains in decision support environments because there is less resource competition. Because each application and server environment is different, you should test and verify that your environment benefits from the SCHED_NOAGE policy. When using SCHED_NOAGE, Oracle recommends that you exercise caution in assigning highest priority to Oracle processes. Assigning highest SCHED_NOAGE priority to Oracle processes can exhaust CPU resources on your system, causing other user processes to stop responding.

The RTSCHED and RTPRIO privileges grant Oracle the ability to change its process scheduling policy to SCHED_NOAGE and also tell Oracle what priority level it should use when setting the policy. The MLOCK privilege grants Oracle the ability to execute asynch I/Os through the HP asynch driver. Without this privilege, Oracle9i generates trace files with the following error message: "Ioctl ASYNCH_CONFIG error, errno = 1".
As root, do the following:

¡ If it does not already exist, create the /etc/privgroup file and add the following line to it:
dba MLOCK RTSCHED RTPRIO

¡ Use the following command to assign these privileges:
ksc/schalke# setprivgrp -f /etc/privgroup
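You can verify the result with getprivgrp; the output should be similar to the following (illustrative):
ksc/schalke# getprivgrp dba
dba: RTPRIO RTSCHED MLOCK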

l Create the /var/opt/oracle directory and make it owned by the oracle account. After installation, this directory will contain a few small text files that briefly describe the Oracle software installations and databases on the server. These commands will create the directory and give it appropriate permissions:
ksc/schalke# mkdir /var/opt/oracle
ksc/schalke# chown oracle:dba /var/opt/oracle
ksc/schalke# chmod 755 /var/opt/oracle

l Create the following Oracle directories:
¡ Local home directories:
Oracle Clusterware:
ksc/schalke# mkdir -p /opt/oracle/product/CRS
Oracle RAC:
ksc/schalke# mkdir -p /opt/oracle/product/RAC11g
ksc/schalke# chown -R oracle:dba /opt/oracle
ksc/schalke# chmod -R 775 /opt/oracle

l Set Oracle environment variables by adding an entry similar to the following example to each user's startup .profile file (for the Bourne or Korn shell) or .login file (for the C shell):

# @(#) $Revision: 72.2 $
# Default user .profile file (/usr/bin/sh initialization).

# Set up the terminal:
if [ "$TERM" = "" ]
then
        eval ` tset -s -Q -m ':?hp' `
else
        eval ` tset -s -Q `
fi
stty erase "^H" kill "^U" intr "^C" eof "^D"
stty hupcl ixon ixoff
tabs

# Set up the search paths:
PATH=$PATH:.

# Set up the shell environment:
set -u
trap "echo 'logout'" 0

# Oracle Environment
export ORACLE_BASE=/opt/oracle/product
export ORACLE_HOME=$ORACLE_BASE/RAC11g
export ORA_CRS_HOME=$ORACLE_BASE/CRS
export ORACLE_SID=<SID>
export ORACLE_TERM=xterm
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH

print ' '
print '$ORACLE_SID: '$ORACLE_SID
print '$ORACLE_HOME: '$ORACLE_HOME
print '$ORA_CRS_HOME: '$ORA_CRS_HOME
print ' '

# Set up the shell variables:
EDITOR=vi
export EDITOR
export PS1=`whoami`@`hostname`\['$ORACLE_SID'\]':$PWD$ '
REMOTEHOST=$(who -muR | awk '{print $NF}')
export DISPLAY=${REMOTEHOST%%:0.0}:0.0

# ALIAS
alias psg="ps -ef | grep"
alias lla="ll -rta"
alias sq="ied sqlplus '/as sysdba'"
alias oh="cd $ORACLE_HOME"
alias ohbin="cd $ORACLE_HOME/bin"
alias crs="cd $ORA_CRS_HOME"
alias crsbin="cd $ORA_CRS_HOME/bin"

7.2 Check Cluster Configuration with Cluster Verification Utility

Cluster Verification Utility (Cluvfy) is a cluster utility introduced with Oracle Clusterware 10g Release 2. The deployment domain of Cluvfy ranges from the initial hardware setup through the fully operational cluster for RAC deployment and covers all the intermediate stages of installation and configuration of the various components. Cluvfy is provided with two scripts: runcluvfy.sh, which is designed to be used before installation, and cluvfy, which resides in $ORA_CRS_HOME/bin. The script runcluvfy.sh contains temporary variable definitions which enable it to be run before installing Oracle Clusterware or Oracle Database. After you install Oracle Clusterware, use the command cluvfy to check prerequisites and perform other system readiness checks.

Before Oracle software is installed, to enter a cluvfy command, change to the staging directory and start runcluvfy.sh using the following syntax:
cd /mountpoint
./runcluvfy.sh options

With Cluvfy, you can either

l check the status for a specific component

cluvfy comp -list    Cluvfy displays a list of components that can be checked, with brief descriptions of how each component is checked.
cluvfy comp -help    Cluvfy displays detailed syntax for each of the valid component checks.

or

l check the status of your cluster/systems at a specific point (= stage) during your RAC installation.

cluvfy stage -list    Cluvfy displays a list of valid stages.
cluvfy stage -help    Cluvfy displays detailed syntax for each of the valid stage checks.

l Example 1: Checking network connectivity among all cluster nodes:
ksc$ <OraStage>/clusterware/runcluvfy.sh comp nodecon -n ksc,schalke [-verbose]

l Example 2: Performing pre-checks for cluster services setup:
ksc$ <OraStage>/clusterware/runcluvfy.sh stage -pre crsinst -n ksc,schalke [-verbose]
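l Example 3: Checking shared storage accessibility (the ssa component; a sketch following the same pattern as the examples above):
ksc$ <OraStage>/clusterware/runcluvfy.sh comp ssa -n ksc,schalke [-verbose]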


8. Install Oracle Clusterware

This section describes the procedures for using the Oracle Universal Installer (OUI) to install Oracle Clusterware.

For our installation shown here, we chose ASM over RAW.

» Login as Oracle User and set the ORACLE_HOME environment variable to the Oracle Clusterware home directory. Then start the Oracle Universal Installer from the Clusterware mount directory by issuing the commands:

$ export ORACLE_HOME=/opt/oracle/product/CRS
$ ./runInstaller &

Ensure that you have the DISPLAY set; see the example below.
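For example (the workstation name is hypothetical):
$ export DISPLAY=mywks:0.0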

» At the OUI Welcome screen, click Next.

» If you are performing this installation in an environment in which you have never installed Oracle database software then the OUI displays the Specify Inventory Directory and Credentials page.

Enter the inventory location and oinstall as the UNIX group name on the Specify Inventory Directory and Credentials page, then click Next.

» The Specify Home Details Page lets you enter the Oracle Clusterware home name and its location in the target destination.

Note that the Oracle Clusterware home that you identify in this phase of the installation is only for the Oracle Clusterware software; it must not be the same home that you will use in phase two to install the Oracle Database software with RAC.


» Next, the Product-Specific Prerequisite Checks screen comes up. The installer verifies that your environment meets all minimum requirements for installing and configuring Oracle Clusterware. Most probably you'll see a warning at the step "Checking recommended operating system patches", because some patches have been superseded by newer ones.

» In the next Specify Cluster Configuration screen you can specify the cluster name as well as the node information. If SG/SGeRAC is installed, you'll see the cluster configuration provided by SG/SGeRAC. Otherwise, you must configure the nodes on which to install Oracle Clusterware.

Check and correct if necessary the entries for private node name and virtual host name. The private node names determine which private interconnect will be used by CSS. Provide exactly one name that maps to a private IP address. CSS cannot use multiple private interconnects for its communication, hence only one name or IP address can be specified per node.


» On the Specify Network Interface Usage page the OUI displays a list of cluster-wide network interfaces. You can click Edit to change the classification of the interfaces as Public, Private, or Do Not Use. You must classify at least one interface as Public and one as Private.

Here, Private determines which private interconnect will be used by the RAC instances. It's equivalent to setting the init.ora CLUSTER_INTERCONNECTS parameter, but more convenient because it is a cluster-wide setting that does not have to be adjusted every time you add nodes or instances. RAC will use all of the interconnects listed as Private in this screen, and they all have to be up, just as their IP addresses have to be when specified in the init.ora parameter. RAC does not fail over between cluster interconnects; if one is down, the instances using it won't start.


» When you click Next, the OUI will look for the Oracle Cluster Registry file ocr.loc in the /var/opt/oracle directory. If the ocr.loc file already exists, and if it has a valid entry for the Oracle Cluster Registry (OCR) location, then the Voting Disk Location page appears and you should proceed to the next step. Otherwise, the Oracle Cluster Registry Location page appears.

Enter the complete path for the Oracle Cluster Registry file (not only the directory, but including the filename). Depending on your chosen deployment model, this might be a shared raw volume or a shared disk (you always need the character device --> rocr..). In our example, we mapped the raw device to /dev/oracle/ocr as described in chapter 6.3.

Starting with 10g R2, you can let Oracle manage redundancy for this OCR file. In this case, you need to specify 2 OCR locations. Assuming the storage itself provides redundancy, e.g. disk array LUNs or CVM mirroring, External Redundancy is sufficient and there is no need for Oracle Clusterware to manage redundancy. In any case, please ensure that the OCR copies are placed on different file systems for HA reasons.


» On the Voting Disk Page, enter a complete path and file name for the file in which you want to store the voting disk. Depending on your chosen deployment model, this might be a shared raw volume (rxxx) or a shared disk (/dev/rdisk/disk25).

In our example, we mapped the raw device to /dev/oracle/vote as described in chapter 6.3.

Starting with 10g R2, you can let Oracle manage redundancy for the Oracle Voting Disk file. In this case, you need to specify 3 locations. Assuming the storage itself provides redundancy, e.g. disk array LUNs or CVM mirroring, External Redundancy is sufficient and there is no need for Oracle Clusterware to manage redundancy. In any case, please ensure that the Voting Disk files are placed on different file systems for HA reasons.

» Next, Oracle displays a Summary page. Verify that the OUI should install the components shown on the Summary page and click Install.


During the installation, the OUI first copies software to the local node and then copies the software to the remote nodes.

» Then the OUI displays windows indicating that you must run the script root.sh on all nodes. The root.sh script prepares the OCR and Voting Disk and starts Oracle Clusterware. Only start root.sh on another node after the previous root.sh execution completes; do not execute root.sh on more than one node at a time.

ksc# /opt/oracle/product/CRS/root.sh
WARNING: directory '/opt/oracle/product' is not owned by root
WARNING: directory '/opt/oracle' is not owned by root
WARNING: directory '/opt' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/opt/oracle/product' is not owned by root. Changing owner to root
The directory '/opt/oracle' is not owned by root. Changing owner to root
The directory '/opt' is not owned by root. Changing owner to root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: ksc ksc-priv ksc
node 2: schalke schalke-priv schalke
Creating OCR keys for user 'root', privgrp 'sys'..
Operation successful.
Now formatting voting device: /dev/oracle/vote1
Now formatting voting device: /dev/oracle/vote2
Now formatting voting device: /votedisk/vote3
Format of 3 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
ksc
Cluster Synchronization Services is inactive on these nodes.
schalke
Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.

schalke# /opt/oracle/product/CRS/root.sh
WARNING: directory '/opt/oracle/product' is not owned by root
WARNING: directory '/opt' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/opt/oracle/product' is not owned by root. Changing owner to root
The directory '/opt' is not owned by root. Changing owner to root
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: ksc ksc-priv ksc
node 2: schalke schalke-priv schalke
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
ksc
schalke
Cluster Synchronization Services is active on all the nodes.
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.

As shown at the end of the output above, Oracle configures the NodeApps at the end of the last root.sh script execution in silent mode. You can see the Oracle VIP as follows:

ksc# netstat -in
Name      Mtu    Network      Address       Ipkts  Ierrs  Opkts  Oerrs  Coll
lan2:801  1500   15.136.24.0  15.136.28.31  0      0      0      0      0
lan2      1500   15.136.24.0  15.136.28.1   831    0      518    0      0
lan9000   2044   10.0.0.0     10.0.0.1      1593   0      2021   0      0
lo0       32808  127.0.0.0    127.0.0.1     1866   0      1866   0      0

» Next, the Configuration Assistants screen comes up. The OUI runs the Oracle Notification Server Configuration Assistant, Oracle Private Interconnect Configuration Assistant, and Cluster Verification Utility. These programs run without user intervention.

» When the OUI displays the End of Installation page, click Exit to exit the Installer.

» Verify your CRS installation by executing the olsnodes command from the $ORA_CRS_HOME/bin directory:
# olsnodes -n
ksc     1
schalke 2

» Now you should see the following processes running:

l oprocd -- Process monitor for the cluster. Note that this process will only appear on platforms that do not use HP Serviceguard with CSS.

l evmd -- Event manager daemon that starts the racgevt process to manage callouts.
l ocssd -- Manages cluster node membership and runs as oracle user; failure of this process results in cluster restart.
l crsd -- Performs high availability recovery and management operations such as maintaining the OCR. Also manages application resources, runs as root user, and restarts automatically upon failure.

You can check whether the Oracle processes evmd, ocssd, and crsd are running by issuing the following command:
# ps -ef | grep d.bin

» At this point, you have completed phase one, the installation of Cluster Ready Services.

» Please note that Oracle added the following three lines to the automatic startup file /etc/inittab.

h1:3:respawn:/sbin/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:3:respawn:/sbin/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:3:respawn:/sbin/init.d/init.crsd run >/dev/null 2>&1 </dev/null

Oracle Support recommends NEVER modifying these entries in the inittab or modifying the init scripts unless you use this method to stop a reboot loop or are given explicit instructions from Oracle support.

» To ensure that the Oracle Clusterware installation on all the nodes is valid, the following should be checked on all the nodes:

l $ORA_CRS_HOME/bin/crsctl check crs
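On a healthy cluster, the output should indicate that CSS, CRS, and EVM are running, similar to the following (illustrative):
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy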

9. Installation & Creation of Oracle Database RAC 11g

This part describes phase two of the installation procedures: installing the Oracle Database 11g software with Real Application Clusters (RAC).

For our installation shown here, we chose ASM over RAW.

» Login as Oracle User and set the ORACLE_HOME environment variable to the Oracle home directory. Then start the Oracle Universal Installer from Disk1 by issuing the commands:
$ export ORACLE_HOME=/opt/oracle/product/RAC11g
$ ./runInstaller &

Ensure that you have the DISPLAY set.


» As we want to use ASM, we have to select Advanced Installation.

» Select Enterprise Edition and click Next.

» The Oracle home name and path that you use in this step must be different from the home that you used during the Oracle Clusterware installation in phase one.


» At the Specify Hardware Cluster Installation Mode screen select Cluster Installation with both nodes 'ksc' and 'schalke' and click Next.

» Next, the Product-Specific Prerequisite Check screen comes up. The installer verifies that your environment meets all minimum requirements for installing and configuring a RAC 11g database. Most probably you'll see a warning at the step "Checking recommended operating system patches", because some patches have been superseded by newer ones.

» On the Select Configuration Option page you can choose to create a database, configure Oracle ASM, or perform a software-only installation.

You can install ASM into its own ORACLE_HOME so that it is decoupled from the database binaries. If you would like to do this, you need to select Oracle ASM.


For simplicity, in this cookbook we select 'Create a Database' and click Next.

» Select the General Purpose template for your cluster database, and click Next.

» At the Database Identification page enter the global database name and the Oracle system identifier (SID) prefix for your cluster database and click Next.


» Specify Database Config Details such as Automatic Memory Management and click Next.

» On the Management Options page, you can choose to manage your database with Database Control.


» At the Database Storage Option page you can select a storage type for the database. Please select the storage option that applies to your chosen deployment model.

Here, we illustrate an installation with Oracle ASM:

As we do not want to use a separate ORACLE_HOME for ASM, we just continued with the installation.

» Here, you can specify Backup & Recovery Options. As this is out of the scope of this cookbook, we do not enable automated backups.


» Now, you can configure Oracle ASM. Set the Disk Discovery Path to the directory where you have prepared the disks for ASM (in our example, /dev/oracle/*).


» At this page you can enter the passwords for your database. You can enter the same or different passwords for the users SYS and SYSTEM, etc.

» Check the Group information and click Next.


» Do not register for Oracle Configuration Manager.

» The Summary page displays the software components that the OUI will install and the space available in the Oracle home, with a list of the nodes that are part of the installation session. Verify the details about the installation that appear on the Summary page and click Install, or click Back to revise your installation.


During the installation, the OUI copies software to the local node and then copies the software to the remote nodes.

» Next, the Configuration Assistants screen comes up. These programs run without user intervention.

» In the next step, the database is created and started.


Then, OUI prompts you to run the root.sh script from the new RDBMS Home on all the selected nodes as root user.

» Congratulations ... you have now your RAC database configured :-)

» You can check the installation with the OCR commands $ORA_CRS_HOME/bin/ocrdump, $ORA_CRS_HOME/bin/ocrcheck, and $ORA_CRS_HOME/bin/crs_stat. The crs_stat command provides a description of the Oracle environment available in the cluster; crs_stat -t gives you a more compact output.

In addition, we would recommend copying the sample CRS resource status query script from Oracle Metalink Note:259301.1:

#!/usr/bin/ksh
#
# Sample CRS resource status query script
#
# Description:
#    - Returns formatted version of crs_stat -t, in tabular
#      format, with the complete rsc names and filtering keywords
#    - The argument, $RSC_KEY, is optional and if passed to the script, will
#      limit the output to HA resources whose names match $RSC_KEY.
# Requirements:
#    - $ORA_CRS_HOME should be set in your environment

RSC_KEY=$1
QSTAT=-u
AWK=/sbin/awk    # if not available use /usr/bin/awk
ORA_CRS_HOME=/opt/oracle/product/CRS

# Table header:
echo ""
$AWK \
  'BEGIN {printf "%-45s %-10s %-18s\n", "HA Resource", "Target", "State";
          printf "%-45s %-10s %-18s\n", "-----------", "------", "-----";}'

# Table body:
$ORA_CRS_HOME/bin/crs_stat $QSTAT | $AWK \
  'BEGIN { FS="="; state = 0; }
  $1~/NAME/ && $2~/'$RSC_KEY'/ {appname = $2; state=1};
  state == 0 {next;}
  $1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
  $1~/STATE/ && state == 2 {appstate = $2; state=3;}
  state == 3 {printf "%-45s %-10s %-18s\n", appname, apptarget, appstate; state=0;}'
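If you save this script e.g. as crsstat (a file name we chose for illustration), it can be called with an optional filter key:

# crsstat          (lists all HA resources)
# crsstat ora.ksc  (lists only resources whose names match ora.ksc)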

10. SGeRAC Toolkit for Oracle RAC

HP offers a Serviceguard toolkit specifically for deployments with SGeRAC and Oracle RAC. This toolkit is free of charge and comes with HP Serviceguard 11.18; for earlier versions, it can be downloaded from http://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=SGeRAC-Tk. We highly encourage customers to use this toolkit for RAC deployments based on SGeRAC. The toolkit leverages the Multi-Node Package and Simple Package Dependency features introduced with HP Serviceguard A.11.17 and provides a uniform, easy-to-manage and intuitive method to coordinate the operation of this combined software stack, across the full range of storage management options supported by SGeRAC. For more details, please read the following technical white paper: http://docs.hp.com/en/8987/sgeractoolkit-wp.pdf.

11. Tips & Tricks

Oracle Clusterware:

l CRS and 10g Real Application Clusters; Oracle Metalink Note:259301.1
l How to start the 10g CRS ClusterWare; Oracle Metalink Note:309542.1
l How to Clean Up After a Failed CRS Install; Oracle Metalink Note:239998.1
l How to Stop the Cluster Ready Services (CRS); Oracle Metalink Note:263897.1
l Stopping Reboot Loops When CRS Problems Occur; Oracle Metalink Note:239989.1
l Troubleshooting CRS Reboots; Oracle Metalink Note:265769.1
l CRS 10g Diagnostic Collection Guide; Oracle Metalink Note:272332.1
l What Are The Default Settings For MISSCOUNT In 10g RAC?; Oracle Metalink Note:300063.1
l CSS Timeout Computation in 10g RAC 10.1.0.3; Oracle Metalink Note:294430.1
l How to Remove CRS Auto Start and Restart for a RAC Instance; Oracle Metalink Note:298073.1




VIPs / Interconnect / Public Interface:

l Configuring the HP-UX Operating System for the Oracle 10g VIP; Oracle Metalink Note:296874.1
l How to Configure Virtual IPs for 10g RAC; Oracle Metalink Note:264847.1
l How to change VIP and VIP/Hostname in 10g; Oracle Metalink Note:271121.1
l Modifying the VIP of a Cluster Node; Oracle Metalink Note:276434.1
l How to Change Interconnect/Public Interface IP Subnet in a 10g Cluster; Oracle Metalink Note:283684.1
l Troubleshooting TAF Issues in 10g RAC; Oracle Metalink Note:271297.1
l Oracle 10g VIP (Virtual IP) changes in Oracle 10g 10.1.0.4; Oracle Metalink Note:296878.1

OCR / Voting:

l How to Restore a Lost Voting Disk in 10g; Oracle Metalink Note:279793.1
l Repairing or Restoring an Inconsistent OCR in RAC; Oracle Metalink Note:268937.1

ASM:

l ASM Instance Shuts Down Cleanly On Its Own; Oracle Metalink Note:277274.1

Migration:

l How to migrate from 9iRAC to RAC10; CTC Technical Paper

Adding/Removing Nodes:

l Adding a Node to a 10g RAC Cluster; Oracle Metalink Note:270512.1
l Removing a Node from a 10g RAC Cluster; Oracle Metalink Note:269320.1

12. Known Issues & Bug Fixes

still empty :-)
