Copyright © 2016 Infortrend Technology, Inc. All rights reserved. Infortrend, ESVA, EonStor, EonNAS, and EonPath are trademarks or registered
trademarks of Infortrend. All other marks and names mentioned herein may be trademarks of their respective owners. The information contained herein is
subject to change without notice. Content provided as is, without express or implied warranties of any kind.
Version: 1.0
Abstract:
This application note covers key operations in detail to help you better understand
how to install Oracle database and Real Application Cluster (RAC). By following the
steps described herein, you should be able to deploy EonStor DS and GS storage
devices in the RAC environment.
Deploying Oracle
Real Application Clusters 11g R2
on Red Hat Linux 6.x
Application Note
Table of Contents
Overview
Verifying System Requirements
    Operating System Requirements
    Server Hardware and Software Requirements
    Network Requirements
Storage Configuration
    Topology and Storage Configuration Sample
    Storage Configuration Recommendation
Environment Setup
    1. Configure Server Nodes
        Enable Multipath
        Define Every Server Node's Name
        Edit Network Setting
    2. Create Oracle Inventory Group
    3. Set the Password of Grid and Oracle User
    4. Modify Environment Variables
    5. Modify Kernel Parameters
    6. Edit Login Parameters
    7. Setting Network Time Protocol for Cluster Time Synchronization
        Edit ntp.conf
        Check the NTP Status
        Start to Synchronize Time
    8. UDEV SCSI Configuration for ASM Disks
    9. Install Grid Infrastructure
    10. Create ASM Instances
    11. Install Oracle Package
    12. Create Oracle Database
Summary
Reference
Overview
Oracle Real Application Clusters (RAC) is a clustered database built on a high-availability infrastructure that harnesses the processing power of multiple database instances running on different server nodes against a shared storage pool. All servers in the cluster can actively access the database files on the shared storage pool and handle application workloads simultaneously, with database workloads distributed evenly across the servers. RAC manages and synchronizes all read and write activity in the cluster and ensures that data stays coordinated among server nodes. In addition, if one or more servers fail, the database connections and workloads on the failed nodes fail over to the remaining servers, which continue to provide database services.
Verifying System Requirements
Operating System Requirements
Make sure your operating system is supported by Oracle (check https://support.oracle.com). Each node in the cluster must run the same operating system.
Server Hardware and Software Requirement
Component Requirement
Chip Architecture All servers must have the same
chip architecture, e.g. all 32-bit
processors or all 64-bit processors.
CPU CPU is certified by Oracle.
Physical Memory At least 1.5 GB.
Swap Space Equal to the amount of RAM.
Temporary Space (/tmp) At least 1 GB.
Resolution for Monitor Display A minimum of 1024 x 786.
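As a quick sanity check, the memory- and space-related minimums in the table above can be verified with a short shell sketch. The helper name meets_min is ours, not Oracle's, and this is a convenience sketch rather than a replacement for the installer's own prerequisite checks:

```shell
#!/bin/bash
# Preflight sketch for the requirements table above (Linux; values in KB).
# meets_min ACTUAL REQUIRED -> succeeds when ACTUAL >= REQUIRED
meets_min() { [ "$1" -ge "$2" ]; }

ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
tmp_kb=$(df -Pk /tmp | awk 'NR==2 {print $4}')   # column 4 = available space

meets_min "$ram_kb"  $((1536 * 1024)) && echo "RAM OK"  || echo "RAM below 1.5 GB"
meets_min "$swap_kb" "$ram_kb"        && echo "swap OK" || echo "swap smaller than RAM"
meets_min "$tmp_kb"  $((1024 * 1024)) && echo "/tmp OK" || echo "/tmp below 1 GB"
```

Run the script on every node before starting the installation.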
Network Requirements
Ensure each node has at least three network interface cards (NICs) available: one NIC for the public network and two NICs for private network traffic.
Public interface names must be the same on all nodes, and the public interfaces must be able to communicate with all nodes.
Private interface names must be the same on all nodes, and the private interfaces must be able to communicate with all nodes.
Storage Configuration
Topology and Storage Configuration Sample
DS 1012RE Topology and Configuration Sample
[Diagram: DS 1012RE sample topology — Server Node 1 and Server Node 2 form a cluster and connect through Switch 1 and Switch 2 to redundant controllers A and B over 10 Gb/s iSCSI host channels Ch4 and Ch5. Each controller hosts a logical drive/pool built from 6x 3TB NL-SAS HDDs, partitioned into 100 GB volumes shared by both nodes.]
GS 3012R Topology and Configuration Sample
Storage Configuration Recommendation
If you are configuring LDs (logical drives) for a redundant-controller system, assign the LDs equally to both controllers so that the computing power of the partner controllers is fully utilized. For example, if you have 2 LDs, assign 1 LD to controller A and the other to controller B. In addition, if one controller in a redundant-controller system fails, the failover process takes only a few seconds and is transparent to users.
Environment Setup
1. Configure Server Nodes
Enable Multipath
Ensure that all server nodes in the cluster have been updated and include the device-mapper-multipath package, create or edit /etc/multipath.conf to enable the multipath service, and restart the service. For more information, please refer to Enabling Multi-pathing on EonStor DS with Red Hat Enterprise Linux 6 Device Mapper.
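As a minimal sketch of the step above on RHEL 6 (your actual /etc/multipath.conf contents should follow the Infortrend multipathing document referenced above), the package can be installed and the service enabled as follows:

```shell
# Install the multipath package and enable it with a default configuration
# (RHEL 6; run on every server node).
yum install -y device-mapper-multipath
mpathconf --enable --with_multipathd y   # writes /etc/multipath.conf and starts multipathd
service multipathd restart               # restart after any further edits to multipath.conf
multipath -ll                            # list the multipath devices that were detected
```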
Define Every Server Node's Name
Even if you are using DNS, Oracle recommends that you add lines to the /etc/hosts file on each node, specifying the public IP addresses. Configure the /etc/hosts file so that it is similar to the following example:
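A sketch of such a file is shown below. The public 20.20.0.x addresses follow the addressing used elsewhere in this note; the private and virtual addresses and the -priv/-vip suffixes are our assumptions, so substitute your own plan:

```shell
# /etc/hosts -- identical on node01 and node02 (addresses are placeholders)
127.0.0.1    localhost localhost.localdomain

# Public network
20.20.0.1    node01
20.20.0.2    node02

# Private interconnect
10.10.0.1    node01-priv
10.10.0.2    node02-priv

# Virtual IPs used by Oracle Clusterware
20.20.0.11   node01-vip
20.20.0.12   node02-vip
```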
Edit Network Setting
On Server Node 1:
# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=node01
NETWORKING_IPV6=no
On Server Node 2:
# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=node02
NETWORKING_IPV6=no
2. Create Oracle Inventory Group
It is recommended to create separate groups and users, dividing access privileges by job role, such as software installer, database owner or operator, administrator, and so on. Even though these roles are often combined or may not be required at this moment, it is still suggested to create them at the beginning.
Log in as root on every node, create the following group (by using groupadd command), user accounts (by
using useradd command) and group assignments. According to Oracle documentation, a user created to
own only Oracle Grid Infrastructure software installations is called the grid user. A user created to own
either all Oracle installations, or only Oracle database installations, is called the oracle user.
The -g option specifies the primary group, which must be the Oracle Inventory group. The -u option
specifies the user ID. You must note the user ID number because you need it during pre-installation.
The -G option specifies the secondary groups, which must include the OSDBA group.
# /usr/sbin/groupadd -g 501 oinstall
# /usr/sbin/groupadd -g 502 dba
# /usr/sbin/groupadd -g 503 oper
# /usr/sbin/groupadd -g 504 asmadmin
# /usr/sbin/groupadd -g 505 asmoper
# /usr/sbin/groupadd -g 506 asmdba
# /usr/sbin/useradd -u 501 -g oinstall -G dba,asmdba,asmadmin,asmoper grid
# /usr/sbin/useradd -u 502 -g oinstall -G dba,oper,asmdba oracle
Note that the user performing the Oracle RAC installation must belong to the Oracle Inventory group and
the OSDBA group (typically oinstall and dba). If this is not the case, then the installation will fail.
3. Set the Password of Grid and Oracle User
Set the password of the user that will own Oracle Grid Infrastructure and password of the Oracle user.
# passwd grid
# passwd oracle
4. Modify Environment Variables
Modify the grid user's shell profile with any text editor: vi /home/grid/.bash_profile.
The following is a configuration sample for node 1; you have to set the variables on all server nodes to match your
specific environment. For node 2, follow most of the node 1 settings but set export ORACLE_SID=+ASM2.
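The sketch below illustrates a grid user's ~/.bash_profile for node 1. The ORACLE_BASE and ORACLE_HOME paths are assumptions based on the install locations used later in this note, so adjust them to your own layout:

```shell
# /home/grid/.bash_profile on node 1 (paths are assumptions; adjust to your layout)
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0.3/grid   # Grid Infrastructure home chosen during install
export ORACLE_SID=+ASM1                     # use +ASM2 on node 2
export PATH=$ORACLE_HOME/bin:$PATH
```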
5. Modify Kernel Parameters
Oracle 11g R2 requires specific kernel parameter settings on all server nodes. Edit /etc/sysctl.conf and add the lines needed to satisfy the Oracle installer's requirements. The values given are minimums, so if your system already uses a larger value, do not change it.
To activate the new kernel parameter values on the currently running system, run the command sysctl -p as root on all Oracle RAC nodes in the cluster.
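For reference, the minimum values commonly documented by Oracle for 11g R2 are listed below. Treat this as a sketch and verify against the installer's own prerequisite checks; kernel.shmmax in particular should be sized to roughly half your physical RAM:

```shell
# Typical /etc/sysctl.conf additions for Oracle 11g R2 (minimum values)
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295          # adjust to about half of physical RAM
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
```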
6. Edit Login Parameters
To improve overall performance, you must increase the shell limits for the oracle and grid users. Add the following line to the /etc/pam.d/login file:
session required /lib64/security/pam_limits.so
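The pam_limits module enforces the per-user limits defined in /etc/security/limits.conf. The entries below are the values Oracle commonly documents for 11g R2 (a sketch — verify against the installer's prerequisite checks), and should be repeated for the grid user:

```shell
# /etc/security/limits.conf additions for the oracle user (repeat for grid)
oracle  soft  nproc   2047
oracle  hard  nproc   16384
oracle  soft  nofile  1024
oracle  hard  nofile  65536
oracle  soft  stack   10240
oracle  hard  stack   32768
```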
7. Setting Network Time Protocol for Cluster Time Synchronization
Oracle RAC requires the same time zone environment variable setting on all cluster nodes; this default time zone is used for databases, Oracle ASM, and any other managed processes. You can synchronize the system clock with a remote server over the Network Time Protocol (NTP).
Edit ntp.conf
The NTP daemon is configured by editing /etc/ntp.conf. The key directives are as follows:
restrict 127.0.0.1: allow unrestricted access from the local host.
fudge: declaring the local clock to be stratum 10 makes ntpd fall back to the local clock when no timeservers are available. Stratum levels define the distance from the reference clock: a server synchronized to a stratum-n server runs at stratum n + 1, while a stratum-0 device is assumed to be accurate with little or no delay associated with it.
server <IP Address>: synchronize with the timeserver at the given address.
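Putting these directives together, a minimal /etc/ntp.conf for node 1 might look like the sketch below (the upstream server name is a placeholder; 127.127.1.0 is the conventional pseudo-address of the local clock):

```shell
# /etc/ntp.conf on node 1 (upstream server is a placeholder)
restrict 127.0.0.1              # unrestricted access from the local host
server 0.pool.ntp.org           # an upstream timeserver, if reachable
server 127.127.1.0              # the local clock as a fallback
fudge  127.127.1.0 stratum 10   # mark the local clock as stratum 10
driftfile /var/lib/ntp/drift
```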
Check the NTP Status
Use the ntpq command (for example, ntpq -p) to check NTP status. If you get any connection-refused errors, the time server is not responding, or the NTP daemon/port is not started or listening.
Start to Synchronize Time
You can force ntpd to synchronize on service startup by modifying /etc/sysconfig/ntpd as in the following example.
On Server Node 1:
# vi /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
SYNC_HWCLOCK=no
NTPDATE_OPTIONS=""
Then, restart the NTP service:
# /sbin/service ntpd restart
On Server Node 2:
Run crontab to schedule the time of synchronization.
# crontab -l
*/1 * * * * /usr/sbin/ntpdate 20.20.0.1
*/1 * * * *: run every minute, i.e. check with node 1 every 1 minute.
/usr/sbin/ntpdate <IP Address>: use ntpdate to correct the time against the targeted NTP server.
8. UDEV SCSI Configuration for ASM Disks
For Oracle Automatic Storage Management (ASM) to use storage disks, it needs to be able to identify those devices consistently and to configure them with the correct ownership and permissions. You can use the Linux device manager udev for this configuration: udev applies rules defined in files in the /etc/udev/rules.d directory to the device nodes listed in the /dev directory.
for i in b c d e f g h i;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done
for i in b c d e f g h i; do ... done: runs the commands between do and done once for each of the eight device letters (sdb through sdi).
"KERNEL==\"sd*\", BUS==\"scsi\"": matches every device whose name begins with "sd" and tells udev what to do with it.
PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\": tests each device that matches the previous pattern to see whether it is the identified disk, and returns every matched device's SCSI ID.
NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\": creates the device name and defines its ownership and permissions.
Then, restart the udev service and verify the results as follows:
# ls /etc/udev/rules.d/99-oracle-asmdevices.rules
# /sbin/start_udev
# ls -l /dev/asm*
ls /etc/udev/rules.d/99-oracle-asmdevices.rules: confirm that the rules file generated above exists.
/sbin/start_udev: restart the udev service.
ls -l /dev/asm*: check that the disks are now available under the "asm*" alias with the correct ownership and permissions.
Then, copy the file /etc/udev/rules.d/99-oracle-asmdevices.rules to the other server nodes.
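The copy step above can be sketched with scp, using the node02 hostname from this note's examples:

```shell
# Push the udev rules to node 2 and reload udev there so the asm-disk*
# aliases appear on that node as well.
scp /etc/udev/rules.d/99-oracle-asmdevices.rules node02:/etc/udev/rules.d/
ssh node02 /sbin/start_udev
ssh node02 ls -l /dev/asm*   # verify ownership and permissions on node 2
```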
9. Install Grid Infrastructure
On Server Node 1
Log in as the grid user, download Oracle Database 11g Release 2 and Oracle Grid Infrastructure 11g Release 2 for Linux from the link below, and unzip all files:
http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linx8664soft-100572.html
On Server Node 2
Unzip Oracle Grid Infrastructure 11g R2
Log in as root on both nodes
# cd /u01/grid/rpm
# rpm -iv cvuqdisk-1.0.9-1.rpm
cd /u01/grid/rpm: Locate the cvuqdisk RPM package, which is in the directory rpm on the installation
media.
rpm -iv cvuqdisk-1.0.9-1.rpm: use this command to install the cvuqdisk package.
# xhost +
# su - grid
# cd /u01/grid
# ./runInstaller
xhost +: allow any user to open an X window on this display.
su - grid: switch to the grid user.
cd /u01/grid: go to the folder where you unzipped the installer.
./runInstaller: start the installation process.
On the first screen of the installer, select “Install and Configure Grid Infrastructure for a Cluster.”
Select Typical Installation
On the "Specify Cluster Configuration" screen, enter the SCAN name and click the "Add" button to add the
details of the second node in the cluster, and then click the "OK" button.
Note: The SCAN (Single Client Access Name) is a single name through which clients reach the cluster's virtual IP addresses.
Click the "SSH Connectivity..." button and enter the password for the "grid" user. Click the "Setup" button to configure SSH connectivity, and the "Test" button to test it once it is completed.
Click the "Identify network interfaces..." button and check the public and private networks are specified
correctly. Then click the "OK" button and the "Next" button on the previous screen.
Enter "/u01/app/11.2.0.3/grid" as the software location and "Automatic Storage Management" as the cluster registry storage type. Enter the ASM password and click the "Next" button.
Set the redundancy to "External," select all 4 disks, and click the "Next" button.
Check the configuration summary and click the "Finish" button.
The installer will start installing the product.
When prompted, run the following scripts on each node.
On both Server Node 1 and Node 2:
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/11.2.0/grid/root.sh
On Server Node 1:
Run crs_stat -t to check the status of all resources.
10. Create ASM Instances
Oracle ASM Configuration Assistant (ASMCA) supports installing and configuring Oracle ASM instances, disk
groups, volumes, and Oracle Automatic Storage Management Cluster File System. To start ASMCA, log in as
grid and enter asmca at a command prompt inside the Oracle Grid Infrastructure home.
# su - grid
$ asmca
The GUI tool starts and attempts to connect to the Oracle ASM instance whose Oracle system identifier (SID) is set to +ASM.
Oracle ASM Configuration Assistant enables you to configure or create Oracle ASM disk groups with
the Disk Groups tab. The disk group tab displays selected details about the disk group, such as name, size,
free space, usable space, redundancy level, and state of the disk group. You can create a disk group by
clicking Create.
11. Install Oracle Package
Log in as the oracle user and run the installer to install the Oracle Database software.
# su - oracle
$ cd /u01/database
$ ./runInstaller
In the Select Installation Option window, select "Install database software only."
In the Node Selection page, select all the nodes on which to install the Real Application Clusters database.
Select your preferred product languages.
Select Database edition.
In the Installation Location window, specify the locations of the Oracle base and the software files.
In the Privileged Operating System Groups window, select dba for Database Administrator (OSDBA) Group
and oper for Database Operator (OSOPER) Group and click Next.
Check the configuration summary and click Finish.
12. Create Oracle Database
During installation, you can direct the installer to create and configure a new database by launching
Database Configuration Assistant (DBCA).
To start the DBCA on Server Node 1, run the command dbca from the $ORACLE_HOME/bin directory.
$ dbca
After the welcome page appears, select Oracle Real Application Clusters database and click Next.
Select Create a Database.
Based on your requirement, select General Purpose or Transaction Processing, Custom Database, or Data
Warehouse.
In the Database Identification page, select Admin-Managed or Policy-Managed, define the database name and SID prefix, and select the nodes in the cluster.
In the Management Options page, set up your database so it can be managed with Oracle Enterprise Manager.
Oracle Enterprise Manager provides Web-based management tools for individual or multiple databases.
Select Configure Database Control for local management to manage your database locally. If you choose
this option, you can additionally check Enable Alert Notifications for Oracle to e-mail you alerts regarding
potential problems, and check Enable Daily Backup to Recovery Area to set up daily backup schedule.
For security reasons, you must specify administrative passwords, either by setting a different password for each administrative account or by using the same password for all accounts.
For the storage type, there are two options: a file system or Oracle Automatic Storage Management (ASM). We recommend that you choose ASM. In Storage Locations, specify the location of storage for the database files (from templates, a common location for all database files, or Oracle-Managed Files).
In Recovery Configuration, you can choose the recovery options for the database:
Specify Flash Recovery Area: designate a backup and recovery area and specify its directory location and size.
Enable Archiving: enable archiving of database redo logs, which can be used to recover the database. You can accept the default archive mode settings or change them by selecting Edit Archive Mode Parameters.
In Initialization Parameters, based on your requirements, you can set parameters for memory management; specify the smallest block size and the maximum number of operating system user processes that can simultaneously connect to the database; define the character sets used by your database; and select the connection mode, either dedicated server mode or shared server mode.
In the Database Storage window, a navigation tree displays the storage structure of your database (control files, data files, redo log groups, and so forth). If you are not satisfied with the storage structure or parameters, you can make changes: create a new object with Create or delete an existing object with Delete.
In the Creation Options window, click Finish.
In the Summary window, click OK to create the database. Click Exit on the Database Configuration Assistant window after the database creation is completed.
Summary
After working through these steps and parameter settings, you should have a better understanding of how to install Oracle RAC and be able to deploy the EonStor DS and GS families for your database environment. We recommend reading the reference documents available on the Oracle website if you wish to gain a more comprehensive understanding of Oracle Database.
Reference
Oracle Database
https://www.oracle.com/database/index.html
Oracle Real Application Clusters
https://www.oracle.com/database/real-application-clusters/index.html
11g Release 2 Database Installation Guide
https://docs.oracle.com/cd/E11882_01/install.112/e49316/toc.htm