
Practicing Solaris Cluster using VirtualBox

Example configuration to run a training and development cluster environment on a single system

Combining technologies to work

Thorsten Frueauf, 10/16/2009

This white paper describes how to configure a training and development environment for Solaris 10 and Solaris Cluster 3.2 on a physical system running OpenSolaris, using technologies like VirtualBox, software quorum, Solaris Container Clusters (Zone Clusters), Crossbow, IPsec and COMSTAR iSCSI.


Table of Contents

1 Introduction
2 Host Configuration
  2.1 BIOS Configuration
  2.2 OpenSolaris Configuration
    2.2.1 Network Configuration
    2.2.2 Filesystem Configuration
    2.2.3 COMSTAR / iSCSI Target Configuration
  2.3 Install VirtualBox
  2.4 Install rdesktop
  2.5 Download Solaris 10 05/09 (Update 7) ISO image
  2.6 Download Solaris Cluster 3.2 01/09 archive
3 VirtualBox Configuration
  3.1 VirtualBox Guest Configuration
    3.1.1 Virtual Disk Configuration
    3.1.2 Virtual Machine Configuration
  3.2 VirtualBox Guest Solaris Configuration
    3.2.1 First Guest Installation (S10-U7-SC-32U2-1)
    3.2.2 Second Guest Installation (S10-U7-SC-32U2-2)
  3.3 Getting Crash dumps from Solaris guests
    3.3.1 Booting Solaris with kernel debugger enabled
    3.3.2 How to break into the kernel debugger
    3.3.3 Forcing a crash dump
    3.3.4 Crash dump analysis with Solaris CAT
4 Solaris Cluster Configuration
  4.1 Solaris Cluster Installation
    4.1.1 First node cluster installation (s10-sc32-1)
    4.1.2 First node cluster configuration (s10-sc32-1)
    4.1.3 Second node cluster installation (s10-sc32-2)
    4.1.4 Second node cluster configuration (s10-sc32-2)
  4.2 iSCSI Initiator Configuration
  4.3 ZFS zpool Configuration for Data
  4.4 Software Quorum Configuration
  4.5 IPsec Configuration for the cluster interconnect
  4.6 Zone Cluster Configuration
    4.6.1 First Zone Cluster Configuration (zc1)
    4.6.2 Second Zone Cluster Configuration (zc2)
  4.7 Resource Group and HA ZFS Configuration (zc1)
  4.8 HA MySQL Configuration (zc1)
  4.9 HA Tomcat Configuration (zc1)
  4.10 Scalable Apache Configuration (zc2)
A References


1 Introduction

For developers it is often convenient to have all tools necessary for their work in one place, ideally on a laptop for maximum mobility.

For system administrators, it is often critical to have a test system on which to try things out and learn about new features. Of course, the system needs to be low-cost and transportable to wherever they need to be.

HA Clusters are often perceived as complex to set up and resource-hungry in terms of hardware requirements.

This white paper explains how to set up a single x86 based system (like a laptop) with OpenSolaris, configuring a training and development environment for Solaris 10 / Solaris Cluster 3.2 and using VirtualBox to set up a two-node cluster. The configuration can then be used to practice various technologies:

OpenSolaris technologies like Crossbow (to create virtual network adapters), COMSTAR (to export iSCSI targets from the host, which the Solaris Cluster nodes use via their iSCSI initiators as shared storage and quorum device), ZFS (to export a ZFS volume as an iSCSI target and as a failover file system within the cluster) and IPsec (to secure the cluster private interconnect traffic) are used on the host system and in the VirtualBox guests to configure Solaris 10 / Solaris Cluster 3.2.

Solaris Cluster technologies like software quorum and zone clusters are used to set up HA MySQL and HA Tomcat as failover services running in one virtual cluster. A second virtual cluster is used to show how to set up Apache as a scalable service. The instructions can be used as a step-by-step guide for any x86 based system that is capable of running OpenSolaris. To find out whether your system works, simply boot the OpenSolaris live CD-ROM and confirm with the Device Driver Utility (DDU) that all required components are able to run. The hardware compatibility list can be found at http://www.sun.com/bigadmin/hcl/.


2 Host Configuration

The example host system used throughout this white paper is a Toshiba Tecra® M10 Laptop with the following hardware specifications:

• 4 GB main memory
• Intel® Core™2 Duo CPU
• 160 GB SATA hard disk
• 1 physical network NIC (1000 Mbit) – e1000g0
• 1 wireless network NIC (54 Mbit) – iwh0

The system should have a minimum of 3 GB of main memory in order to host the two VirtualBox Solaris guest systems.

2.1 BIOS Configuration

The Toshiba Tecra M10 has been updated to BIOS version 2.0. By default, the option to use the CPU virtualization capabilities is disabled. This option needs to be enabled in order to use 64-bit guests with VirtualBox:

BIOS screen SYSTEM SETUP (1/3) → OTHERS
Set "Virtualization Technology" to "Enabled".

2.2 OpenSolaris Configuration

In this example OpenSolaris 2009.06 build 111 has been installed on the laptop.

For generic information on how to install OpenSolaris 2009.06, see the official guide at http://dlc.sun.com/osol/docs/content/2009.06/getstart/index.html.

The following configuration choices will be used as an example:

• Hostname: vorlon
• User: scdemo

2.2.1 Network Configuration

By default OpenSolaris enables the Network Auto-Magic (NWAM) service.

Since NWAM is currently designed to use only one active NIC at a time (and actively unconfigures all other existing NICs), the following steps are required to disable NWAM and set up a static networking configuration. The diagram below shows an overview of the target network setup:


The following IP addresses will be used:

IP Address    Alias          Comment
10.0.2.100    vorlon-int     vnic11
10.0.2.121    s10-sc32-1     e1000g0 / vnic12
10.0.2.122    s10-sc32-2     e1000g0 / vnic13
10.0.2.130    s10-sc32-lh1
10.0.2.131    s10-sc32-lh2
10.0.2.140    zc1-z1
10.0.2.141    zc1-z2
10.0.2.142    zc2-z1
10.0.2.143    zc2-z2


[Figure: Overview of the target network setup. The laptop vorlon (OpenSolaris 2009.06) creates etherstub1 with vnic11 (host address vorlon-int), vnic12 and vnic13, and etherstub2 with vnic21 and vnic22; NAT on the physical e1000g0 connects the internal network to the outside. VirtualBox guest s10-sc32-1 (Solaris 10 05/09, Update 7) bridges its e1000g0 to vnic12 and its e1000g1 to vnic21; guest s10-sc32-2 bridges its e1000g0 to vnic13 and its e1000g1 to vnic22. The clprivnet0 interfaces of both guests run over etherstub2.]


Disable the NWAM service:

vorlon# svcadm disable nwam

Create the virtual network:

vorlon# dladm create-etherstub etherstub1
vorlon# dladm create-vnic -l etherstub1 vnic11
vorlon# dladm create-vnic -l etherstub1 vnic12
vorlon# dladm create-vnic -l etherstub1 vnic13
vorlon# dladm create-etherstub etherstub2
vorlon# dladm create-vnic -l etherstub2 vnic21
vorlon# dladm create-vnic -l etherstub2 vnic22

Add the IP addresses and aliases to /etc/inet/hosts:

vorlon# vi /etc/inet/hosts
::1 vorlon vorlon.local localhost loghost
127.0.0.1 vorlon.local localhost loghost
#
# Internal network for VirtualBox
10.0.2.100 vorlon-int
10.0.2.121 s10-sc32-1
10.0.2.122 s10-sc32-2
10.0.2.130 s10-sc32-lh1
10.0.2.131 s10-sc32-lh2
10.0.2.140 zc1-z1
10.0.2.141 zc1-z2
10.0.2.142 zc2-z1
10.0.2.143 zc2-z2

Add the default netmasks for the used subnets to /etc/inet/netmasks:

vorlon# vi /etc/inet/netmasks
10.0.1.0 255.255.255.0
10.0.2.0 255.255.255.0

Configure the internal host IP address used to reach the VirtualBox guest network:

vorlon# vi /etc/hostname.vnic11
vorlon-int

Always plumb the vnics used by the VirtualBox guests when booting:

vorlon# touch /etc/hostname.vnic12 /etc/hostname.vnic13 /etc/hostname.vnic21 /etc/hostname.vnic22

If you want the VirtualBox guests to be able to reach the external network connected to either e1000g0 or iwh0, then set up ipfilter to perform Network Address Translation (NAT) for the internal virtual network:


vorlon# vi /etc/ipf/ipf.conf
pass in all
pass out all

vorlon# vi /etc/ipf/ipnat.conf
map e1000g0 10.0.2.0/24 -> 0/32 portmap tcp/udp auto
map e1000g0 10.0.2.0/24 -> 0/32
map iwh0 10.0.2.0/24 -> 0/32 portmap tcp/udp auto
map iwh0 10.0.2.0/24 -> 0/32

If you want to make, for example, the Tomcat URL configured later in section 4.9 accessible from outside of the host's external network, add the following line to /etc/ipf/ipnat.conf:

rdr e1000g0 0.0.0.0/0 port 8080 -> 10.0.2.130 port 8080 tcp

Configure the public network on e1000g0 depending on your individual setup.

The following example assumes a static IP configuration:

vorlon# vi /etc/hostname.e1000g0
10.0.1.42

vorlon# vi /etc/defaultrouter
10.0.1.1

vorlon# vi /etc/resolv.conf
nameserver 10.0.1.1

vorlon# vi /etc/nsswitch.conf
=> add dns to the hosts keyword:

hosts: files dns

Enable the static networking configuration:

vorlon# svcadm enable svc:/network/physical:default

Enable the service for ipfilter:

vorlon# svcadm enable svc:/network/ipfilter:default

Enable IPv4 forwarding:

vorlon# routeadm -u -e ipv4-forwarding
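As a quick sanity check (an addition to the original steps), the loaded filter and NAT rules can be listed with ipfstat(1M) and ipnat(1M):

vorlon# ipfstat -io
vorlon# ipnat -l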

2.2.2 Filesystem Configuration

Create some additional file systems for:
• crash dumps created for the host system (/var/crash)
• downloads of various files (/data)
• VirtualBox images (/VirtualBox-Images)

vorlon# zfs create -o mountpoint=/var/crash -o compression=on rpool/crash
vorlon# mkdir /var/crash/vorlon
vorlon# zfs create -o mountpoint=/data rpool/data
vorlon# zfs create -o mountpoint=/VirtualBox-Images rpool/vbox-images
vorlon# chown scdemo:staff /data /VirtualBox-Images

2.2.3 COMSTAR / iSCSI Target Configuration

If you want to be able to practice with Solaris Cluster 3.2, it will be necessary to provide some shared storage to the cluster nodes running as VirtualBox guests.

Shared storage will be used for:
• the HA ZFS failover zpool for application data
• the quorum device using the software quorum feature

The easiest way to achieve shared storage between VirtualBox guests is to configure one or more iSCSI targets on the host system and to configure the Solaris instances running inside the VirtualBox guests as iSCSI initiators. Section 3.1.1 provides a diagram of the storage configuration used in this example.

First install the required packages for COMSTAR / iSCSI:

vorlon# pkg install SUNWiscsi SUNWiscsit SUNWstmf
vorlon# init 6

Configure a ZFS volume, which will then be exported as an iSCSI target. Note that this example uses a volume of only 2 GB – feel free to increase it based on your needs and available disk space:

vorlon# zfs create -V 2gb rpool/iscsi-t1

vorlon# svcadm disable svc:/network/iscsi_initiator:default
vorlon# svcadm enable stmf
vorlon# svcadm enable target

vorlon# itadm create-target
Target iqn.1986-03.com.sun:02:51720f58-cf97-eca4-c86e-9591ed87861c successfully created
vorlon# sbdadm create-lu /dev/zvol/rdsk/rpool/iscsi-t1

Created the following LU:

              GUID                  DATA SIZE    SOURCE
--------------------------------  ------------  ----------------
600144f0000827bf93574ac359b20001  2147418112    /dev/zvol/rdsk/rpool/iscsi-t1

vorlon# stmfadm add-view 600144f0000827bf93574ac359b20001
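To verify the target and logical unit (a convenience check, not part of the original procedure), list them on the host:

vorlon# itadm list-target -v
vorlon# sbdadm list-lu
vorlon# stmfadm list-view -l 600144f0000827bf93574ac359b20001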


In a similar way, more iSCSI targets can be configured if required.

2.3 Install VirtualBox

Download VirtualBox from http://www.virtualbox.org/wiki/Downloads – select the archive for Solaris and OpenSolaris host on x86/amd64. Consult the VirtualBox User Guide for the complete installation instructions.

In this white paper, version 3.0.8 is used.

vorlon# pkgadd -G -d VirtualBoxKern-3.0.8-SunOS-r53138.pkg
vorlon# pkgadd -G -d VirtualBox-3.0.8-SunOS-r53138.pkg
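As a quick check that the installation succeeded (an added step), query the version, which should report 3.0.8:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage --version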

2.4 Install rdesktop

VirtualBox can start a guest with a VRDP server attached, so the guest console can be accessed over the network. rdesktop is an RDP client that can connect to the VRDP server which VirtualBox starts for the guest.

vorlon# pkg install SUNWrdesktop

2.5 Download Solaris 10 05/09 (Update 7) ISO image

You can download the ISO image from https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=Sol10-U7-SP-x86-FULL-DVD-G-F@CDS-CDS_SMI. The following example will assume it is available as /data/isos/Solaris10/Update7/x86-ga/sol-10-u7-ga-x86-dvd.iso.

Note that until CR 6888193 is fixed, do not try the specific configuration described in this white paper with Solaris 10 Update 8 or newer, since it will not work.

2.6 Download Solaris Cluster 3.2 01/09 archive

You can download the zip archive from http://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/VerifyItem-Start/suncluster_3_2u2-ga-solaris-x86.zip. The following example will assume it is available as /data/SolarisCluster/3.2U2/x86-ga/suncluster_3_2u2-ga-solaris-x86.zip.

For HA MySQL you will need Patch 126033-07 or newer. It contains the necessary changes to run that agent in zone clusters. If you have a sunsolve account, download it from http://sunsolve.sun.com/pdownload.do?target=126033-09&method=h and make it available as /data/SolarisCluster/126033-09.zip.

For HA Tomcat you will need Patch 126072-02 or newer. It contains the necessary changes to run that agent in zone clusters. If you have a sunsolve account, download it from http://sunsolve.sun.com/pdownload.do?target=126072-02&method=h and make it available as /data/SolarisCluster/126072-02.zip.


3 VirtualBox Configuration

3.1 VirtualBox Guest Configuration

The following diagram describes the desired disk configuration:


[Figure: Desired disk configuration. Each VirtualBox guest boots from its own VDI image on the laptop vorlon (S10-U7-SC-32U2-1.vdi and S10-U7-SC-32U2-2.vdi), visible inside the guests as c0d0 and holding the rpool. The host exports the ZFS volume rpool/iscsi-t1 as an iSCSI target; the iSCSI initiator in each guest sees it as c3t2d0, which is used for the zpool "services" and as quorum device d1.]


3.1.1 Virtual Disk Configuration

Create the boot disks for the two guests, size 30 GB (= 30720 MB, dynamically expanding image):
• s10-sc32-1 will use S10-U7-SC-32U2-1.vdi
• s10-sc32-2 will use S10-U7-SC-32U2-2.vdi

scdemo@vorlon$ /opt/VirtualBox/VBoxManage createhd --filename /VirtualBox-Images/S10-U7-SC-32U2-1.vdi --size 30720 --format VDI --variant Standard --remember
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 641be421-a838-4ac2-9ace-083aa1775f99

scdemo@vorlon$ /opt/VirtualBox/VBoxManage createhd --filename /VirtualBox-Images/S10-U7-SC-32U2-2.vdi --size 30720 --format VDI --variant Standard --remember
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 34a938d9-9e65-4253-887a-2948d126deef
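As an added check, the registered disk images and their UUIDs can be listed:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage list hdds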

3.1.2 Virtual Machine Configuration

Determine the MAC addresses used by the vnics configured in section 2.2.1:

scdemo@vorlon$ dladm show-vnic
LINK    OVER        SPEED  MACADDRESS       MACADDRTYPE  VID
vnic11  etherstub1  0      2:8:20:fa:bf:c   random       0
vnic12  etherstub1  0      2:8:20:d5:47:9d  random       0
vnic13  etherstub1  0      2:8:20:e2:99:94  random       0
vnic21  etherstub2  0      2:8:20:3a:34:a3  random       0
vnic22  etherstub2  0      2:8:20:d3:bf:1a  random       0

The following shows which vnic is used by which VirtualBox guest:

VirtualBox Guest Name   VNIC used   MAC address
S10-U7-SC-32U2-1        vnic12      020820D5479D
                        vnic21      0208203A34A3
S10-U7-SC-32U2-2        vnic13      020820E29994
                        vnic22      020820D3BF1A

It is critical that the MAC address configured for the VirtualBox guest exactly matches the MAC address configured for the corresponding vnic; otherwise network communication will not work.

Configure the virtual machines:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage createvm --name S10-U7-SC-32U2-1 --ostype Solaris_64 --register
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

Virtual machine 'S10-U7-SC-32U2-1' is created and registered.
UUID: 44b912d0-5e3d-4063-9db4-47b3f5575701
Settings file: '/export/home/scdemo/.VirtualBox/Machines/S10-U7-SC-32U2-1/S10-U7-SC-32U2-1.xml'

scdemo@vorlon$ /opt/VirtualBox/VBoxManage modifyvm S10-U7-SC-32U2-1 --memory 1280 -hda /VirtualBox-Images/S10-U7-SC-32U2-1.vdi --boot1 disk --boot2 dvd --dvd /data/isos/Solaris10/Update7/x86-ga/sol-10-u7-ga-x86-dvd.iso --nic1 bridged --nictype1 82540EM --cableconnected1 on --bridgeadapter1 vnic12 --macaddress1 020820D5479D --nic2 bridged --nictype2 82540EM --cableconnected2 on --bridgeadapter2 vnic21 --macaddress2 0208203A34A3 --audio solaudio --audiocontroller ac97 --vrdp on --vrdpport 3390
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

scdemo@vorlon$ /opt/VirtualBox/VBoxManage createvm --name S10-U7-SC-32U2-2 --ostype Solaris_64 --register
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

Virtual machine 'S10-U7-SC-32U2-2' is created and registered.
UUID: ce23d951-832b-4d50-9707-495c7ce0d30b
Settings file: '/export/home/scdemo/.VirtualBox/Machines/S10-U7-SC-32U2-2/S10-U7-SC-32U2-2.xml'

scdemo@vorlon$ /opt/VirtualBox/VBoxManage modifyvm S10-U7-SC-32U2-2 --memory 1280 -hda /VirtualBox-Images/S10-U7-SC-32U2-2.vdi --boot1 disk --boot2 dvd --dvd /data/isos/Solaris10/Update7/x86-ga/sol-10-u7-ga-x86-dvd.iso --nic1 bridged --nictype1 82540EM --cableconnected1 on --bridgeadapter1 vnic13 --macaddress1 020820E29994 --nic2 bridged --nictype2 82540EM --cableconnected2 on --bridgeadapter2 vnic22 --macaddress2 020820D3BF1A --audio solaudio --audiocontroller ac97 --vrdp on --vrdpport 3391
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.
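As an added check, review the resulting VM settings (memory, NICs, MAC addresses, VRDP port) before the first boot:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage showvminfo S10-U7-SC-32U2-1
scdemo@vorlon$ /opt/VirtualBox/VBoxManage showvminfo S10-U7-SC-32U2-2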


3.2 VirtualBox Guest Solaris Configuration

Both VirtualBox guest systems need to be installed with Solaris 10 05/09 (Update 7).

For generic information on how to install Solaris 10 05/09 (Update 7) see the official guides at http://docs.sun.com/app/docs/coll/1236.10?l=en.

In section 3.1.2 the corresponding ISO image has been configured for the guests.

3.2.1 First Guest Installation (S10-U7-SC-32U2-1)

Start the virtual machine while on a desktop session on the host:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage startvm S10-U7-SC-32U2-1

This will start the console for S10-U7-SC-32U2-1 within the VirtualBox GUI. Perform the following steps (rough guidance on non-default selections):

• Select Installer: -> 3
• Keyboard Layout: US-English
• Language: English
• Networked: Yes
• Network Interface: e1000g0
• Use DHCP: No
• Hostname: s10-sc32-1
• IP Address: 10.0.2.121
• Part of Subnet: Yes
• Netmask: 255.255.255.0
• Enable IPv6: No
• Configure Kerberos: No
• Nameservice: None
• NFSv4 Domain Config: Use the NFSv4 domain derived by the system
• Timezone: <correct timezone>
• Time: <correct time>
• Root Password: <password>
• Remote services enabled: No
• Standard Installation
• Geographic Region: North America (or the region of your choice)
• Default locale: en_US_ISO8859-15 (or the locale of your choice)
• Additional Products: None
• Filesystem: ZFS
• Solaris software to install: Entire Distribution
• Disk device: c0d0
• Select for swap: 1024, rest leave default values

The next step is to configure the static networking for s10-sc32-1. After the reboot, log in as user root and perform the following steps in a terminal window:

s10-sc32-1 # vi /etc/inet/hosts
::1 localhost loghost
127.0.0.1 localhost loghost
#
# Internal network for VirtualBox
10.0.2.100 vorlon-int
10.0.2.121 s10-sc32-1 s10-sc32-1.local
10.0.2.122 s10-sc32-2
10.0.2.130 s10-sc32-lh1
10.0.2.131 s10-sc32-lh2
10.0.2.140 zc1-z1
10.0.2.141 zc1-z2
10.0.2.142 zc2-z1
10.0.2.143 zc2-z2

s10-sc32-1 # vi /etc/inet/netmasks
10.0.2.0 255.255.255.0

s10-sc32-1 # vi /etc/hostname.e1000g0
s10-sc32-1

s10-sc32-1 # vi /etc/defaultrouter
vorlon-int

In case you have the host system connected to external networking, configure a nameservice such as DNS:

s10-sc32-1 # vi /etc/resolv.conf
nameserver <nameserver-ip>

s10-sc32-1 # vi /etc/nsswitch.conf
=> add dns to the hosts keyword:

hosts: files dns

If you do not want the guest system to run the graphical login, in order to conserve some main memory, log out from the GNOME session, log in through the text console as user root, and disable the service:

s10-sc32-1 # svcadm disable svc:/application/graphical-login/gdm:default

In case you want to allow remote ssh access for the root user (assumed later):

s10-sc32-1 # vi /etc/ssh/sshd_config
=> change the PermitRootLogin setting from no to yes:

PermitRootLogin yes

s10-sc32-1 # svcadm restart ssh

Since the host system runs two VirtualBox guests at the same time, it is possible under load that a guest Solaris 10 system sends a lot of the following messages to syslog:

<date> <nodename> genunix: [ID 313806 kern.notice] NOTICE: pm_tick delay of 3058 ms exceeds 2147 ms

3 VirtualBox Configuration

Practicing Solaris Cluster usingVirtualBox

Combining technologies to work

Page 15: Practicing Solaris Cluster using VirtualBox - amigagereazy.amigager.de/...PracticingSolarisClusterUsingVirtualBox-extern.pdf · Practicing Solaris Cluster using VirtualBox Example

Page 15 / 42

This message is also sent to the system console and can slow down the whole system considerably. To prevent that, make the following modification to the syslog.conf file:

s10-sc32-1 # cp -p /etc/syslog.conf /etc/syslog.conf.orig
s10-sc32-1 # vi /etc/syslog.conf
--- syslog.conf.orig    Tue Mar 17 18:41:20 2009
+++ syslog.conf         Fri Oct  2 20:35:44 2009
@@ -9,8 +9,8 @@
 # that match m4 reserved words. Also, within ifdef's, arguments
 # containing commas must be quoted.
 #
-*.err;kern.notice;auth.notice                  /dev/sysmsg
-*.err;kern.debug;daemon.notice;mail.crit       /var/adm/messages
+*.err;kern.warning;auth.notice                 /dev/sysmsg
+*.err;kern.debug;daemon.warning;mail.crit      /var/adm/messages
 
 *.alert;kern.err;daemon.err                    operator
 *.alert                                        root

Note that this will cause all daemon.notice messages to no longer be sent to the console or /var/adm/messages.

Shut down the guest:

s10-sc32-1 # init 5

Remove the Solaris ISO image from future use:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage modifyvm S10-U7-SC-32U2-1 --dvd none
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

3.2.2 Second Guest Installation (S10-U7-SC-32U2-2)

Start the virtual machine while on a desktop session on the host:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage startvm S10-U7-SC-32U2-2

This will start the console for S10-U7-SC-32U2-2 within the VirtualBox GUI. Perform the following steps (rough guidance on non-default selections):

• Select Installer: -> 3
• Keyboard Layout: US-English
• Language: English
• Networked: Yes
• Network Interface: e1000g0
• Use DHCP: No
• Hostname: s10-sc32-2
• IP Address: 10.0.2.122
• Part of Subnet: Yes
• Netmask: 255.255.255.0
• Enable IPv6: No
• Configure Kerberos: No
• Nameservice: None
• NFSv4 Domain Config: Use the NFSv4 domain derived by the system
• Timezone: <correct timezone>
• Time: <correct time>
• Root Password: <password>
• Remote services enabled: No
• Standard Installation
• Geographic Region: North America (or the region of your choice)
• Default locale: en_US_ISO8859-15 (or the locale of your choice)
• Additional Products: None
• Filesystem: ZFS
• Solaris software to install: Entire Distribution
• Disk device: c0d0
• Select for swap: 1024, rest leave default values

The next step is to configure the static networking for s10-sc32-2. After the reboot, log in as user root and perform the following steps in a terminal window:

s10-sc32-2 # vi /etc/inet/hosts
::1 localhost loghost
127.0.0.1 localhost loghost
#
# Internal network for VirtualBox
10.0.2.100 vorlon-int
10.0.2.121 s10-sc32-1
10.0.2.122 s10-sc32-2 s10-sc32-2.local
10.0.2.130 s10-sc32-lh1
10.0.2.131 s10-sc32-lh2
10.0.2.140 zc1-z1
10.0.2.141 zc1-z2
10.0.2.142 zc2-z1
10.0.2.143 zc2-z2

s10-sc32-2 # vi /etc/inet/netmasks
10.0.2.0 255.255.255.0

s10-sc32-2 # vi /etc/hostname.e1000g0
s10-sc32-2

s10-sc32-2 # vi /etc/defaultrouter
vorlon-int

In case you have the host system connected to external networking, configure a nameservice such as DNS:


s10-sc32-2 # vi /etc/resolv.conf
nameserver <nameserver-ip>

s10-sc32-2 # vi /etc/nsswitch.conf
=> add dns to the hosts keyword:

hosts: files dns

If you do not want the guest system to run the graphical login, in order to conserve some main memory, log out from the GNOME session, log in through the text console as user root, and disable the service:

s10-sc32-2 # svcadm disable svc:/application/graphical-login/gdm:default

In case you want to allow remote ssh access for the root user (assumed later):

s10-sc32-2 # vi /etc/ssh/sshd_config
=> change the PermitRootLogin setting from no to yes:

PermitRootLogin yes

s10-sc32-2 # svcadm restart ssh

Since the host system runs two VirtualBox guests at the same time, it is possible under load that a guest Solaris 10 system sends a lot of the following messages to syslog:

<date> <nodename> genunix: [ID 313806 kern.notice] NOTICE: pm_tick delay of 3058 ms exceeds 2147 ms

This message is also sent to the system console and can slow down the whole system considerably. To prevent that, make the following modification to the syslog.conf file:

s10-sc32-2 # cp -p /etc/syslog.conf /etc/syslog.conf.orig
s10-sc32-2 # vi /etc/syslog.conf
--- syslog.conf.orig    Tue Mar 17 18:41:20 2009
+++ syslog.conf         Fri Oct  2 20:35:44 2009
@@ -9,8 +9,8 @@
 # that match m4 reserved words. Also, within ifdef's, arguments
 # containing commas must be quoted.
 #
-*.err;kern.notice;auth.notice                  /dev/sysmsg
-*.err;kern.debug;daemon.notice;mail.crit       /var/adm/messages
+*.err;kern.warning;auth.notice                 /dev/sysmsg
+*.err;kern.debug;daemon.warning;mail.crit      /var/adm/messages
 
 *.alert;kern.err;daemon.err                    operator
 *.alert                                        root

Note that this will cause all daemon.notice messages to no longer be sent to the console or /var/adm/messages.

Shut down the guest:


s10-sc32-2 # init 5

Remove the Solaris ISO image from future use:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage modifyvm S10-U7-SC-32U2-2 --dvd none
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

3.3 Getting Crash dumps from Solaris guests

Sometimes it is necessary for debugging purposes to create a crash dump of a Solaris guest – either because it is hung and there is no other way to interact with it, or because a specific state of the system is of interest for further analysis.

3.3.1 Booting Solaris with kernel debugger enabled

The first step is to boot the Solaris guest with the kernel debugger enabled. The following procedure enables the debugger for a single boot:

o when the grub menu comes up, hit 'e'
o go to the kernel$ line and hit 'e' to edit it
o hit backspace/delete to remove ",console=graphics"
o add " -k" to the line
o the line should now look like: kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS -k
o hit return to enter the changes and go back
o hit 'b' to boot

If you want to always boot with the kernel debugger enabled, the above change needs to be made to the corresponding entry in the /rpool/boot/grub/menu.lst file. For example, add the following entry:

# vi /rpool/boot/grub/menu.lst
title Solaris 10 5/09 s10x_u7wos_08 X86 debug
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS -k
module /platform/i86pc/boot_archive

3.3.2 How to break into the kernel debugger

On a physical x86 system, the default key combination to break into the kernel debugger is F1-a. This does not work when Solaris is running as a VirtualBox guest. You can either change the default abort key sequence using the kbd(1) command, or use the following in order to send F1-a to a VirtualBox guest:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage controlvm <solarisVMname> keyboardputscancode 3b 1e 9e bb
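The four scancodes emulate pressing and releasing the F1-a combination: 3b and 1e are the make (press) codes for F1 and 'a', while 9e and bb are the corresponding break (release) codes.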


3.3.3 Forcing a crash dump

Once you have entered the kernel debugger prompt, the following will cause a crash dump to be written to the dump device:

> $<systemdump

See dumpadm(1M) for details on how to configure a dump device and savecore directory.

After the system has rebooted, either the svc:/system/dumpadm:default service will automatically save the crash dump into the configured savecore directory, or you need to run savecore(1M) manually if the dumpadm service is disabled.

If you want to save a crash dump of the live running Solaris system without breaking into the kernel debugger or requiring a reboot, run within that system:

# savecore -L

If you want to force a crash dump before rebooting the system, run within that system:

# reboot -d
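A minimal sketch of a matching dump configuration, assuming the dedicated ZFS dump volume created by the installer (adjust the device to your setup; see dumpadm(1M)):

# dumpadm -d /dev/zvol/dsk/rpool/dump -s /var/crash/`hostname`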

3.3.4 Crash dump analysis with Solaris CAT

While it is possible to perform analysis of crash dumps using mdb(1), the Solaris Crash Analysis Tool (CAT) comes with additional commands and macros, which are useful to get a quick overview of the crash cause.

Solaris CAT is available through http://blogs.sun.com/solariscat/, which contains the download link to the most current version.

After installation of the corresponding SUNWscat package you can read the documentation at file:///opt/SUNWscat/docs/index.html.


4 Solaris Cluster Configuration

The following diagram shows the desired Solaris Cluster configuration:


[Figure: Desired Solaris Cluster configuration. The two-node physical cluster (s10-sc32-1, s10-sc32-2) provides the zpool "services" and hosts two zone clusters. Zone cluster zc1 (zones zc1-z1 and zc1-z2) runs resource group service-rg with the resources service-lh-rs, service-hasp-rs, mysql-rs and tomcat-rs. Zone cluster zc2 (zones zc2-z1 and zc2-z2) runs the resource groups apache-rg (apache-rs) and shared-ip-rg (shared-ip-rs).]


4.1 Solaris Cluster Installation

Start both nodes. If you don't want the console windows open all the time, start the VirtualBox guests using the VRDP protocol.

The following ports were configured for the guests:

S10-U7-SC-32U2-1 3390

S10-U7-SC-32U2-2 3391

scdemo@vorlon$ /opt/VirtualBox/VBoxManage startvm S10-U7-SC-32U2-1 --type vrdp

scdemo@vorlon$ /opt/VirtualBox/VBoxManage startvm S10-U7-SC-32U2-2 --type vrdp

The console can be reached via the rdesktop application.

Console for s10-sc32-1:

scdemo@vorlon$ rdesktop localhost:3390

Console for s10-sc32-2:

scdemo@vorlon$ rdesktop localhost:3391

4.1.1 First node cluster installation (s10-sc32-1)

Copy the Solaris Cluster archive to the cluster node, unpack the archive and start the installer. In this case, X11 forwarding through ssh is used:

scdemo@vorlon$ scp /data/SolarisCluster/3.2U2/x86-ga/suncluster_3_2u2-ga-solaris-x86.zip root@s10-sc32-1:/var/tmp
scdemo@vorlon$ ssh -g -X s10-sc32-1 -l root
s10-sc32-1 # cd /var/tmp
s10-sc32-1 # mkdir SC
s10-sc32-1 # cd SC
s10-sc32-1 # unzip ../suncluster_3_2u2-ga-solaris-x86.zip
s10-sc32-1 # rm ../suncluster_3_2u2-ga-solaris-x86.zip
s10-sc32-1 # cd Solaris_x86
s10-sc32-1 # ./installer

Follow instructions on the screen to install Sun Cluster framework software and data services on the node.

Select the following for installation:
• Sun Cluster 3.2 01/09
• Sun Cluster Agents 3.2 01/09
• All Shared Components


Choose “Configure Later” when prompted whether to configure Sun Cluster framework software. After installation is finished, you can view any available installation log.

Add /usr/cluster/bin to $PATH and /usr/cluster/man to $MANPATH within $HOME/.profile for user root.

4.1.2 First node cluster configuration (s10-sc32-1)

Allow RPC communication for external systems:

s10-sc32-1 # svccfg -s svc:/network/rpc/bind setprop config/local_only = false
s10-sc32-1 # svcadm refresh svc:/network/rpc/bind

Enable remote access to webconsole:

s10-sc32-1 # svccfg -s system/webconsole setprop options/tcp_listen = true
s10-sc32-1 # svcadm refresh system/webconsole

Install the first cluster node:
• the cluster name is set to s10-sc32-demo
• the lofi option is used for global devices
• the nodes s10-sc32-1 and s10-sc32-2 are part of the cluster
• the default IP subnet of 172.16.0.0 is used for the cluster interconnect. If multiple clusters share the same public IP subnet, you need to make sure to configure a unique interconnect IP subnet for each cluster.
• e1000g1 is the network interface used for the cluster interconnect, which is attached to the switch etherstub2
• global fencing is disabled

s10-sc32-1 # /usr/cluster/bin/scinstall \
  -i \
  -C s10-sc32-demo \
  -F \
  -G lofi \
  -T node=s10-sc32-1,node=s10-sc32-2,authtype=sys \
  -w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=64,maxprivatenets=10,numvirtualclusters=12 \
  -A trtype=dlpi,name=e1000g1 \
  -B type=switch,name=etherstub2 \
  -m endpoint=:e1000g1,endpoint=etherstub2 \
  -e global_fencing=nofencing

Disable MPxIO for iSCSI:

s10-sc32-1 # vi /kernel/drv/iscsi.conf
=> change the mpxio-disable setting from no to yes:

mpxio-disable="yes";


Reboot the node:

s10-sc32-1 # init 6

4.1.3 Second node cluster installation (s10-sc32-2)

Copy the Solaris Cluster archive to the cluster node, unpack the archive and start the installer. In this case, X11 forwarding through ssh is used:

scdemo@vorlon$ scp /data/SolarisCluster/3.2U2/x86-ga/suncluster_3_2u2-ga-solaris-x86.zip root@s10-sc32-2:/var/tmp
scdemo@vorlon$ ssh -g -X s10-sc32-2 -l root
s10-sc32-2 # cd /var/tmp
s10-sc32-2 # mkdir SC
s10-sc32-2 # cd SC
s10-sc32-2 # unzip ../suncluster_3_2u2-ga-solaris-x86.zip
s10-sc32-2 # rm ../suncluster_3_2u2-ga-solaris-x86.zip
s10-sc32-2 # cd Solaris_x86
s10-sc32-2 # ./installer

Follow instructions on the screen to install Sun Cluster framework software and data services on the node.

Select the following for installation:
• Sun Cluster 3.2 01/09
• Sun Cluster Agents 3.2 01/09
• All Shared Components

Choose “Configure Later” when prompted whether to configure Sun Cluster framework software. After installation is finished, you can view any available installation log.

Add /usr/cluster/bin to $PATH and /usr/cluster/man to $MANPATH within $HOME/.profile for the main user and user root.

4.1.4 Second node cluster configuration (s10-sc32-2)

Allow RPC communication for external systems:

s10-sc32-2 # svccfg -s svc:/network/rpc/bind setprop config/local_only = false
s10-sc32-2 # svcadm refresh svc:/network/rpc/bind

Enable remote access to webconsole:

s10-sc32-2 # svccfg -s system/webconsole setprop options/tcp_listen = true
s10-sc32-2 # svcadm refresh system/webconsole

Add the second node to the cluster:


• the cluster name to join is s10-sc32-demo
• the sponsoring node is s10-sc32-1
• the lofi option is used for global devices
• e1000g1 is the network interface used for the cluster interconnect, which is attached to the switch etherstub2

s10-sc32-2 # /usr/cluster/bin/scinstall \
  -i \
  -C s10-sc32-demo \
  -N s10-sc32-1 \
  -G lofi \
  -A trtype=dlpi,name=e1000g1 \
  -m endpoint=:e1000g1,endpoint=etherstub2

Disable MPxIO for iSCSI:

s10-sc32-2 # vi /kernel/drv/iscsi.conf
=> change the mpxio-disable setting from no to yes:

mpxio-disable="yes";

Reboot the node:

s10-sc32-2 # init 6

4.2 iSCSI Initiator Configuration

Configure the iSCSI initiator on both nodes to use the iSCSI target configured in section 2.2.3:

both-nodes# iscsiadm modify discovery -s enable
both-nodes# iscsiadm add static-config iqn.1986-03.com.sun:02:51720f58-cf97-eca4-c86e-9591ed87861c,10.0.2.100
both-nodes# devfsadm -i iscsi
both-nodes# cldev refresh
both-nodes# cldev populate
s10-sc32-1 # cldev list -v
DID Device  Full Device Path
----------  ----------------
d1          s10-sc32-2:/dev/rdsk/c3t2d0
d1          s10-sc32-1:/dev/rdsk/c3t2d0
d2          s10-sc32-1:/dev/rdsk/c0d0
d3          s10-sc32-1:/dev/rdsk/c1t0d0
d4          s10-sc32-2:/dev/rdsk/c1t0d0
d5          s10-sc32-2:/dev/rdsk/c0d0

4.3 ZFS zpool Configuration for Data

If you want to use the storage device that serves as quorum device as part of a ZFS zpool, it is important to create the zpool first, before configuring the device as quorum device. When ZFS adds a device to a zpool, it writes an EFI label to it, which would overwrite existing quorum device information.


In this example we use the iSCSI target from section 2.2.3 both as part of the zpool and as the quorum device.

Create the zpool first:

s10-sc32-1 # zpool create services /dev/rdsk/c3t2d0
s10-sc32-1 # zpool export services
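As an added check, running zpool import without arguments on the other node should list the exported pool "services" as available for import, confirming that the shared iSCSI LUN is visible there:

s10-sc32-2 # zpool import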

4.4 Software Quorum Configuration

The software quorum feature will automatically be used if fencing for the device has been disabled. In this example we configure the iSCSI target from section 2.2.3 as a software quorum device, since COMSTAR on OpenSolaris 2009.06 does not yet support SCSI-3 persistent group reservations for iSCSI targets:

s10-sc32-1 # cldevice set -p default_fencing=nofencing d1
s10-sc32-1 # clquorum add d1
s10-sc32-1 # clquorum reset
s10-sc32-1 # claccess deny-all
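An added check: the quorum devices and their vote counts can be reviewed with:

s10-sc32-1 # clquorum status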

As an alternative, you can use a quorum server as the quorum device. The procedure is explained at http://docs.sun.com/app/docs/doc/820-4677/cihecfab?l=en&a=view.

For the laptop configuration it would be possible to configure the quorum server on the host vorlon.

4.5 IPsec Configuration for the cluster interconnect

This step is optional and uses a new feature of Solaris Cluster 3.2 01/09: it is now possible to configure IPsec on the cluster interconnect in order to protect the private TCP/IP traffic by encrypting the IP packets. Note that the cluster heartbeat packets are sent at the DLPI level below IP, which means they are not encrypted.

The following steps configure IPsec by using the Internet Key Exchange (IKE) method.

Prepare /etc/inet/ipsecinit.conf on both nodes:

both-nodes# cd /etc/inet
both-nodes# cp ipsecinit.sample ipsecinit.conf

s10-sc32-1 # ifconfig e1000g1
e1000g1: flags=201008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4,CoS> mtu 1500 index 3
        inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
        ether 2:8:20:3a:34:a3

s10-sc32-1 # ifconfig clprivnet0
clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 4
        inet 172.16.4.1 netmask fffffe00 broadcast 172.16.5.255
        ether 0:0:0:0:0:1

s10-sc32-2 # ifconfig e1000g1
e1000g1: flags=201008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4,CoS> mtu 1500 index 3
        inet 172.16.0.130 netmask ffffff80 broadcast 172.16.0.255
        ether 2:8:20:d3:bf:1a

s10-sc32-2 # ifconfig clprivnet0
clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 4
        inet 172.16.4.2 netmask fffffe00 broadcast 172.16.5.255
        ether 0:0:0:0:0:2

s10-sc32-1 # vi ipsecinit.conf
{laddr 172.16.0.129 raddr 172.16.0.130} ipsec {auth_algs any encr_algs any sa shared}
{laddr 172.16.4.1 raddr 172.16.4.2} ipsec {auth_algs any encr_algs any sa shared}

s10-sc32-2 # vi ipsecinit.conf
{laddr 172.16.0.130 raddr 172.16.0.129} ipsec {auth_algs any encr_algs any sa shared}
{laddr 172.16.4.2 raddr 172.16.4.1} ipsec {auth_algs any encr_algs any sa shared}

Prepare /etc/inet/ike/config on both nodes:

both-nodes# cd /etc/inet/ike
both-nodes# cp config.sample config

s10-sc32-1 # vi config
{
    label "clusternode1-priv-physical1-clusternode2-priv-physical1"
    local_addr 172.16.0.129
    remote_addr 172.16.0.130
    p1_xform { auth_method preshared oakley_group 5 auth_alg md5 encr_alg 3des }
    p2_pfs 5
    p2_idletime_secs 30
}
{
    label "clusternode1-priv-privnet0-clusternode2-priv-privnet0"
    local_addr 172.16.4.1
    remote_addr 172.16.4.2
    p1_xform { auth_method preshared oakley_group 5 auth_alg md5 encr_alg 3des }
    p2_pfs 5
    p2_idletime_secs 30
}

s10-sc32-2 # vi config
{
    label "clusternode2-priv-physical1-clusternode1-priv-physical1"
    local_addr 172.16.0.130
    remote_addr 172.16.0.129
    p1_xform { auth_method preshared oakley_group 5 auth_alg md5 encr_alg 3des }
    p2_pfs 5
    p2_idletime_secs 30
}
{
    label "clusternode2-priv-privnet0-clusternode1-priv-privnet0"
    local_addr 172.16.4.2
    remote_addr 172.16.4.1
    p1_xform { auth_method preshared oakley_group 5 auth_alg md5 encr_alg 3des }
    p2_pfs 5
    p2_idletime_secs 30
}

both-nodes# /usr/lib/inet/in.iked -c -f /etc/inet/ike/config
in.iked: Configuration file /etc/inet/ike/config syntactically checks out.

Set up entries for the pre-shared keys in /etc/inet/secret/ike.preshared on both nodes:

both-nodes# cd /etc/inet/secret

s10-sc32-1 # pktool genkey keystore=file outkey=ikekey keytype=3des keylen=192 print=y
Key Value ="329b7f792c5854dfd654674adf9220c45851dc61291c893b"

s10-sc32-1 # vi ike.preshared
{
    localidtype IP
    localid 172.16.0.129
    remoteidtype IP
    remoteid 172.16.0.130
    key 329b7f792c5854dfd654674adf9220c45851dc61291c893b
}
{
    localidtype IP
    localid 172.16.4.1
    remoteidtype IP
    remoteid 172.16.4.2
    key 329b7f792c5854dfd654674adf9220c45851dc61291c893b
}

s10-sc32-2 # vi ike.preshared
{
    localidtype IP
    localid 172.16.0.130
    remoteidtype IP
    remoteid 172.16.0.129
    key 329b7f792c5854dfd654674adf9220c45851dc61291c893b
}
{
    localidtype IP
    localid 172.16.4.2
    remoteidtype IP
    remoteid 172.16.4.1
    key 329b7f792c5854dfd654674adf9220c45851dc61291c893b
}

both-nodes# svcadm enable svc:/network/ipsec/ike:default
both-nodes# svcadm restart svc:/network/ipsec/policy:default
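To confirm the setup (an added check; see ipsecconf(1M) and ikeadm(1M)), list the active IPsec policy and, once traffic has flowed across the interconnect, the established IKE phase 1 SAs:

both-nodes# ipsecconf -l
both-nodes# ikeadm dump p1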

4.6 Zone Cluster Configuration

Create the ZFS file system for the zone root paths on each cluster node:

s10-sc32-1 # zfs create -o mountpoint=/zones rpool/zones

s10-sc32-2 # zfs create -o mountpoint=/zones rpool/zones

4.6.1 First Zone Cluster Configuration (zc1)

Create the configuration file for the first zone cluster, named zc1:

s10-sc32-1 # vi /var/tmp/zc1.txt
create
set zonepath=/zones/zc1
set brand=cluster
set enable_priv_net=true
set ip-type=shared
set autoboot=true
add node
set physical-host=s10-sc32-1
set hostname=zc1-z1
add net
set address=10.0.2.140
set physical=e1000g0
end
end
add node
set physical-host=s10-sc32-2
set hostname=zc1-z2
add net
set address=10.0.2.141
set physical=e1000g0
end
end
add net
set address=10.0.2.130
end
add dataset
set name=services
end
add sysid
set system_locale=C
set terminal=vt220
set security_policy=NONE
set name_service=NONE
set nfs4_domain=dynamic
set timezone=MET
set root_password=<crypted password string>
end
commit
exit

Configure the zone cluster zc1:

s10-sc32-1 # clzc configure -f /var/tmp/zc1.txt zc1
s10-sc32-1 # clzc verify zc1
Waiting for zone verify commands to complete on all the nodes of the zone cluster "zc1"...

Install the zone cluster zc1:

s10-sc32-1 # clzc install zc1
Waiting for zone install commands to complete on all the nodes of the zone cluster "zc1"...

Note that this step can take a while, since it populates the zone root path with package content. Output is sent to the console of each node (global zone), where you can monitor the progress.

Boot the zone cluster zc1:

s10-sc32-1 # clzc boot zc1

=> on s10-sc32-1: zlogin -C zc1
=> on s10-sc32-2: zlogin -C zc1
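Once both zones are up, an added check is to confirm that the zone cluster nodes report as Online:

s10-sc32-1 # clzc status zc1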

Perform the following steps in both zones, zc1-z1 and zc1-z2:

Enable ssh login for the root user:

both-zones# vi /etc/ssh/sshd_config
=> change the PermitRootLogin setting from no to yes:

PermitRootLogin yes

both-zones# svcadm restart ssh

Add the cluster IP addresses to /etc/inet/hosts:

both-zones# vi /etc/hosts
#
10.0.2.140 zc1-z1
10.0.2.141 zc1-z2
#
# logical hosts
10.0.2.130 s10-sc32-lh1
#
# Base cluster nodes
10.0.2.121 s10-sc32-1
10.0.2.122 s10-sc32-2
#
# Internal network for VirtualBox
10.0.2.100 vorlon-int

Disable unneeded services within both zones in order to conserve some main memory:

both-zones# svcadm disable svc:/application/graphical-login/cde-login:default
both-zones# svcadm disable webconsole
both-zones# svcadm disable svc:/network/rpc/cde-calendar-manager:default
both-zones# svcadm disable svc:/network/rpc/cde-ttdbserver:tcp
both-zones# svcadm disable svc:/application/cde-printinfo:default
both-zones# svcadm disable svc:/application/font/fc-cache:default
both-zones# svcadm disable svc:/application/management/wbem:default
both-zones# svcadm disable svc:/application/font/stfsloader:default
both-zones# svcadm disable svc:/application/opengl/ogl-select:default
both-zones# svcadm disable svc:/application/x11/xfs:default
both-zones# svcadm disable svc:/application/print/ppd-cache-update:default
both-zones# svcadm disable svc:/network/smtp:sendmail
both-zones# svcadm disable svc:/application/stosreg:default
both-zones# svcadm disable svc:/application/management/seaport:default
both-zones# svcadm disable svc:/application/management/sma:default
both-zones# svcadm disable svc:/application/management/snmpdx:default
both-zones# svcadm disable svc:/application/management/dmi:default

4.6.2 Second Zone Cluster Configuration (zc2)

Create the configuration file for the second zone cluster, named zc2:

s10-sc32-1 # vi /var/tmp/zc2.txt
create
set zonepath=/zones/zc2
set brand=cluster
set enable_priv_net=true
set ip-type=shared
set autoboot=true
add node
set physical-host=s10-sc32-1
set hostname=zc2-z1
add net
set address=10.0.2.142
set physical=e1000g0
end
end
add node
set physical-host=s10-sc32-2
set hostname=zc2-z2
add net
set address=10.0.2.143
set physical=e1000g0
end
end
add net
set address=10.0.2.131
end
add sysid
set system_locale=C
set terminal=vt220
set security_policy=NONE
set name_service=NONE
set nfs4_domain=dynamic
set timezone=MET
set root_password=<crypted password string>
end
commit
exit

Configure the zone cluster zc2:

s10-sc32-1 # clzc configure -f /var/tmp/zc2.txt zc2
s10-sc32-1 # clzc verify zc2
Waiting for zone verify commands to complete on all the nodes of the zone cluster "zc2"...

Install the zone cluster zc2:

s10-sc32-1 # clzc install zc2
Waiting for zone install commands to complete on all the nodes of the zone cluster "zc2"...

Note that this step can take a while. It populates the zone root path with package content. Output is sent to the console of each node (global zone), where you can monitor the progress.

Boot the zone cluster zc2:

s10-sc32-1 # clzc boot zc2

=> on s10-sc32-1: zlogin -C zc2
=> on s10-sc32-2: zlogin -C zc2

Perform the following steps in both zones, zc2-z1 and zc2-z2:

Enable SSH login for the root user:

both-zones# vi /etc/ssh/sshd_config
=> change the PermitRootLogin setting from no to yes:

PermitRootLogin yes

both-zones# svcadm restart ssh

Add the cluster IP addresses to /etc/inet/hosts:

both-zones# vi /etc/hosts
#
10.0.2.142 zc2-z1
10.0.2.143 zc2-z2
#
# logical hosts
10.0.2.131 s10-sc32-lh2
#
# Base cluster nodes
10.0.2.121 s10-sc32-1
10.0.2.122 s10-sc32-2
#
# Internal network for VirtualBox
10.0.2.100 vorlon-int

Disable unneeded services within both zones in order to conserve some main memory:

both-zones# svcadm disable svc:/application/graphical-login/cde-login:default
both-zones# svcadm disable webconsole
both-zones# svcadm disable svc:/network/rpc/cde-calendar-manager:default
both-zones# svcadm disable svc:/network/rpc/cde-ttdbserver:tcp
both-zones# svcadm disable svc:/application/cde-printinfo:default
both-zones# svcadm disable svc:/application/font/fc-cache:default
both-zones# svcadm disable svc:/application/management/wbem:default
both-zones# svcadm disable svc:/application/font/stfsloader:default
both-zones# svcadm disable svc:/application/opengl/ogl-select:default
both-zones# svcadm disable svc:/application/x11/xfs:default
both-zones# svcadm disable svc:/application/print/ppd-cache-update:default
both-zones# svcadm disable svc:/network/smtp:sendmail
both-zones# svcadm disable svc:/application/stosreg:default
both-zones# svcadm disable svc:/application/management/seaport:default
both-zones# svcadm disable svc:/application/management/sma:default
both-zones# svcadm disable svc:/application/management/snmpdx:default
both-zones# svcadm disable svc:/application/management/dmi:default

4.7 Resource Group and HA ZFS Configuration (zc1)

Register the SUNW.gds and SUNW.HAStoragePlus resource types, then create the resource group service-rg, the resource service-hasp-rs for the zpool, and the resource service-lh-rs for the logical host on one node:

zc1-z1 # clrg create service-rg
zc1-z1 # clrt register SUNW.HAStoragePlus
zc1-z1 # clrt register SUNW.gds
zc1-z1 # clrs create -g service-rg -t HAStoragePlus -p Zpools=services service-hasp-rs
zc1-z1 # clrslh create -g service-rg -h s10-sc32-lh1 service-lh-rs
zc1-z1 # clrg online -eM service-rg
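Before layering services on top of it, it is worth confirming that the resource group is online and that the zpool is visible inside the zone. A quick check, run on the node where service-rg is online (a sketch):

zc1-z1 # clrg status service-rg
zc1-z1 # zfs list services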

4.8 HA MySQL Configuration (zc1)

This example uses the MySQL 4.0.31 package installed by default into /usr/sfw when using Solaris 10 05/09 (Update 7).

Install Patch 126033 on both nodes (s10-sc32-1 and s10-sc32-2):

vorlon# scp /data/SolarisCluster/126033-09.zip root@s10-sc32-1:/var/tmp
vorlon# scp /data/SolarisCluster/126033-09.zip root@s10-sc32-2:/var/tmp

both-nodes# cd /var/tmp
both-nodes# unzip 126033-09.zip
both-nodes# patchadd 126033-09
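To confirm the patch is installed on each node, you can check the patch list (a sketch; the reported revision may differ if a newer revision supersedes it):

both-nodes# showrev -p | grep 126033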

Configure the mysql user and group on both zones:

zc1-z1 # groupadd -g 1000 mysql
zc1-z1 # useradd -g 1000 -d /services/mysql -s /bin/ksh mysql
zc1-z2 # groupadd -g 1000 mysql
zc1-z2 # useradd -g 1000 -d /services/mysql -s /bin/ksh mysql

Create a link from /usr/sfw/sbin/mysqld to /usr/sfw/bin/mysqld on both nodes. This is required because the HA MySQL agent expects mysqld in either bin or libexec:

s10-sc32-1 # ln -s /usr/sfw/sbin/mysqld /usr/sfw/bin/mysqld
s10-sc32-2 # ln -s /usr/sfw/sbin/mysqld /usr/sfw/bin/mysqld

Configure MySQL on the node where the service-rg resource group is online:

zc1-z1 # clrg status service-rg

=== Cluster Resource Groups ===

Group Name    Node Name    Suspended    Status
----------    ---------    ---------    ------
service-rg    zc1-z1       No           Online
              zc1-z2       No           Offline

s10-sc32-1 # zfs create services/mysql
zc1-z1 # mkdir -p /services/mysql/logs
zc1-z1 # mkdir -p /services/mysql/innodb
zc1-z1 # cp /usr/sfw/share/mysql/my-small.cnf /services/mysql/my.cnf
zc1-z1 # vi /services/mysql/my.cnf
--- /usr/sfw/share/mysql/my-small.cnf  Thu Jun 12 14:10:10 2008
+++ /services/mysql/my.cnf     Wed Oct 14 18:14:17 2009
@@ -18,7 +18,7 @@
 [client]
 #password      = your_password
 port           = 3306
-socket         = /tmp/mysql.sock
+socket         = /tmp/s10-sc32-lh1.sock

 # Here follows entries for some specific programs

@@ -25,7 +25,7 @@
 # The MySQL server
 [mysqld]
 port           = 3306
-socket         = /tmp/mysql.sock
+socket         = /tmp/s10-sc32-lh1.sock
 skip-locking
 key_buffer = 16K
 max_allowed_packet = 1M
@@ -50,19 +50,19 @@
 #skip-bdb

 # Uncomment the following if you are using InnoDB tables
-#innodb_data_home_dir = /var/mysql/
-#innodb_data_file_path = ibdata1:10M:autoextend
-#innodb_log_group_home_dir = /var/mysql/
-#innodb_log_arch_dir = /var/mysql/
+innodb_data_home_dir = /services/mysql/innodb
+innodb_data_file_path = ibdata1:10M:autoextend
+innodb_log_group_home_dir = /services/mysql/innodb
+innodb_log_arch_dir = /services/mysql/innodb
 # You can set .._buffer_pool_size up to 50 - 80 %
 # of RAM but beware of setting memory usage too high
-#innodb_buffer_pool_size = 16M
-#innodb_additional_mem_pool_size = 2M
+innodb_buffer_pool_size = 16M
+innodb_additional_mem_pool_size = 2M
 # Set .._log_file_size to 25 % of buffer pool size
-#innodb_log_file_size = 5M
-#innodb_log_buffer_size = 8M
-#innodb_flush_log_at_trx_commit = 1
-#innodb_lock_wait_timeout = 50
+innodb_log_file_size = 5M
+innodb_log_buffer_size = 8M
+innodb_flush_log_at_trx_commit = 1
+innodb_lock_wait_timeout = 50

 [mysqldump]
 quick
@@ -83,3 +83,6 @@

 [mysqlhotcopy]
 interactive-timeout
+
+bind-address=s10-sc32-lh1
+

zc1-z1 # /usr/sfw/bin/mysql_install_db --datadir=/services/mysql
Preparing db table
Preparing host table
Preparing user table
Preparing func table
Preparing tables_priv table
Preparing columns_priv table
Installing all prepared tables
091014 18:29:33  /usr/sfw/sbin/mysqld: Shutdown Complete

To start mysqld at boot time you have to copy support-files/mysql.server
to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/sfw/bin/mysqladmin -u root password 'new-password'
/usr/sfw/bin/mysqladmin -u root -h zc1-z1 password 'new-password'
See the manual for more instructions.

You can start the MySQL daemon with:
/usr/sfw/bin/mysqld_safe &

You can test the MySQL daemon with the tests in the 'mysql-test' directory:
cd /usr/sfw/mysql/mysql-test; ./mysql-test-run

Please report any problems with the /usr/sfw/bin/mysqlbug script!

The latest information about MySQL is available on the web at
http://www.mysql.com
Support MySQL by buying support/licenses at http://shop.mysql.com

zc1-z1 # chown -R mysql:mysql /services/mysql

Manually test the MySQL configuration:

zc1-z1 # /usr/sfw/sbin/mysqld --defaults-file=/services/mysql/my.cnf --basedir=/usr/sfw --datadir=/services/mysql --user=mysql --pid-file=/services/mysql/mysqld.pid &
zc1-z1 # /usr/sfw/bin/mysql -S /tmp/s10-sc32-lh1.sock -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3 to server version: 4.0.31

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> exit;
Bye


Configure the MySQL admin password:

zc1-z1 # /usr/sfw/bin/mysqladmin -S /tmp/s10-sc32-lh1.sock password 'mysqladmin'

Allow the root user to access the database from both cluster nodes:

zc1-z1 # /usr/sfw/bin/mysql -S /tmp/s10-sc32-lh1.sock -uroot -p'mysqladmin'
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3 to server version: 4.0.31

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> GRANT ALL ON *.* TO 'root'@'zc1-z1' IDENTIFIED BY 'mysqladmin';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL ON *.* TO 'root'@'zc1-z2' IDENTIFIED BY 'mysqladmin';
Query OK, 0 rows affected (0.00 sec)

mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='zc1-z1';
Query OK, 1 row affected (0.02 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='zc1-z2';
Query OK, 0 rows affected (0.01 sec)
Rows matched: 1  Changed: 0  Warnings: 0

mysql> exit;
Bye
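If you want to double-check the grants before leaving the client, a query along the following lines works in MySQL 4.0 (a sketch):

mysql> SELECT User, Host, Grant_priv FROM mysql.user WHERE User='root';

It should list both zc1-z1 and zc1-z2 with Grant_priv set to Y.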

Create and set up the HA MySQL resource configuration files:

zc1-z1 # mkdir /services/mysql/cluster-config
zc1-z1 # cd /services/mysql/cluster-config
zc1-z1 # cp /opt/SUNWscmys/util/ha_mysql_config .
zc1-z1 # cp /opt/SUNWscmys/util/mysql_config .

zc1-z1 # vi ha_mysql_config
RS=mysql-rs
RG=service-rg
PORT=3306
LH=service-lh-rs
SCALABLE=
LB_POLICY=
HAS_RS=service-hasp-rs

ZONE=
ZONE_BT=
PROJECT=

BASEDIR=/usr/sfw
DATADIR=/services/mysql
MYSQLUSER=mysql
MYSQLHOST=s10-sc32-lh1
FMUSER=fmuser
FMPASS=fmuser
LOGDIR=/services/mysql/logs
CHECK=YES
NDB_CHECK=

zc1-z1 # vi mysql_config
MYSQL_BASE=/usr/sfw
MYSQL_USER=root
MYSQL_PASSWD=mysqladmin
MYSQL_HOST=s10-sc32-lh1
FMUSER=fmuser
FMPASS=fmuser
MYSQL_SOCK=/tmp/s10-sc32-lh1.sock
MYSQL_NIC_HOSTNAME="zc1-z1 zc1-z2"
MYSQL_DATADIR=/services/mysql
NBD_CHECK=

zc1-z1 # /opt/SUNWscmys/util/mysql_register -f /services/mysql/cluster-config/mysql_config

MySQL version 4 detected on 5.10

Check if the MySQL server is running and accepting connections

Add faultmonitor user (fmuser) with password (fmuser) with Process-, Select-, Reload- and Shutdown-privileges to user table for mysql database for host zc1-z1

Add SUPER privilege for fmuser@zc1-z1

Add faultmonitor user (fmuser) with password (fmuser) with Process-, Select-, Reload- and Shutdown-privileges to user table for mysql database for host zc1-z2

Add SUPER privilege for fmuser@zc1-z2


Create test-database sc3_test_database

Grant all privileges to sc3_test_database for faultmonitor-user fmuser for host zc1-z1

Grant all privileges to sc3_test_database for faultmonitor-user fmuser for host zc1-z2

Flush all privileges

Mysql configuration for HA is done

zc1-z1 # kill -TERM `cat /services/mysql/mysqld.pid`

zc1-z1 # /opt/SUNWscmys/util/ha_mysql_register -f /services/mysql/cluster-config/ha_mysql_config
sourcing /services/mysql/cluster-config/ha_mysql_config and create a working copy under /opt/SUNWscmys/util/ha_mysql_config.work
Registration of resource mysql-rs succeeded.
remove the working copy /opt/SUNWscmys/util/ha_mysql_config.work

zc1-z1 # clrs enable mysql-rs

Verify that the service-rg resource group works on both nodes:

zc1-z1 # clrs status mysql-rs

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
mysql-rs         zc1-z1       Online     Online
                 zc1-z2       Offline    Offline

zc1-z1 # clrg switch -n zc1-z2 service-rg
zc1-z1 # clrs status mysql-rs

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
mysql-rs         zc1-z1       Offline    Offline
                 zc1-z2       Online     Online
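To return to the initial distribution after the switch test, move the resource group back the same way (a sketch):

zc1-z1 # clrg switch -n zc1-z1 service-rg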


4.9 HA Tomcat Configuration (zc1)

Install Patch 126072 on both nodes (s10-sc32-1 and s10-sc32-2):

vorlon# scp /data/SolarisCluster/126072-02.zip root@s10-sc32-1:/var/tmp
vorlon# scp /data/SolarisCluster/126072-02.zip root@s10-sc32-2:/var/tmp

both-nodes# cd /var/tmp
both-nodes# unzip 126072-02.zip
both-nodes# patchadd 126072-02

Configure Tomcat on the node where the service-rg resource group is online:

zc1-z1 # clrg status service-rg

=== Cluster Resource Groups ===

Group Name    Node Name    Suspended    Status
----------    ---------    ---------    ------
service-rg    zc1-z1       No           Online
              zc1-z2       No           Offline

s10-sc32-1 # zfs create services/tomcat
zc1-z1 # vi /services/tomcat/env.ksh
#!/bin/ksh
CATALINA_HOME=/usr/apache/tomcat55
CATALINA_BASE=/services/tomcat
JAVA_HOME=/usr/java
export CATALINA_HOME CATALINA_BASE JAVA_HOME

zc1-z1 # chown webservd:webservd /services/tomcat/env.ksh

zc1-z1 # cd /var/apache/tomcat55
zc1-z1 # tar cpf - . | ( cd /services/tomcat ; tar xpf -)

zc1-z1 # cp /services/tomcat/conf/server-minimal.xml /services/tomcat/conf/server.xml
zc1-z1 # cd /services/tomcat
zc1-z1 # mkdir cluster-config
zc1-z1 # chown webservd:webservd cluster-config
zc1-z1 # cd cluster-config
zc1-z1 # cp /opt/SUNWsctomcat/util/sctomcat_config .
zc1-z1 # cp /opt/SUNWsctomcat/bin/pfile .
zc1-z1 # chown webservd:webservd pfile
zc1-z1 # vi pfile
EnvScript=/services/tomcat/env.ksh
User=webservd
Basepath=/usr/apache/tomcat55
Host=s10-sc32-lh1
Port=8080
TestCmd="get /index.jsp"
ReturnString="CATALINA"
Startwait=20

zc1-z1 # vi sctomcat_config
RS=tomcat-rs
RG=service-rg
PORT=8080
LH=service-lh-rs
NETWORK=true
SCALABLE=false
PFILE=/services/tomcat/cluster-config/pfile
HAS_RS=service-hasp-rs
ZONE=
ZONE_BT=
PROJECT=

zc1-z1 # /opt/SUNWsctomcat/util/sctomcat_register -f /services/tomcat/cluster-config/sctomcat_config
sourcing /services/tomcat/cluster-config/sctomcat_config and create a working copy under /opt/SUNWsctomcat/util/sctomcat_config.work
Registration of resource tomcat-rs succeeded.
remove the working copy /opt/SUNWsctomcat/util/sctomcat_config.work

zc1-z1 # clrs enable tomcat-rs

Verify that the service-rg resource group works on both nodes:

zc1-z1 # clrs status tomcat-rs

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
tomcat-rs        zc1-z1       Online     Online
                 zc1-z2       Offline    Offline

zc1-z1 # clrg switch -n zc1-z2 service-rg
zc1-z1 # clrs status tomcat-rs

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
tomcat-rs        zc1-z1       Offline    Offline
                 zc1-z2       Online     Online

Start Firefox on vorlon and verify the Tomcat page at http://s10-sc32-lh1:8080/.
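If you prefer a command-line check first, fetching the page and looking for the configured ReturnString works as well; a sketch, assuming wget is available on the host:

vorlon# wget -q -O - http://s10-sc32-lh1:8080/ | grep CATALINA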

4.10 Scalable Apache Configuration (zc2)

Create a failover resource group for the shared address:

zc2-z1 # clrg create shared-ip-rg


zc2-z1 # clrssa create -g shared-ip-rg -h s10-sc32-lh2 shared-ip-rs
zc2-z1 # clrg online -eM shared-ip-rg

Prepare the Apache configuration file:

both-zones# cd /etc/apache2/
both-zones# cp httpd.conf-example httpd.conf
both-zones# vi httpd.conf
--- httpd.conf-example  Sat Jan 24 17:01:06 2009
+++ httpd.conf  Tue Oct  6 13:28:10 2009
@@ -60,7 +60,7 @@
 #
 <IfModule !mpm_winnt.c>
 <IfModule !mpm_netware.c>
-#LockFile /var/apache2/logs/accept.lock
+LockFile /var/apache2/logs/accept.lock
 </IfModule>
 </IfModule>

@@ -84,7 +84,7 @@
 # identification number when it starts.
 #
 <IfModule !mpm_netware.c>
-PidFile /var/run/apache2/httpd.pid
+PidFile /var/apache2/logs/httpd.pid
 </IfModule>

 #
@@ -343,7 +343,7 @@
 # You will have to access it by its address anyway, and this will make
 # redirections work in a sensible way.
 #
-ServerName 127.0.0.1
+ServerName 10.0.2.131

 #
 # UseCanonicalName: Determines how Apache constructs self-referencing

The default httpd.conf file uses /var/apache/2.2/htdocs as DocumentRoot.
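Before putting Apache under cluster control, a syntax check of the edited file can save a failed start later; a sketch, assuming the stock apachectl location on Solaris 10:

both-zones# /usr/apache2/bin/apachectl configtest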

Configure the scalable resource group for Apache:

zc2-z1 # clrt register SUNW.apache
zc2-z1 # clrg create -p Maximum_primaries=2 -p Desired_primaries=2 -p RG_dependencies=shared-ip-rg apache-rg
zc2-z1 # clrs create -g apache-rg -t SUNW.apache -p Bin_dir=/usr/apache2/bin -p Resource_dependencies=shared-ip-rs -p Scalable=True -p Port_list=80/tcp apache-rs
zc2-z1 # clrg online -eM apache-rg
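Before testing in the browser, you can confirm that the scalable resource is online on both nodes (a sketch):

zc2-z1 # clrs status apache-rs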

Start Firefox on vorlon and open the demo URL at http://s10-sc32-lh2/scdemo/.

By default the nodes receive a 1:1 load-balancing weight. You can change the weight, for example to 4:3, with:

zc2-z1 # clrs set -p Load_balancing_weights=4@1,3@2 apache-rs
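To confirm the new weights took effect, query the resource property (a sketch):

zc2-z1 # clrs show -p Load_balancing_weights apache-rs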


A References

1. VirtualBox Download Page:
   http://www.virtualbox.org/wiki/Downloads

2. Solaris Cluster documentation:
   http://docs.sun.com/app/docs/prod/sun.cluster32#hic

3. Solaris Cluster Blog:
   http://blogs.sun.com/SC

4. Solaris OS Hardware Compatibility Lists:
   http://www.sun.com/bigadmin/hcl/

5. Toshiba OpenSolaris Laptops:
   http://www.opensolaris.com/toshibanotebook/index.html

6. Blueprint: Zone Clusters - How to deploy virtual clusters and why:
   https://www.sun.com/offers/details/820-7351.xml

7. Blueprint: Deploying Oracle Real Application Clusters (RAC) on Solaris Zone Clusters:
   https://www.sun.com/offers/details/820-7661.xml

8. Blueprint: High Availability MySQL Database Replication with Solaris Zone Cluster:
   https://www.sun.com/offers/details/820-7582.xml
