
© Copyright IBM Corporation, 2011.

IBM Storwize V7000 with IBM PowerHA SystemMirror

Proof of concept and configuration

Zane Russell IBM Systems and Technology Group ISV Enablement

March 2011

Table of contents

Abstract
Introduction
    High availability with PowerHA SystemMirror
    Storwize V7000 shared storage
Proof of concept solution
Key assumptions
IBM Storwize V7000
    Configuring shared volumes on Storwize V7000
        Creating hosts on the Storwize V7000 system
        Mapping the Fibre Channel adapters
        Creating volumes for application volume groups
    Thin-provisioned volumes
    IBM Easy Tier function
Power Systems configuration
    Installing PowerHA SystemMirror prerequisites
    Installing PowerHA SystemMirror software
    LPAR disk storage configuration
        Verifying that the VIOS partition recognizes the new devices
        Verifying or setting required attributes on the new VIOS hdisks
        Assigning the new VIOS hdisks to both nodes
        Creating volume groups, logical volumes, and application file systems
        Importing volume groups onto the other node
Network planning for PowerHA high availability
High-availability applications
    PowerHA cluster configuration
        Create the cluster and add nodes
        Create cluster networks
        Create resources and resource groups
        Verify and synchronize
    Testing the two-node cluster
Summary
Resources
About the author
Trademarks and special notices


Abstract

The proof of concept solution described in this paper is designed to illustrate how IBM Storwize V7000, coupled with IBM PowerHA SystemMirror for AIX, can help you meet demanding enterprise high availability (HA) requirements. With PowerHA SystemMirror automatically detecting server resource failures and moving an application from the failed server image to a functioning IBM Power system in the cluster, and Storwize V7000 providing shared concurrent volume functionality, the system administrator has powerful tools to avoid or minimize planned and unplanned outages.

Introduction

Disruption in service to a business-critical application might lead to customer dissatisfaction, lost productivity, and lost revenue, which drives businesses to require stringent service level agreements (SLAs). IBM® PowerHA® SystemMirror for AIX can be combined with the IBM Storwize V7000 disk system to address the high-availability and fault-tolerance requirements of today's business-critical applications.

The IBM Storwize V7000 disk system is aimed at midsize businesses and large enterprises that seek enterprise-class storage efficiency and ease of management, combined with support for moving volumes between storage pools without disruption to running applications.

IBM PowerHA SystemMirror (PowerHA) is the IBM premier high-availability solution for Power systems running IBM AIX®.

PowerHA requires shared SAN storage for application data volumes. Thanks to its virtualization capabilities and easy-to-use graphical user interface (GUI), the Storwize V7000 system allows customers to integrate the new storage system into a high-availability environment very quickly.

High availability with PowerHA SystemMirror

In today’s complex environments, providing continuous service for applications is a key component of a successful IT implementation. High availability is one of the components that contribute to providing continuous service for the application clients, by masking or eliminating both planned and unplanned outages. A high availability solution ensures that the failure of any component of the solution, whether hardware, software, or system management, does not cause the application and its data to become permanently unavailable to the end user.

PowerHA also provides the multiprocessing component, managing multiple hardware and software resources to provide complex application functionality and better resource utilization. Taking advantage of the multiprocessing component depends on careful planning and design in the early stages of implementation to efficiently use all resources. Those resources include shared storage that is robust and easy to manage.

A high availability solution based on PowerHA provides automated failure detection, diagnosis, application recovery, and node reintegration.

PowerHA SystemMirror is the latest version of IBM high-availability software that in the past has been known as PowerHA and High Availability Cluster Multi-Processing or HACMP. This paper uses all these names interchangeably.


Note: The term fallover is used in some IBM PowerHA documentation. This paper uses the equivalent term failover to describe the movement of a resource group from an active node to another node in response to a failure.

The following PowerHA cluster topology components describe the server infrastructure that is involved in the cluster, and the networking used to communicate between nodes.

• Nodes – IBM Power Systems™ server image or logical partition (LPAR) running IBM AIX® and PowerHA.

• Cluster – A group of nodes providing redundant resources in case of failure, identified by a cluster name

• Networks – IP and non-IP communication networks
• Communication interfaces and devices
• Persistent node IP labels / addresses – An IP address that does not fail over to other nodes

The following PowerHA resource components describe server resources that will be made highly available:

• Service IP label / address – The IP address used by end users to access the application
• Application server – An application instance
• Volume groups – Groups of application volumes on shared disk storage, such as Storwize V7000
• Resource groups – A collection of resources necessary to successfully run an application on a server image

Storwize V7000 shared storage

The IBM Storwize V7000's ease of management and enterprise-class features make it an excellent choice to provide shared storage to business-critical applications in mid-range environments that must remain highly available.

Combining the Storwize V7000 system, PowerHA SystemMirror, and leading ISV applications can provide increased flexibility and deliver a more robust storage platform than traditional RAID arrays in an HA environment. Solutions have been qualified for the Storwize V7000 system for select applications that focus on key solution areas including backup / restore, disaster recovery, clustering, server virtualization, and database and performance optimization. IBM is also committed to certifications with key ISVs aligned with various industries including healthcare, financial services, telecommunications, and the public sector.

Fault tolerance and high levels of availability in the Storwize V7000 system are achieved by:

• The RAID capabilities of the underlying disk subsystems
• IBM Storwize V7000 node clustering using the Compass architecture
• Autorestart of hung nodes
• Uninterruptible power supply units to provide memory protection in the event of a site power failure
• Host system failover capabilities

One of the primary advantages of the Storwize V7000 disk system is its storage virtualization features. Storage virtualization allows an organization to implement pools of storage across physically separate disk systems (which can even be from different vendors). Storage can then be deployed from these pools and can be migrated between pools without any outage to the attached host systems.


Storage virtualization yields numerous benefits for storage administration and management, including:

• Combining storage capacity from multiple heterogeneous disk systems into a single reservoir that can be managed as a business resource rather than as separate boxes
• Increasing storage utilization by providing host applications with more flexible access to capacity
• Improving productivity of storage administrators by enabling management of heterogeneous storage systems by using a common interface
• Improving application availability by insulating host applications from changes to the underlying physical storage infrastructure
• Enabling a tiered-storage environment where the cost of storage can be matched to the value of data, and easily migrating between those tiers

Proof of concept solution

The configuration instructions included for the two-node proof of concept test were based on the following components:

• Two IBM Power® 750 servers
• AIX 6.1.0
• IBM PowerVM™ Standard Edition 6.1
• PowerHA Standard Edition 6.1
• Storwize V7000 with 300 GB 10K disk drives configured in a RAID5 storage pool
• Hardware Management Console (HMC) V7R7.1.0.2
• Gb Ethernet LAN
• Fibre Channel (FC) SAN using IBM 2498-B24 Fibre Channel switches

Note: The proof of concept testing for Storwize V7000 with PowerHA was conducted with a single physical IP interface. In a production environment, it would be considered best practice to provide redundant physical IP interfaces for each IP network.


Figure 1: Two-node test environment

Key assumptions

This solution description is intended for IT professionals with some familiarity and experience in the following areas:

• AIX configuration
• Virtualization on Power Systems using a Virtual I/O Server (VIOS) partition
• Fibre Channel SAN configuration
• SAN-attached storage configuration
• HMC use for AIX LPAR configuration

This proof of concept solution is intended to illustrate the necessary steps, options, and ease of implementation when using Storwize V7000 to set up a PowerHA environment. To avoid confusion and limit the scope of the project, it is assumed that an IBM Power LPAR server environment has been configured with Storwize V7000 SAN-attached storage, as described in the following points and in Figure 1:

• A pair of IBM POWER6® or IBM POWER7® systems is configured, each with at least a single VIOS partition at release level 1.5.1.1 or higher.
• All nodes are configured with AIX 6.1 (or higher).
• All nodes to be included in the PowerHA cluster have adequate LAN bandwidth to perform failover operations.
• Each node in the cluster has the operating resources to provide adequate performance for all applications that may run concurrently.
• A Storwize V7000 disk system with arrays and storage pools is configured, attached, and zoned to potential nodes using Fibre Channel SAN switches and Fibre Channel host bus adapters (HBAs).

IBM Storwize V7000

The IBM Storwize V7000 solution provides a modular storage system that includes the capability to virtualize external SAN-attached storage as well as its own internal storage. The IBM Storwize V7000 solution is built upon the IBM SAN Volume Controller technology base and utilizes technology from the IBM System Storage® DS8000® family.

Included with the IBM Storwize V7000 system is a simple, easy-to-use GUI that is designed to allow storage to be deployed quickly and efficiently. The GUI runs in a web browser and is served directly from the IBM Storwize V7000 system, so there is no need for a separate console.


Figure 2: View of the Storwize V7000 web-based GUI

When virtualizing external storage arrays, IBM Storwize V7000 can provide up to 32 PB of usable capacity. IBM Storwize V7000 supports a range of external disk systems similar to what the SAN Volume Controller supports today.

IBM Storwize V7000 enclosures currently support solid-state drive (SSD), serial-attached SCSI (SAS), and nearline SAS drive types, with each enclosure holding 12 x 3.5 inch (see Figure 3) or 24 x 2.5 inch drives, depending on type. The Storwize V7000 solution consists of a control enclosure and up to nine expansion enclosures, with support for intermixing 3.5 inch and 2.5 inch type enclosures. Within each enclosure, there are two canisters, which may be either node canisters or expansion canisters.

Figure 3: Storwize V7000 disk system

The Storwize V7000 system includes flexible host-connectivity options with support for 8 Gbps Fibre Channel or 1 Gbps iSCSI connections.

In addition, there is a full array of advanced software features, including:

• Seamless data migration
• Thin provisioning
• Volume mirroring
• Global Mirror and Metro Mirror replication
• FlashCopy – 256 targets, cascaded, incremental, space efficient (thin provisioned)
• Integration with IBM Tivoli Productivity Center
• IBM Easy Tier™ – Provides a mechanism to seamlessly migrate hot spots to a higher-performing storage pool within the IBM Storwize V7000 solution

Configuring shared volumes on Storwize V7000

Configuring the storage for use with PowerHA, using the web-based GUI on the Storwize V7000, is intuitive and straightforward.

This paper assumes that you have a Storwize V7000 disk system with arrays and storage pools already configured. For help in getting to this step, or for additional guidance on the following steps, refer to Overview of the IBM Storwize V7000 from IBM Redbooks® at: ibm.com/redbooks/redpieces/pdfs/sg247938.pdf

The steps to configure Storwize V7000 volumes for use in a PowerHA cluster include:

• Creating hosts on the Storwize V7000 system
• Mapping the Fibre Channel adapters on the Power systems to these hosts using the worldwide port names (WWPNs)
• Creating volumes for any application volume groups you need in your PowerHA resource groups

Creating hosts on the Storwize V7000 system

To create hosts on the Storwize V7000 system:

1. Access the Storwize V7000 GUI with superuser authority. From the main GUI menu, double-click hosts, then click New Host, and then click Fibre-Channel Host.

Figure 4: Creating a new host

2. Enter a host name for the first Power system LPAR that will be part of your PowerHA cluster.


Mapping the Fibre Channel adapters

To map the Fibre Channel adapters on the Power systems to the hosts using WWPNs:

1. Select Fibre-Channel Ports from the drop-down list and click Add Port to List. Then, click Create Host.

Figure 5: Adding host ports to the IBM Storwize V7000

2. Repeat these steps to create a host and map ports to the second Power system in your cluster.

Creating volumes for application volume groups

To create volumes you need for application volume groups in your PowerHA resource groups:

1. From the main GUI menu, double-click Volumes, click New Volume, select Generic, and click the storage pool from which you want to create the volume. Enter the volume name and size of the volume you want to create. Click Create and Map to Host.


Figure 6: Creating a shared volume

2. After the volume is created, click Continue, and from the Host drop-down list, select the first cluster LPAR host name for that application volume. Make a note of the volume’s UID for future reference. After ensuring that the volume and host assignment are correct, click Apply.

Figure 7: Assigning the volume to the first host

3. Repeat these steps to create additional application volumes needed for all applications to run on your two-node cluster. All application disk volumes must be shared (on shared storage systems such as Storwize V7000), and accessed through the Fibre Channel adapters assigned through the VIOS partitions to the cluster LPARs.

4. To assign the application volumes to the second LPAR in the cluster (both nodes must be able to access each volume), from the main GUI menu, double-click the hosts icon, select the second cluster LPAR host name, and from the Actions drop-down list, select Modify Mappings. Map all of the volumes you created for the first cluster LPAR using the right arrow button, and click Apply.

Figure 8: Mapping the volumes to the second host

That completes the steps necessary on the Storwize V7000 to set up disk storage for a two-node PowerHA cluster.
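These GUI steps can also be scripted through the Storwize V7000 SSH command-line interface. The following is a minimal sketch of an equivalent CLI sequence; the host names, WWPNs, pool name, and volume name and size are hypothetical, so substitute values from your own environment and verify the options against the CLI reference for your code level.

ssh superuser@<cluster_ip>
mkhost -name isvp14 -fcwwpn 10000000C9AAAAAA     # hypothetical WWPN of the first LPAR's HBA
mkhost -name isvp15 -fcwwpn 10000000C9BBBBBB     # hypothetical WWPN of the second LPAR's HBA
mkvdisk -mdiskgrp mdiskgrp0 -size 10 -unit gb -name ha14vol
mkvdiskhostmap -host isvp14 ha14vol
mkvdiskhostmap -force -host isvp15 ha14vol       # -force permits mapping the same volume to a second host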

Thin-provisioned volumes

Although not required for setting up a PowerHA cluster, thin-provisioning functionality is a powerful and popular advantage of implementing Storwize V7000 in your environment.

The thin-provisioning feature of Storwize V7000 uses physical storage capacity only when data is written to the virtual disks. Instead of dedicating full physical capacity up front, you can provision storage as the business grows.

As thin provisioning is implemented entirely on the Storwize V7000 system, without requiring any special configuration on the servers, thin-provisioned volumes in a PowerHA cluster are transparent to the administrator. PowerHA accesses the same thin-provisioned volumes used on the primary node when an outage causes the application to run on the failover node.

To implement thin-provisioned volumes to use as data volumes in your PowerHA cluster, simply select Thin Provision instead of Generic during the volume creation process.


Figure 9: Choosing thin-provisioned volumes
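The CLI equivalent simply adds the thin-provisioning options to mkvdisk. A sketch, with a hypothetical pool name, volume size, and a 2% initial real-capacity allocation; confirm the option defaults for your code level:

mkvdisk -mdiskgrp mdiskgrp0 -size 100 -unit gb -rsize 2% -autoexpand -name ha14tpvol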

IBM Easy Tier function

Another advanced capability of the Storwize V7000 system is the Easy Tier function. When solid-state drives are configured as part of the Storwize V7000 solution, the Easy Tier function provides a mechanism to seamlessly migrate SAS-drive hot spots to the high-performance solid-state storage pool to optimize use of this premium resource.

Easy Tier will operate independently of AIX or the application, and therefore, no operating system or application tuning or policy setting is necessary to implement it.

The Easy Tier function is included with the IBM Storwize V7000 system and is not a separately purchased feature. If this capability is to be used to optimize the utilization of solid-state drives, the function needs to be enabled.
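As an illustration, Easy Tier can be checked and enabled from the CLI at the storage pool and volume level; the pool and volume names below are hypothetical, and the exact parameters should be verified against the CLI reference for your firmware level:

lsmdiskgrp pool0                  # the easy_tier fields show the current state for the pool
chmdiskgrp -easytier auto pool0   # let Easy Tier manage the pool automatically
chvdisk -easytier on ha14vol      # enable Easy Tier for an individual volume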


Power Systems configuration

This section describes the steps necessary to install and configure the Power systems to prepare for a PowerHA cluster.

Installing PowerHA SystemMirror prerequisites

The following filesets are prerequisites for PowerHA:

• bos.adt.lib
• bos.adt.libm
• bos.adt.syscalls
• bos.rte.SRC
• bos.rte.libc
• bos.rte.libcfg
• bos.rte.libcur
• bos.rte.libpthreads
• bos.rte.odm
• bos.rte.lvm
• bos.data
• rsct.compat.basic.hacmp
• rsct.compat.clients.hacmp
• bos.net.tcp.client
• bos.net.tcp.server
• bos.clvm.enh

Other filesets may be required, depending on what is already installed on your server image. In the test environment for this project, installing the bos.net.tcp filesets required the following filesets:

• rsct.core
• rsct.basic
• bos.mp64
• bos.sysmgt.serv_aid
• devices.chrp.base.rte

You may need the following filesets as well, depending on whether Network File System (NFS) or encryption is used on the node.

• bos.net.nfs.server
• bos.net.nfs.client
• rsct.crypt
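To check quickly which of these filesets are missing before you start, you can loop over the list with lslpp, which returns a nonzero exit code for a fileset that is not installed. A minimal sketch, run as root with the list trimmed to the filesets you care about:

# for f in bos.adt.lib bos.clvm.enh rsct.compat.basic.hacmp rsct.compat.clients.hacmp
> do
>   lslpp -L $f > /dev/null 2>&1 || echo "$f is NOT installed"
> done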

Installing PowerHA SystemMirror software

You can install the PowerHA SystemMirror software from the installation media or from a NIM server, using smit (smitty install_all) or the command line (installp). Be sure to set the option to accept license agreements.
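For example, assuming the installation images are available in a hypothetical directory /mnt/powerha, the following installp invocation applies everything in that directory, automatically pulls in requisites (-g), expands file systems as needed (-X), and accepts the license agreements (-Y):

# installp -aXYgd /mnt/powerha all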


If the smit install output indicates failure, check the summary at the end of the listing to confirm that all portions of the software you need were installed successfully. On the test servers, the cluster.hativoli filesets failed to install, but all necessary filesets were installed successfully.

LPAR disk storage configuration

This section of the paper describes configuring the Storwize V7000 volumes into enhanced concurrent volumes that can be accessed by either node in the cluster. The volumes should be zoned and masked on the SAN to the VIOS partition on each system. The cluster nodes (client LPARs) in the proof of concept test solution access the application volumes through virtual SCSI.

As shared storage has been configured in the form of Storwize V7000 volumes, you now need to:

• Verify that the VIOS partition recognizes the new devices
• Verify or set required attributes on the new VIOS hdisks
• Assign the new VIOS hdisks to both nodes (client LPARs)
• Create volume groups, logical volumes, and file systems for the application volumes
• Import volume groups to the other LPAR in the cluster

Verifying that the VIOS partition recognizes the new devices

To verify that the VIOS partition recognizes the new devices:

1. Log in as padmin on the VIOS partition on the first POWER system.
2. Issue the cfgdev command to detect new devices.
3. Issue the lsdev -type disk command and look for the new hdisk devices that correspond to the new Storwize V7000 volume(s).

$ cfgdev
$ lsdev -type disk | grep hdisk
hdisk0    Available   SAS Disk Drive
hdisk1    Available   SAS Disk Drive
:         :           :
hdisk10   Available   MPIO IBM 2076 FC Disk
hdisk11   Available   MPIO IBM 2076 FC Disk
hdisk12   Available   MPIO IBM 2076 FC Disk
hdisk13   Available   MPIO IBM 2076 FC Disk
hdisk14   Available   MPIO IBM 2076 FC Disk
hdisk15   Available   MPIO IBM 2076 FC Disk
hdisk16   Available   MPIO IBM 2076 FC Disk
hdisk17   Available   MPIO IBM 2076 FC Disk
hdisk18   Available   MPIO IBM 2076 FC Disk
$

Verifying or setting required attributes on the new VIOS hdisks

To verify or set the required attributes on the new VIOS hdisks:

1. Set the reserve_policy attribute on the new disk to no_reserve. Use the lsdev command to check the attribute, and the chdev command to set the attribute as necessary.

$ lsdev -dev hdisk11 -attr | grep reserve_policy
reserve_policy   single_path
$ chdev -dev hdisk11 -attr reserve_policy=no_reserve -perm
hdisk11 changed
$ lsdev -dev hdisk11 -attr | grep reserve_policy
reserve_policy   no_reserve

2. Use the chkdev command to verify that there is a unique identifier set for the disk. You should recognize the UID as part of the Storwize V7000 ID.

$ chkdev -dev hdisk11
NAME:               hdisk11
IDENTIFIER:         3321360050768018103ABF00000000000003704214503IBMfcp
PHYS2VIRT_CAPABLE:  NA
VIRT2NPIV_CAPABLE:  YES
VIRT2PHYS_CAPABLE:  YES

3. Use the lspv command to check that the new disk has a PVID. If it does not, create it using the chdev command.

$ lspv
NAME      PVID               VG        STATUS
hdisk0    00f62a6b83516383   rootvg    active
hdisk1    00f62a6b879c948c   viosvg    active
hdisk10   00f62a6b99e90e7b   None
hdisk11   none               None
hdisk12   none               None
hdisk13   none               None
hdisk14   none               None
hdisk15   none               None
hdisk16   none               None
hdisk17   none               None
hdisk18   none               None
$ chdev -dev hdisk11 -attr pv=yes -perm
hdisk11 changed
$ lspv
NAME      PVID               VG        STATUS
hdisk0    00f62a6b83516383   rootvg    active
hdisk1    00f62a6b879c948c   viosvg    active
hdisk10   00f62a6b99e90e7b   None
hdisk11   00f62a6bc488aeda   None
hdisk12   none               None
hdisk13   none               None
hdisk14   none               None
hdisk15   none               None
hdisk16   none               None
hdisk17   none               None
hdisk18   none               None
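When many volumes are presented, setting these attributes one disk at a time is tedious. One option, assuming hdisk12 through hdisk18 are the remaining Storwize V7000 volumes, is to switch to the root shell with oem_setup_env, where standard AIX chdev syntax and shell loops are available:

$ oem_setup_env
# for d in hdisk12 hdisk13 hdisk14 hdisk15 hdisk16 hdisk17 hdisk18
> do
>   chdev -l $d -a reserve_policy=no_reserve -P   # -P defers the change, like -perm in the VIOS CLI
>   chdev -l $d -a pv=yes
> done
# exit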

Assigning the new VIOS hdisks to both nodes

To assign the new VIOS hdisks to both nodes:

1. Use the mkvdev command to assign the hdisks to the vhost adapter corresponding to the node.
2. Use the lsmap command to verify that it is assigned correctly.


$ mkvdev -f -vdev hdisk11 -vadapter vhost1 -dev vtha14
vtha14 Available
$ lsmap -vadapter vhost1
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost1          U8233.E8B.062A6BP-V2-C3                      0x00000003

VTD              vtha14
Status           Available
LUN              0x8400000000000000
Backing device   hdisk11
Physloc          U78A0.001.DNWHZ3D-P1-C3-T1-W5005076801207CBE-L100000000000
Mirrored         false

VTD              vtopt0
Status           Available
LUN              0x8300000000000000
Backing device   cd0
Physloc          U78A0.001.DNWHZ3D-P2-D2
Mirrored         N/A

VTD              vtscsi2
Status           Available
LUN              0x8200000000000000
Backing device   hdisk10
Physloc          U78A0.001.DNWHZ3D-P1-C3-T2-W500507680130757E-L0
Mirrored         false

3. Repeat steps 1 and 2 for each of the Storwize V7000 application volumes on both nodes. Be sure to use the PVID (output of the lspv command) to verify that you are working with the correct volume.
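For example, mapping the next volume might look like the following sketch, where vtha15 is a hypothetical virtual target device name and the grep argument is the PVID you recorded for that volume:

$ lspv | grep <pvid_of_next_volume>
$ mkvdev -f -vdev hdisk12 -vadapter vhost1 -dev vtha15
vtha15 Available
$ lsmap -vadapter vhost1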

Creating volume groups, logical volumes, and application file systems

To create volume groups, logical volumes, and application file systems:

Run the following steps on the PowerHA nodes (client LPARs).

1. Log in to the first node with root authority.
2. Use the cfgmgr and lspv commands to recognize the new disks on the node.

isvp14_ora> cfgmgr
isvp14_ora> lspv
hdisk0   00f62a6b99e90e7b   rootvg   active
hdisk1   00f62a6bc4886efe   None
hdisk2   00f62a6bc488aeda   None

3. Set the PATH variable to include HACMP cluster executables, from the command line and in .profile.

isvp14_ora> export PATH=$PATH:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities

4. Use the lvlstmajor command to identify an available major number for the volume group.

isvp14_ora> lvlstmajor
34...


5. Create a volume group with enhanced concurrent capability, using the new shared disk. This volume group may be extended later if multiple volumes are needed for the application. Use an available major number from the output of the lvlstmajor command.

isvp14_ora> smitty mkvg

6. Assuming that the application volumes need to have file systems created on them, you must create both a logical volume for the data and a logical volume for journaled file system (JFS) logs. Vary on the new volume group, and then create a logical volume for file system logs in the new volume group. As an example, for our testing, we set the logical volume name to ha14loglv, selected 1 for the number of logical partitions, selected jfslog as the logical volume type, set scheduling to sequential, and retained the default settings for the other options.

isvp14_ora> varyonvg ha14vg
isvp14_ora> smitty mklv

Figure 10: Creating the volume group


Figure 11: Creating the logical volume for jfs logs

7. Initialize the logical volume for jfs logs using the logform command.

isvp14_ora> logform /dev/ha14loglv

8. Create the logical volume with type set to jfs.

isvp14_ora> smitty mklv

Figure 12: Creating the logical volume


9. Create the file system on the new logical volume. (A command-line equivalent of steps 5 through 9 is shown after this procedure.)

# smitty crfs --> Add a Journaled File System --> Add a Journaled File System on a Previously Defined Logical Volume --> Add a Standard Journaled File System

Figure 13: Creating the jfs file system

10. Vary on the volume group, mount the file system, and verify that it looks correct. Then unmount the file system and vary off the volume group.

isvp14_ora> mount /ha14fs
isvp14_ora> df -k
Filesystem                 1024-blocks  Free     %Used  Iused  %Iused  Mounted on
/dev/hd4                   524288       337696   36%    13343  15%     /
/dev/hd2                   2097152      22308    99%    46436  84%     /usr
/dev/hd9var                524288       50340    91%    9235   42%     /var
/dev/hd3                   393216       388136   2%     61     1%      /tmp
/dev/hd1                   262144       260452   1%     28     1%      /home
/dev/hd11admin             131072       130708   1%     5      1%      /admin
/proc                      -            -        -      -      -       /proc
/dev/hd10opt               393216       188716   53%    8696   17%     /opt
/dev/livedump              262144       261776   1%     4      1%      /var/adm/ras/livedump
vanhalen:/vanhalen/tools   524288       498108   5%     541    1%      /testlab/ts
/dev/ha14lv                9830400      9521756  4%     20     1%      /ha14fs
isvp14_ora> umount /ha14fs
isvp14_ora> varyoffvg ha14vg

11. If there are additional volumes in that volume group, repeat the process for those volumes.
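For reference, the following sketch is a command-line equivalent of steps 5 through 9, mirroring the names used in the smit examples. The logical partition count for the data logical volume is illustrative and must be sized to your volume:

isvp14_ora> mkvg -C -V 34 -y ha14vg hdisk2       # enhanced concurrent capable, major number 34
isvp14_ora> varyonvg ha14vg
isvp14_ora> mklv -y ha14loglv -t jfslog ha14vg 1
isvp14_ora> logform /dev/ha14loglv
isvp14_ora> mklv -y ha14lv -t jfs ha14vg 300     # 300 logical partitions is illustrative
isvp14_ora> crfs -v jfs -d ha14lv -m /ha14fs -A no   # -A no: PowerHA, not AIX, mounts the file system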


Importing volume groups onto the other node

To import volume groups onto the other node:

1. Log in to the second node with root authority.
2. Run the cfgmgr command to recognize the new disks.
3. Import the volume group and disk you created on the first node, using the PVID to identify the correct disk, and using the same major number.

isvp15_ora> cfgmgr
isvp15_ora> lspv
hdisk0   00f62a6c9a389cf8   rootvg   active
hdisk1   00f62a6bc4886efe   None
hdisk2   00f62a6bc488aeda   None
isvp15_ora> importvg -V 34 -y ha14vg hdisk2
ha14vg
0516-783 importvg: This imported volume group is concurrent capable.
        Therefore, the volume group must be varied on manually.

4. Vary on the volume group, mount the file system, and verify that it looks correct. Then unmount the file system and vary off the volume group.

isvp15_ora> varyonvg ha14vg
isvp15_ora> mount /ha14fs
isvp15_ora> df -k
Filesystem       1024-blocks  Free     %Used  Iused  %Iused  Mounted on
/dev/hd4         524288       337696   36%    13343  15%     /
/dev/hd2         2097152      22308    99%    46436  84%     /usr
/dev/hd9var      524288       50340    91%    9235   42%     /var
/dev/hd3         393216       388136   2%     61     1%      /tmp
/dev/hd1         262144       260452   1%     28     1%      /home
/dev/hd11admin   131072       130708   1%     5      1%      /admin
/proc            -            -        -      -      -       /proc
/dev/hd10opt     393216       188716   53%    8696   17%     /opt
/dev/ha14lv      9830400      9521756  4%     20     1%      /ha14fs
isvp15_ora> umount /ha14fs
isvp15_ora> varyoffvg ha14vg

5. Repeat the process to create all volume groups, logical volumes, and file systems needed by the applications that are to be highly available. If you have already created a volume group and later extend it by adding another shared disk, you can issue the importvg command with the -L option to update the other node with the new disk (remembering to verify the correct disk by its PVID).

isvp15_ora> importvg -L ha14vg hdisk4
ha14vg

6. Choose a shared disk that is set up to be accessed by both nodes to configure for a disk-based heartbeat. To verify that both nodes can access the disk correctly, run the dhb_read command on the first node in receive mode, and while it is still running (scrolling the text Magic number =), issue the command on the second node in transmit mode. If the shared disk access is working correctly, the processes will complete and display the text Link operating normally.


isvp14_ora> /usr/sbin/rsct/bin/dhb_read -p hdisk2 -r
DHB CLASSIC MODE
First node byte offset: 61440
Second node byte offset: 62976
Handshaking byte offset: 65024
Test byte offset: 64512
Receive Mode:
Waiting for response . . .
Magic number = 0x87654321
Magic number = 0x87654321
Magic number = 0x87654321
: : :
Magic number = 0x87654321
Link operating normally

isvp15_ora> /usr/sbin/rsct/bin/dhb_read -p hdisk2 -t
DHB CLASSIC MODE
First node byte offset: 61440
Second node byte offset: 62976
Handshaking byte offset: 65024
Test byte offset: 64512
Transmit Mode:
Magic number = 0x87654321
Detected remote utility in receive mode. Waiting for response . . .
Magic number = 0x87654321
Magic number = 0x87654321
Link operating normally

Network planning for PowerHA high availability

PowerHA uses networks to detect and diagnose failures, as well as for inter-node communication and for providing clients with highly available access to applications.

PowerHA uses IP networks such as Ethernet, and non-IP networks, such as storage area networks or RS-232 serial networks, to pass heartbeat packets among the cluster nodes to signal that each node is up and running. The Reliable Scalable Cluster Technology (RSCT) daemon detects the loss of heartbeat packets sent across all the networks, which allows it to determine whether a failure is limited to a single network interface card (NIC), a network, or the entire node.

It is very important to avoid a situation where communication is lost between the node server images, where each believes that it is the only one still online, which is known as a split-brain, partitioned, or node-isolation condition. If multiple nodes independently write to shared or replicated data volumes, it could result in data corruption or inconsistent data. This is why multiple heartbeat paths over multiple networks are highly recommended.

Note: The proof of concept testing for Storwize V7000 with PowerHA was conducted with a single physical IP interface. In a production environment, it would be considered best practice to provide redundant physical IP interfaces for each IP network.

The basic components of a typical PowerHA network plan include:

• Service IP label / address: The address used by clients to access an application. It is kept highly available by HACMP.

• Persistent IP label / address: A node-bound IP alias that is managed by PowerHA. The persistent alias never moves to another node.


• Communication interface: A physical interface that supports the TCP / IP protocol, for example an Ethernet adapter, which may be named en0. It is represented by its boot-time or base IP label.

• Communication device: A physical device representing an end of a point-to-point non-IP network, for example, a shared disk device for a disk-based heartbeat.

Configure all IP networks and interfaces in AIX and ping-test between nodes before beginning the PowerHA configuration. Create a list of all IP addresses for all nodes in a master /etc/hosts file, and then propagate that hosts file to all nodes in the cluster.

In some cases, especially when using a virtual Ethernet adapter mapped to multiple adapters in a VIOS partition, you may need to configure PowerHA with a single IP network. In this case, list the IP addresses of other, non-cluster servers in the file /usr/es/sbin/cluster/netmon.cf. PowerHA will use these outside servers to help determine whether a failure to communicate with a cluster node is a network issue or a server failure.

isvp14_ora> cat /usr/es/sbin/cluster/netmon.cf
9.11.194.163
9.11.83.11
9.11.83.12
9.11.83.13
9.11.83.14

Network planning for a PowerHA cluster is important and complex, and beyond the scope of this proof of concept test. For detailed PowerHA networking information refer to section 3.8 of the PowerHA for AIX Cookbook, in IBM Redbooks: ibm.com/redbooks/pdfs/sg247739.pdf

High-availability applications

The primary goal of PowerHA SystemMirror is to provide one or more alternative server images that have the necessary resources and software to operate a business-critical application, so that if a critical server resource fails, the other server takes over. Therefore, throughout the planning phase for PowerHA, keep in mind that each server in a cluster needs to be prepared to take over responsibility for running any designated application in the cluster. After your cluster shared storage has been defined and is accessible by each node, it is a good practice to vary on the volume groups onto each node manually, and test that the applications have all necessary components, before going forward with the PowerHA cluster configuration steps.

PowerHA refers to an application that can be managed in a cluster as an application server. Application servers are defined to PowerHA and associated with a resource group, with the following components:

• Start script – An AIX-executable script that can start the application without operator intervention, even after an unexpected shutdown of the server. The exit code should be zero if the script executes successfully. The full path name of the script must be the same on all nodes, and the script should be located in a rootvg file system so that it is accessible before application volume groups are varied on. Use set -x within the script to send output to the hacmp.out log file.

• Stop script – An AIX-executable script that can stop the application without operator intervention. The exit code should be zero if the script executes successfully. Use set -x within the script to send output to the hacmp.out log file.


• Application monitors – PowerHA will either monitor that a particular process is running, as output from ps -el, or will execute an AIX script and check that the exit code is zero, to determine that the application is still operating normally. Application monitors are optional.

Most popular software packages provide start and stop scripts appropriate for application server definition.
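As an illustration only, a minimal start script might look like the following, with the stop script built the same way around the application's shutdown command; the path, user, and start command are placeholders for your application. Keep both scripts at the same path in a rootvg file system on both nodes.

#!/bin/ksh
# /usr/local/ha/start_app.sh - hypothetical PowerHA start script
set -x                                           # trace output is captured in hacmp.out
su - appuser -c "/ha14fs/app/bin/start_server"   # placeholder application start command
exit 0                                           # PowerHA treats a zero exit code as success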

For more information about preparing applications for PowerHA, refer to PowerHA for AIX Cookbook, section 7.7.7 from IBM Redbooks at: ibm.com/redbooks/redbooks/pdfs/sg247739.pdf

PowerHA cluster configuration

The following steps describe defining the PowerHA two-node cluster and its components using smit menus:

• Create a PowerHA cluster, and specify the node names in the cluster.
• Create networks, network interfaces, and devices for the cluster.
• Create resources and resource groups associated with each application.
• Verify and synchronize the new cluster.

Create the cluster and add nodes

To create the cluster and add nodes:

1. Define a new cluster to PowerHA and assign it a name.

isvp14_ora> smitty hacmp --> Extended Configuration --> Extended Topology Configuration --> Configure an HACMP Cluster --> Add/Change/Show an HACMP Cluster

Figure 14: Creating a cluster

2. Add a node to the cluster using the LPAR hostname.


isvp14_ora> smitty hacmp --> Extended Configuration --> Extended Topology Configuration --> Configure HACMP Nodes --> Add a Node to the HACMP Cluster

Figure 15: Adding a node to the cluster

3. Add the second node to the cluster using the same smit screen.

At any point after configuring the nodes, you can optionally run a process to discover HACMP-related information, which may help in completing the configuration.

isvp14_ora> smitty hacmp --> Extended Configuration --> Discover HACMP-related Information from configured nodes

Figure 16: Discovering HACMP-related information


Create cluster networks

To create cluster networks:

1. Add an IP Ethernet network and a non-IP serial diskhb network to the cluster. You will add the IP interfaces and disk devices in the following steps.

isvp14_ora> smitty hacmp --> Extended Configuration --> Extended Topology Configuration --> Configure HACMP Networks --> Add a Network to the HACMP Cluster --> select ether

Figure 17: Adding an IP network

isvp14_ora> smitty hacmp --> Extended Configuration --> Extended Topology Configuration --> Configure HACMP Networks --> Add a Network to the HACMP Cluster --> select diskhb


Figure 18: Adding a diskhb network

2. Add the IP interfaces to the IP network. Add the public address for en0, and then any private addresses using the same smit screen. Do this for both nodes.

isvp14_ora> smitty hacmp --> Extended Configuration --> Extended Topology Configuration --> Configure HACMP Communication Interfaces/Devices --> Add Communication Interfaces/Devices --> Add Pre-defined Communication Interfaces and Devices --> Communication Interfaces --> select the IP network

Figure 19: Adding an IP interface


3. Add the serial diskhb devices (pair) for disk heartbeating. Add the shared disk device for the first node, and then add the same shared disk device as it is referenced on the second node.

isvp14_ora> smitty hacmp --> Extended Configuration --> Extended Topology Configuration --> Configure HACMP Communication Interfaces/Devices --> Add Communication Devices

Figure 20: Adding a diskhb device

4. Add a persistent IP address for each node.

isvp14_ora> smitty hacmp --> Extended Configuration --> Extended Topology Configuration --> Configure HACMP Persistent Node IP Label/Addresses

Figure 21: Adding persistent IP address


Create resources and resource groups

To create resources and resource groups:

1. Create an application server resource. The server name field is just a tag to reference this application, not a hostname field.

isvp14_ora> smitty hacmp --> Extended Configuration --> Extended Resource Configuration --> HACMP Extended Resources Configuration --> Configure HACMP Application Servers --> Add an Application Server

Figure 22: Adding application server resource

2. Add the service IP address resource. The service IP should be the address that users have set up to access the application. As the service IP will need to move to a failover node, it needs to be part of the resource group.

isvp14_ora> smitty hacmp --> Extended Configuration --> Extended Resource Configuration --> HACMP Extended Resources Configuration --> Configure HACMP Service IP Labels/Addresses --> Add a Service IP Label/Address --> select Configurable on Multiple Nodes --> select the IP network


Figure 23: Adding a service IP address resource

3. Create a resource group for each group of applications and associated resources that will run on the cluster, and define the policies. Resources will be added to the groups in the next step. In many cases, a single resource group will be defined for all highly available resources on the node.

isvp14_ora> smitty hacmp --> Extended Configuration --> HACMP Extended Resource Group Configuration --> Add a Resource Group

Figure 24: Creating a resource group

4. Add the volume groups, application servers, and service IPs to the appropriate resource group.


isvp14_ora> smitty hacmp --> Extended Configuration --> HACMP Extended Resource Group Configuration --> Change/Show Resources and Attributes for a Resource Group

Figure 25: Adding resources to a resource group

Verify and synchronize

After you have completely defined the cluster, you must verify it and synchronize (update) it on the other node. These functions are completed in a single step.

isvp14_ora> smitty hacmp --> Extended Configuration --> Extended Verification and Synchronization

Figure 26: Verifying and synchronizing the cluster
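After a successful synchronization, you can sanity-check the definitions from either node with the cluster utilities already added to PATH earlier; output formats vary by PowerHA version:

isvp14_ora> cltopinfo    # summarizes the cluster, its nodes, and its networks
isvp14_ora> clRGinfo     # shows resource group state once cluster services are running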


Testing the two-node cluster

You can manually test moving a resource group from one node to another node. First, start the cluster services.

isvp14_ora> smitty hacmp --> System Management (C-SPOC) --> HACMP Services --> Start Cluster Services

Figure 27: Starting PowerHA services

After PowerHA services are started successfully, you can attempt to manually failover a resource group to the other node.

isvp14_ora> smitty hacmp --> System Management (C-SPOC) --> Resource Group and Applications --> Move a Resource Group to Another Node/Site
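After the move completes, a quick way to confirm where the resource group is online is clRGinfo; here ha14rg is a hypothetical resource group name:

isvp14_ora> clRGinfo ha14rg   # the group should show ONLINE on the takeover node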

The cluster test tool provided with PowerHA can be very useful in automatically simulating the failure of particular resources or the failure of an entire node, but is not valid in all configurations.

isvp14_ora> smitty hacmp --> Extended Configuration --> HACMP Cluster Test Tool

For detailed information about the cluster test tool, refer to section 6.5 of the PowerHA for AIX Cookbook, in IBM Redbooks: ibm.com/redbooks/pdfs/sg247739.pdf


Summary

Business-critical application SLAs drive requirements for high availability, fault tolerance, and disaster recovery. PowerHA SystemMirror ensures that the failure of any component of the solution, whether hardware, software, or system management, does not cause the application and its data to become permanently unavailable to the end user.

PowerHA requires shared SAN storage for application data volumes, and thanks to its virtualization capabilities and easy-to-use GUI, the Storwize V7000 system can be integrated into a high-availability environment very quickly. The following Storwize V7000 features are particularly important in a highly available environment:

• Storage virtualization allows pools of storage to be configured across physically-separate disk systems.

• Thin provisioning efficiently uses only the amount of storage necessary at a given time.
• Easy Tier automatically provides efficient use of high-performing, more expensive SSD technology.

Detailed network planning for a PowerHA cluster is a complex but very important and necessary step toward the success of your high-availability environment.

Almost any application that can be started and stopped with a script can be managed by a PowerHA resource group.

The IBM Storwize V7000's ease of management and enterprise-class features make it an excellent choice to provide shared storage to business-critical applications in mid-range environments that must remain highly available.


Resources

These websites provide useful references to supplement the information contained in this paper:

• System Storage on IBM PartnerWorld® ibm.com/partnerworld/wps/pub/overview/B8S00

• IBM Systems on IBM PartnerWorld ibm.com/partnerworld/systems/

• IBM Publications Center www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi?CTY=US

• PowerHA for AIX Cookbook, from IBM Redbooks at: ibm.com/redbooks/pdfs/sg247739.pdf

• Exploiting IBM PowerHA SystemMirror Enterprise Edition, from IBM Redbooks at: ibm.com/redbooks/pdfs/sg247841.pdf

• Introduction to PowerHA ibm.com/developerworks/aix/library/au-powerhaintro/index.html?ca=dgr-lnxw06POWER-HACMPdth-AIX

• Overview of the IBM Storwize V7000, from IBM Redbooks at: ibm.com/redbooks/redpieces/pdfs/sg247938.pdf

• IBM developerWorks® ibm.com/developerWorks

• Power Systems InfoCenter http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/iphat/iphbllparwithhmcp6.htm

About the author

Zane Russell is a consultant in the IBM Systems and Technology Group ISV Enablement organization. He has more than 10 years of experience working with IBM enterprise storage products.


Trademarks and special notices

© Copyright IBM Corporation 2011. All rights reserved.

References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

SET and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC.

DIA Data Integrity Assurance®, Storewiz®, Storwize®, and the Storwize® logo are trademarks or registered trademarks of Storwize, Inc., an IBM Company.

Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the full text of the specific Statement of Direction.


Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Photographs shown are of engineering prototypes. Changes may be incorporated in production models.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.