WP102389d

Three Scenarios for Migrating Compute Grid 8.0 to WebSphere 9.0 on Linux

Version Date: April 26, 2017

Douglas MacIntosh and Jeff Dutton
IBM Software Group, Application and Integration Middleware Software

© 2017, IBM Corporation. WP102389 at ibm.com/support/techdocs


Table of Contents

1 Overview
  1.1 Compute Grid Migration Scenarios
    1.1.1 Scenario 1 – Migration of Existing Nodes
    1.1.2 Scenario 2 – Migration through New Node Creation
    1.1.3 Scenario 3 – Migration through New Cell Creation and Redeployment
  1.2 Choosing the Best Scenario for Your Migration
  1.3 Starting and Target Builds
  1.4 The Compute Grid Configuration Used to Test the Migration Process
  1.5 Starting Topology
  1.6 Intermediate Topology
    1.6.1 Scenario 1
    1.6.2 Scenario 2
    1.6.3 Scenario 3
  1.7 Target Topology
    1.7.1 Scenario 1
    1.7.2 Scenario 2
    1.7.3 Scenario 3
  1.8 The Migration Process
2 Prepare Your Environments for WAS9.0 (All Scenarios)
  2.1 Install (Upgrade) the Installation Manager
  2.2 Create the WAS9.0 Product File System
  2.3 Locate the WebSphere Customization Toolbox
  2.4 Backup the Cell
  2.5 Rollback Procedure in Case a Migration Issue is Encountered
3 Migrate the Deployment Manager (Scenarios 1 and 2)
  3.1 Add STATUS_LISTENER_ADDRESS Port (Optional)
  3.2 Use the CMT to Migrate the Deployment Manager's Profile to WAS9.0
  3.3 Required Outage
  3.4 Start the WAS9.0 Deployment Manager
  3.5 Restart the WCG8.0 Node Agents and Compute Grid Servers
  3.6 Mixed Mode Exceptions and Errors
4 Scenario 1 Migration Process
  4.1 Mixed Mode Scheduler Constraint
  4.2 Migrate the First Node to WAS9.0
  4.3 Migrate SPI Property Files (if using the PJM)
  4.4 Start the WAS9 Scheduler and Verify Batch Operation
  4.5 Migrate the Remaining Nodes
5 Scenario 2 Migration Process
  5.1 Create WAS9.0 Nodes
  5.2 Create WAS9.0 Scheduler and Endpoint Static Clusters
  5.3 Create and Configure WAS9.0 JDBC Providers and JDBC Data Sources
    5.3.1 Create New JDBC Providers and Data Sources
    5.3.2 Configure PGC Environment Variables
  5.4 Migrate SPI Property Files (if using the PJM)
  5.5 Configure the Scheduler
    5.5.1 Configure the Scheduler Hosted By Attribute
    5.5.2 Set Security Roles
    5.5.3 Verify the Rest of the Scheduler's Configuration
  5.6 Configure Java WSGrid to Run on WAS9.0 Scheduler (if installed)
  5.7 Start the Scheduler and Verify Mixed Mode Batch Operation
    5.7.1 Verify WAS9.0 Scheduler with WCG8.0 Endpoints
    5.7.2 Verify WAS9.0 Scheduler with WCG8.0 and WAS9.0 Endpoints
    5.7.3 Possible EJBConfigurationException and Work Around
  5.8 The Next Step
6 Scenario 3 Migration Process
  6.1 Database Requirements for the Transition from WCG8.0 to WAS9.0
  6.2 Create the "Test" Database
  6.3 Create the WAS9.0 Target Cell and Verify Cell Operation
  6.4 Make the Transition from the WCG8.0 Cell to the WAS9.0 Cell
    6.4.1 Point the WAS9.0 Cell to the Active Database
    6.4.2 Update the IP Sprayer
    6.4.3 Verify WAS9.0 Cell Operation
  6.5 Complete the Migration


1 Overview

1.1 Compute Grid Migration Scenarios

There are three recommended scenarios for migrating a WebSphere Compute Grid 8.0.0.x (WCG8.0) cell to WebSphere 9.0.x (WAS9.0). Two of the scenarios are actual migrations, while the third scenario involves building a new cell and redeploying the WCG8.0 applications. In this paper, we discuss the migration process for all three scenarios on Linux.

1.1.1 Scenario 1 – Migration of Existing Nodes

The first scenario involves the migration of the Deployment Manager, followed by the sequential migration of existing WCG8.0 nodes. Typically, nodes are migrated one at a time, or a few at a time, depending on migration requirements (e.g., high availability).

After the first node is migrated, the WAS9.0 scheduler is configured and started. Note that all WCG8.0 schedulers must be stopped prior to starting the WAS9.0 scheduler. Also, note that a WCG8.0 scheduler cannot be restarted after the first WAS9.0 scheduler is started. This constraint should not matter since the WAS9.0 scheduler can dispatch batch jobs to both WCG8.0 and WAS9.0 endpoints (mixed mode).

The migration is complete once the last WCG8.0 node is migrated to WAS9.0.

This migration scenario is intended to be done in a relatively short period. Once the operation of the WAS9.0 scheduler and endpoints on the first migrated node is verified, the remaining nodes are migrated as quickly as possible. Although running in mixed mode is supported for Scenario 1, it is not recommended for extended periods of time.

Note that the granularity of the migration is at the node level in Scenario 1. You will observe that in Scenario 2 the granularity of the migration is at the application level.

1.1.2 Scenario 2 – Migration through New Node Creation

The second migration scenario does not involve the migration of the existing WCG8.0 nodes. Instead, new WAS9.0 nodes are created for the WAS9.0 schedulers and endpoints. In this scenario, the Deployment Manager is migrated, new WAS9.0 nodes and clusters are created, the WAS9.0 scheduler is configured, all WCG8.0 schedulers are stopped, and lastly, the WAS9.0 schedulers are started.

Note that the WAS9.0 scheduler can dispatch jobs to both WCG8.0 and WAS9.0 endpoints, which is commonly referred to as mixed mode. Over time, the applications are migrated from the WCG8.0 endpoints to the new WAS9.0 endpoints. Or possibly, a WCG8.0 application is not migrated, but replaced altogether by a new WAS9.0 application.

When the last WCG8.0 application has been migrated, or replaced, the WCG8.0 nodes can be removed from the Cell’s configuration. At this point the migration is considered complete.

While Scenario 1 should be done in a relatively short period, Scenario 2 is better suited for situations where the cell will be in mixed mode for an extended period. The reason Scenario 2 is better suited for mixed mode is that all clusters in the cell remain homogeneous. Having clusters whose members are all at the same build level is considered a best practice, while mixed mode clusters are not.

Having distinct WCG8.0 and WAS9.0 clusters allows the applications to be migrated one at a time. Hence, the granularity of the Scenario 2 migration is at the application level, not at the node level as in Scenario 1. Being able to migrate applications at different times may better satisfy migration requirements and provides added flexibility that Scenario 1 does not.

1.1.3 Scenario 3 – Migration through New Cell Creation and Redeployment

The third scenario is not technically a migration, but it is a process that achieves the same end result.

In this scenario, a new WAS9.0 cell is constructed with the existing WCG8.0 applications already deployed. The WAS9.0 cell also points to test databases that have the same structure as the current WCG8.0 databases. After the WAS9.0 cell is thoroughly tested, there is a short outage that allows the existing WCG8.0 cell to be replaced by the WAS9.0 cell.

During the outage, the WAS9.0 cell has its data sources reconfigured to point to the WCG8.0 databases, and internal networking changes are made to route IP traffic away from the old cell and to the new cell.

1.2 Choosing the Best Scenario for Your Migration

Choosing the best scenario is the most important step of the migration process. Careful consideration needs to be put into the decision to make sure you choose the best path for your needs. The table below compares the three migration scenarios and summarizes some factors that may help you make your decision.

Migration Comparison Information

                           Scenario 1                 Scenario 2                 Scenario 3
                           (Migrate Existing Nodes)   (Create New Nodes)         (Create New Cell)

Granularity of migration   Node level                 Application level          Cell level

Duration of migration      Short period of time.      Extended period of time.   The transition from
                           Nodes are migrated as      Gives time to migrate      WCG8.0 to WAS9.0 is done
                           quickly as possible.       WCG8.0 apps over an        quickly. The creation and
                                                      extended period.           testing of the target
                                                                                 WAS9.0 cell are done over
                                                                                 an extended period.

Homogeneous clusters       Yes                        No                         Yes
during migration

Required outages           1) Migration of the DMgr.  1) Migration of the DMgr.  1) The reconfiguration of
                           2) Migration of the first  2) The configuration of    the WAS9.0 data sources
                           scheduler node.            the WAS9.0 scheduler.      and of the IP Sprayer.

Table 1: A Comparison of the Three Migration Scenarios


Lastly, do not let the amount of text dedicated to Scenarios 1 and 2 influence your decision. From a migration perspective, Scenario 3 requires very little detail in this paper compared to the other two scenarios, but that does not imply it requires far less work. What this paper does not address for Scenario 3 is the effort involved in building the WAS9.0 cell from scratch, deploying the existing WCG8.0 applications, and verifying cell and application operation. That effort is not required for Scenarios 1 and 2.

1.3 Starting and Target Builds

The migration scenarios in this paper were tested using the following starting point and target build levels, plus one WCG8.0.0.5 APAR:

Starting Product File Systems: WAS8.0.0.12 + WCG8.0.0.5 + APAR PI53923

Target Product File System: WAS9.0.0.1

When performing your migration, we strongly recommend you start the migration process using the latest version of WCG8.0.0.x and move to the most current version of WAS9.0.x.

The required WCG8.0.0.5 APAR will be included in WCG8.0.0.6 and all subsequent fix packs. Contact Compute Grid L2 Support for the required iFixes if you are not migrating from WCG8.0.0.6 (or later) to WAS9.0.0.1 (or later).

1.4 The Compute Grid Configuration Used to Test the Migration Process

The migration scenarios documented in this paper are based on actual migrations that were performed in the lab using a robust Compute Grid configuration. These cells have the following Compute Grid attributes and requirements:

• Java WSGrid is installed.

• Job schedules created in WCG8.0 must migrate directly to WAS9.0 and continue to run without modification.

• Existing WCG8.0 xJCL must work with the WAS9.0 scheduler and endpoints.

• The WAS9.0 scheduler must be able to dispatch jobs to WAS9.0 and WCG8.0 endpoints (mixed mode) for Scenarios 1 and 2. For Scenario 3, we have two homogeneous cells where all servers are at one level or the other. Neither cell runs in mixed mode.



1.5 Starting Topology

The three migration scenarios presented in this paper have the same starting point topology. We performed each scenario using our BOSS9911 test cell. The starting point topology for the BOSS9911 cell is depicted in Figure 1 below.

Figure 1: Starting Cell Topology for all Migration Scenarios

The BOSS9911 cell spans two nodes. There is a scheduler cluster (Scheduler) and two grid endpoints clusters (EndPointA and EndPointB), where each cluster spans both nodes. There are four batch applications deployed across the two endpoint clusters.

The following color conventions are used for application types in the topology diagrams throughout the paper.

Figure 2: Application Types


In the starting point topology diagram, two of the applications are traditional batch applications (compute-intensive or transactional), while the other two are transactional batch applications that utilize the Parallel Job Manager (PJM) and a Shared Lib SPI.

1.6 Intermediate Topology

There are significant differences between the mixed mode topologies for Scenarios 1 and 2. These differences will be discussed in the first two subsections. For Scenario 3, there is no mixed mode operation, but there is a transition between two homogeneous cells that needs to be understood. This transition mode topology will also be discussed in the last subsection below.

1.6.1 Scenario 1

The mixed mode topology for Scenario 1 is given in Figure 3. This figure depicts the cell after having the Deployment Manager and the first node migrated to WAS9.0. The red “X” over the WCG8.0 scheduler indicates that only the WAS9.0 scheduler is permitted to run while in mixed mode.

Figure 3: Mixed Mode Cell Topology for Migration Scenario 1


1.6.2 Scenario 2

The mixed mode topology for Scenario 2 has some flexibility based on the intended use of the cell during the migration process. For example, the mixed mode topology shown in Figure 4 below may be well suited for a cell used in a Test environment. In this mixed mode topology, the existing applications continue to run on the WCG8.0 clusters, while new applications are developed and tested on the WAS9.0 nodes.

When the scheduler’s cluster is in mixed mode, only WAS9.0 schedulers are permitted to run. Starting a WCG8.0 scheduler while in mixed mode is an unsupported configuration and the WCG8.0 scheduler will not function correctly.


Figure 4: A Mixed Mode Cell Topology for Migration Scenario 2

Figure 5 below depicts a possible Scenario 2 migration of a Production cell. In this case, all WCG8.0 applications are deployed across both the WCG8.0 and WAS9.0 clusters. Furthermore, new WAS9.0 applications have been deployed to WAS9.0 clusters as well. At this point, the migration is almost complete. The next step would be to stop the WCG8.0 clusters and remove the WCG8.0 nodes.


Figure 5: Another Mixed Mode Cell Topology for Migration Scenario 2

1.6.3 Scenario 3

Thus far in the paper, we have defined mixed mode to mean that a WAS9.0 scheduler can dispatch jobs to both WCG8.0 and WAS9.0 endpoints. This transition state exists in both Scenarios 1 and 2. In Scenario 3, however, there is no mixed mode state, but there is an extended transition state that exists between the starting point (i.e., a WCG8.0 cell) and the target state (i.e., a WAS9.0 cell).

This transition state is when the WAS9.0 cell is being built, configured, and tested with respect to the deployment of existing WCG8.0 applications, along with the testing of new WAS9.0 applications. Figure 6 below depicts this transition state.

Figure 6: Transition Topology for Migration Scenario 3

The BOSS9911 Cell-02 is the WCG8.0 cell that needs to be migrated. The Compute Grid database, as well as the application databases, resides in the Active Database. The IP Sprayer handles all incoming requests and currently routes them to the WCG8.0 cell. Note that the IP Sprayer comprises whatever components deliver batch job traffic to the cell, such as a hardware device, an HTTP server, or a proxy server.

The BOSS9911 Cell-03 is the WAS9.0 cell that is being built and will ultimately replace the WCG8.0 cell. Note that the WAS9.0 cell under development has its own test database and only processes test jobs.


1.7 Target Topology

In the first two migration scenarios, the migration is complete when no WCG8.0 nodes exist in the cell. For Scenario 3, the target topology is reached when the WCG8.0 cell is decommissioned.

1.7.1 Scenario 1

The target topology for Scenario 1 is shown in Figure 7 below. Note that the apps that were running at the start of the migration are now deployed on the WAS9.0 clusters. Furthermore, new WAS9.0 apps have been deployed to the WAS9.0 clusters as well.

Figure 7: Target Cell Topology for Migration Scenario 1


1.7.2 Scenario 2

The target topology for Scenario 2 is shown in Figure 8 and is nearly identical to that of Scenario 1. The only difference is that Scenario 1 has the original nodes and clusters as in the starting topology, while Scenario 2’s topology has new nodes and clusters.

Figure 8: Target Cell Topology for Migration Scenario 2

1.7.3 Scenario 3

The near-target topology for Scenario 3 is shown in Figure 9 below. Note that the IP Sprayer is now directing incoming job requests to the WAS9.0 cell (BOSS9911 Cell-03). Also note that the WAS9.0 cell is now using the Active Database. The WCG8.0 cell (BOSS9911 Cell-02) can be retired (deleted) once the migration to the WAS9.0 cell is deemed successful. Until that time, the WCG8.0 cell remains available to be reactivated in the event a problem arises with the migration to the WAS9.0 cell.


Figure 9: Near Target Topology for Migration Scenario 3

The figure below shows the actual target topology for Scenario 3. The migration has been deemed successful, the original WCG8.0 cell (BOSS9911 Cell-02) has been decommissioned, and the Test database has been deleted.


Figure 10: Target Topology for Migration Scenario 3

1.8 The Migration Process

Regardless of which migration scenario you choose, the steps in Chapter 2, Prepare Your Environments for WAS9.0 (All Scenarios), must be performed.

Migration Scenarios 1 and 2 have significant overlap, and where possible, the overlap is presented in chapters that can be referenced by both scenarios. Where the scenarios differ, there are specific chapters to handle those differences. Chapters 3 through 5 are specific to Scenarios 1 and 2.

Migration Scenario 3 has very little in common with Scenarios 1 and 2; hence, the bulk of Scenario 3 is in its own chapter (Chapter 6). The table below maps the flow of each migration scenario through the chapters.


                                                         Scenario
Chapter                                                  1    2    3

1 Overview                                               √    √    √
2 Prepare Your Environments for WAS9.0 (All Scenarios)   √    √    √
3 Migrate the Deployment Manager (Scenarios 1 and 2)     √    √
4 Scenario 1 Migration Process                           √
5 Scenario 2 Migration Process                                √
6 Scenario 3 Migration Process                                     √

Table 2: Scenario / Chapter Mappings


2 Prepare Your Environments for WAS9.0 (All Scenarios)

Preparation for the migration involves installing (or updating) IBM’s Installation Manager (IM) software on your workstation, then using IM to build the target WAS9.0 product file system. The WAS9.0 product file system contains the WebSphere Customization Toolbox (WCT), which will be used to migrate the WAS8.0/WCG8.0 Deployment Manager and Nodes to WAS9.0.

2.1 Install (Upgrade) the Installation Manager

The “Installation Manager and Packaging Utility download links” can be found at:

http://ibm.com/support/docview.wss?uid=swg27025142

Open the link above and select the version of Installation Manager you want to install or upgrade, then select the “Download document” for that version. The Download document contains a link to the “Installing overview” topic in the IBM Knowledge Center, as well as the link to the download for the desired platform.

Use this link and the outlined procedure now to install or update IM.
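If you prefer a non-GUI installation, the IM install kit also includes command-line installers. The following is a minimal sketch, assuming the kit has been extracted to a directory of your choosing; the path and log location shown are illustrative:

# Install (or upgrade) Installation Manager silently from the extracted kit
cd <IM_kit_extract_dir>
./installc -acceptLicense -log /tmp/im_install.log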

2.2 Create the WAS9.0 Product File System

The IBM Knowledge Center contains documentation on how to build a WAS9.0.x file system. The following three methods can be used:

• Install the product on distributed operating systems using the IM Graphical User Interface.

• Install the product on distributed operating systems using the command line utility.

• Install the product on distributed operating systems using response files.

The details for each of these methods can be found in the Knowledge Center using the following links respectively:

http://ibm.com/support/knowledgecenter/SSAW57_9.0/com.ibm.websphere.installation.nd.doc/ae/tins_installation_dist_gui.html

http://ibm.com/support/knowledgecenter/SSAW57_9.0/com.ibm.websphere.installation.nd.doc/ae/tins_installation_dist_cl.html

http://ibm.com/support/knowledgecenter/SSAW57_9.0/com.ibm.websphere.installation.nd.doc/ae/tins_installation_dist_silent.html

At this time, you should build the WAS9.0 file system. The WAS9.0 file system contains the migration scripts you will be using in subsequent sections. For the purpose of this paper, we will refer to the location of the WAS9.0.0.1 product as <WAS90_Product>.
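As an illustration of the command-line method, IM's imcl utility can build the WAS9.0 product file system directly. This is a hedged sketch: the package ID shown is the WAS Network Deployment v9.0 offering, and the repository location must be replaced with your entitled IBM repository or a local copy:

# Install the WAS9.0 ND product to <WAS90_Product> using imcl
<IM_Install>/eclipse/tools/imcl install com.ibm.websphere.ND.v90 \
    -repositories <your_WAS90_repository> \
    -installationDirectory <WAS90_Product> \
    -acceptLicense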


2.3 Locate the WebSphere Customization Toolbox

The WebSphere Customization Toolbox contains two tools: the Configuration Migration Tool (CMT) and the Profile Management Tool (PMT). We will use the Configuration Migration Tool to migrate the WCG8.0 Deployment Manager to WAS9.0 in Chapter 3 for both Scenarios 1 and 2. We will also use the CMT to migrate WCG8.0 Nodes to WAS9.0 in Chapter 4 for Scenario 1. The Profile Management Tool will be used in Chapter 5 when creating new WAS9.0 nodes.

The WCT software can be found under the <WAS90_Product>/bin/ProfileManagement directory. Run either the wct.sh or pmt.sh script to open the WCT, then select the corresponding tab to access the CMT or PMT.

2.4 Backup the Cell

It is important that the cell is completely backed up at this time so it can be restored if an issue is encountered during the migration process.

It is also highly recommended to back up the cell at various points during the migration, so those intermediate points can be restored instead of having to restore all the way back to the beginning. For example, we typically back up the cell after migrating the Deployment Manager and after migrating each node in Scenario 1. For Scenario 2, we back up after migrating the Deployment Manager, after creating the new WAS9.0 nodes, and after configuring the WAS9.0 scheduler.
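One way to take these checkpoints is the backupConfig.sh utility that ships with WebSphere. A minimal sketch, run from the Deployment Manager profile; the archive path is arbitrary, and -nostop skips the default behavior of stopping the servers first:

# Back up the cell configuration to a zip archive without stopping running servers
<WAS8.0:USER_INSTALL_ROOT>/bin/backupConfig.sh /backups/BOSS9911_preMigration.zip -nostop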

2.5 Rollback Procedure in Case a Migration Issue is Encountered

The URL below is from the IBM Knowledge Center and discusses "Rolling back a WebSphere Application Server, Network Deployment cell".

http://www.ibm.com/support/knowledgecenter/SS7K4U_9.0/com.ibm.websphere.migration.nd.doc/ae/tmig_rollbackdm.html

The instructions in the Knowledge Center for rolling back the cell are relatively straightforward. We tested the rollback procedure for both scenarios and did not encounter any issues.
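If you need to roll back to a checkpoint taken with backupConfig.sh, the counterpart utility is restoreConfig.sh. A minimal sketch using the archive from Section 2.4:

# Restore the cell configuration from the pre-migration archive
<WAS8.0:USER_INSTALL_ROOT>/bin/restoreConfig.sh /backups/BOSS9911_preMigration.zip -nostop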


3 Migrate the Deployment Manager (Scenarios 1 and 2)

3.1 Add STATUS_LISTENER_ADDRESS Port (Optional)

The STATUS_LISTENER_ADDRESS port was first added to the Deployment Manager’s profile in WebSphere 8.5. This port is used by Job Managers and Deployment Managers for status updates coming from registered nodes. This port is not used by Compute Grid functionality.

The STATUS_LISTENER_ADDRESS port is assigned a default value when it is automatically added to the WAS9.0 Deployment Manager. The Configuration Migration Tool does not provide the ability to override the default value when creating the migration jobs. If the default value is not acceptable in your environment, you can add the STATUS_LISTENER_ADDRESS port with an appropriate value to the WAS8 Deployment Manager prior to migrating. Or, update the port after you bring up the WAS9.0 Deployment Manager later in this chapter.
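If you choose to update the port after the WAS9.0 Deployment Manager is up, the AdminTask.modifyServerPort command is one way to do it. A hedged wsadmin sketch; the node name and port value are placeholders for your environment:

# Change the STATUS_LISTENER_ADDRESS port on the migrated Deployment Manager
<WAS9.0:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython \
    -c "AdminTask.modifyServerPort('dmgr', '[-nodeName <dmgrNode> -endPointName STATUS_LISTENER_ADDRESS -port <newPort> -modifyShared true]')" \
    -c "AdminConfig.save()"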

3.2 Use the CMT to Migrate the Deployment Manager’s Profile to WAS9.0

The Configuration Migration Tool is used to migrate the Deployment Manager to WAS9.0 on distributed platforms. To access the CMT, open the WebSphere Customization Toolbox and select the CMT tab. The WCT is started by running the <WAS90_Product>/bin/ProfileManagement/wct.sh script.

To initiate the migration of the Deployment Manager, click on the “New” button in the CMT panel. To perform the migration, follow the instructions provided by the migration wizard.

One option during the migration is to "disable the source deployment manager after migration." This option is shown in Figure 11 below. By default, this option is selected, and the WAS8.0 Deployment Manager will be stopped and disabled during the migration process. Disabling the WAS8.0 Deployment Manager means it cannot be started, intentionally or by accident, from this point forward.

Figure 11: Disabling the Deployment Manager After Migration

We recommend disabling the source Deployment Manager when migrating to WAS9.0. However, if you choose not to disable the source Deployment Manager at this time, you will have to stop it manually in Section 3.3.
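For reference, the CMT is a graphical front end to the WASPreUpgrade and WASPostUpgrade migration commands, which can also be run directly if you prefer to script the migration. A hedged sketch; the backup directory and profile names are placeholders, and the target WAS9.0 profile must already exist:

# Snapshot the WAS8.0 Deployment Manager configuration
<WAS90_Product>/bin/WASPreUpgrade.sh /migration/dmgrBackup <WAS80_Product>

# Apply the snapshot to the new WAS9.0 Deployment Manager profile
<WAS90_Product>/bin/WASPostUpgrade.sh /migration/dmgrBackup \
    -profileName <WAS90_DmgrProfile> -oldProfile <WAS80_DmgrProfile>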


3.3 Required Outage

Before the WAS9 Deployment Manager is started for the first time, we need to stop all Compute Grid schedulers and endpoint servers, their node agents, and the WAS8.0 Deployment Manager if not already stopped. Note that the current migration process does not support node agents and Compute Grid servers running while the DMgr's configuration is migrated to WAS9.

If the node agents and Compute Grid servers were not stopped prior to starting the WAS9 Deployment Manager for the first time, the servers that were running will no longer be able to dispatch and process new Compute Grid jobs. If this situation occurs, simply restart the node agents and Compute Grid servers at this time.

3.4 Start the WAS9.0 Deployment Manager

At this point, start the WAS9.0 Deployment Manager.

3.5 Restart the WCG8.0 Node Agents and Compute Grid Servers

At this point, the WAS9.0 Deployment Manager is up and the WCG8.0 configuration has been restored. Now start the WAS8.0/WCG8.0 node agents, then start the scheduler and endpoint servers. Once the cell is up, you can resume using it.
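From the command line, this amounts to starting the node agent on each node and then the Compute Grid servers. A minimal sketch for one WCG8.0 node; the server names are placeholders for your cluster members:

# Start the WCG8.0 node agent, then the scheduler and endpoint members on this node
<WAS8.0:USER_INSTALL_ROOT>/bin/startNode.sh
<WAS8.0:USER_INSTALL_ROOT>/bin/startServer.sh <schedulerMember>
<WAS8.0:USER_INSTALL_ROOT>/bin/startServer.sh <endpointMember>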

3.6 Mixed Mode Exceptions and Errors

After migrating the Deployment Manager to WAS9.0, the cell is in mixed mode. The cell is in mixed mode since the node agents, schedulers, and endpoint servers are still at WAS8.0.0/WCG8.0.0. The cell will remain in mixed mode until the last node is migrated to WAS9.0.

While in mixed mode, the WCG8.0.0 Schedulers will have two instances of a java.lang.ClassNotFoundException in their SystemOut.log at startup. These exceptions can be ignored and will go away once the entire cell has been migrated:

• SRVE8052E: Logging ClassNotFoundException java.lang.ClassNotFoundException: class java.lang.ClassNotFoundException: org.eclipse.equinox.servletbridge.BridgeServlet

• SRVE0266E: Error occured while initializing servlets: {0} javax.servlet.UnavailableException: SRVE0200E: Servlet [org.eclipse.equinox.servletbridge.BridgeServlet]: Could not find required class - class java.lang.ClassNotFoundException: org.eclipse.equinox.servletbridge.BridgeServlet


4 Scenario 1 Migration Process

Figure 12 below shows the starting point of the Scenario 1 migration process. At this point, the Deployment Manager has been migrated to WAS9.0.0.1 and there are two nodes (AppSrvNode1 and AppSrvNode2) that remain at WCG8.0.0.5/WAS8.0.0.12.

Figure 12: Migrate the First Node to WAS9.0

The AppSrvNode1 node will be migrated first, thus putting the cell into mixed mode with respect to schedulers and endpoints. When in mixed mode, a WAS9.0 scheduler is started on the migrated node (AppSrvNode1) and can dispatch work across WCG8.0 and WAS9.0 endpoints running in the AppSrvNode2 and AppSrvNode1 nodes respectively. While in mixed mode, you can verify WAS9.0 behavior and proceed with the rest of the migration one node at a time. In our cell, we only have two nodes to migrate; hence the AppSrvNode2 node will be migrated next. Once the last node is migrated, the cell is no longer in mixed mode and steps can be taken to remove the remaining WCG8.0 dependencies.

4.1 Mixed Mode Scheduler Constraint

When migrating a Compute Grid cell to WAS9, there is a constraint that does not allow Schedulers to run in mixed mode. Once the cell has been configured for a WAS9 Scheduler, only WAS9 schedulers are permitted to run. The simultaneous running of WAS9 and WAS8/WCG8 schedulers is not supported. If a WAS8/WCG8 scheduler is started by mistake, simply shut it down as soon as possible. Jobs that are submitted to the WAS8/WCG8 scheduler may fail and will not restart.


Note that this mixed mode constraint does not apply to Compute Grid endpoints. A WAS9 scheduler can dispatch work to both WAS8/WCG8 and WAS9 endpoints.

4.2 Migrate the First Node to WAS9.0

Before migrating the first node, stop all servers and the node agent running in the node. Next, use the Configuration Migration Tool (CMT) to migrate the node (AppSrvNode1). After the CMT migration process is complete, verify the node agent starts without issue. Do not start any WAS9.0 scheduler or endpoint servers at this time.
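As a reference, here is a minimal sketch of the stop sequence performed before migrating the node; the server names are placeholders for the cluster members on that node:

# Stop the Compute Grid servers in the node, then the node agent itself
<WAS8.0:USER_INSTALL_ROOT>/bin/stopServer.sh <schedulerMember> -user <ID> -password <PW>
<WAS8.0:USER_INSTALL_ROOT>/bin/stopServer.sh <endpointMember> -user <ID> -password <PW>
<WAS8.0:USER_INSTALL_ROOT>/bin/stopNode.sh -user <ID> -password <PW>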

For High Availability (HA) environments, special attention must be given to this step. The migration of the first node can become a “single point of failure” if not properly addressed by your migration process.

For example, the topology shown in Figure 12 has an HA vulnerability immediately following the migration of the first node, and the problem remains until the second node is migrated. The issue is that there is a single WAS9.0 scheduler available to dispatch jobs to the WCG8.0 and WAS9.0 endpoints. If this scheduler fails, the dispatching of jobs will cease until the scheduler can be restarted. Remember, it is not possible for WAS9.0 and WCG8.0 schedulers to be up at the same time.

One possible workaround is to have two schedulers in the first node. Or, it may be acceptable to accept the risk and run in this state if the second node is migrated as soon as possible after the first.

4.3 Migrate SPI Property Files (if using the PJM)

This section can be skipped if your cell does not use Compute Grid’s Parallel Job Manager.

Parallel job execution invokes a System Programming Interface (SPI), which is an extension to the execution environment. The SPI is configured using the xd.spi.properties file located in the <WAS8.0:USER_INSTALL_ROOT>/properties directory. In addition to the xd.spi.properties file, other property files may be used to configure PJM applications.



When a node is migrated, the SPI property files must be manually copied to the node's target file system (<WAS9.0:USER_INSTALL_ROOT>/properties); the migration process does not copy them automatically.
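A minimal copy sketch for one node; include any additional PJM property files your applications use:

# Copy the SPI configuration from the old profile to the migrated WAS9.0 profile
cp <WAS8.0:USER_INSTALL_ROOT>/properties/xd.spi.properties \
   <WAS9.0:USER_INSTALL_ROOT>/properties/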

4.4 Start the WAS9 Scheduler and Verify Batch Operation

Start the WAS9.0 scheduler and verify its operation. Verification is easier if the WCG8.0 endpoints are started and tested while the WAS9.0 endpoints remain down. Once the WCG8.0 endpoints are verified, start the WAS9.0 endpoints and verify their operation. Lastly, verify operation with both the WCG8.0 and WAS9.0 endpoints running.
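One way to drive this verification is Compute Grid's lrcmd command-line client, submitting a known-good job to the WAS9.0 scheduler. A hedged sketch; the xJCL path, host, and port are placeholders for your environment:

# Submit a test job to the WAS9.0 scheduler and note the job ID it returns
<WAS9.0:USER_INSTALL_ROOT>/bin/lrcmd.sh -cmd=submit -xJCL=<path_to_test_xJCL> \
    -host=<schedulerHost> -port=<schedulerHttpPort> -userid=<ID> -password=<PW>

# Check the status of the submitted job
<WAS9.0:USER_INSTALL_ROOT>/bin/lrcmd.sh -cmd=status -jobid=<jobID> \
    -host=<schedulerHost> -port=<schedulerHttpPort> -userid=<ID> -password=<PW>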

4.5 Migrate the Remaining Nodes

The remaining nodes can be migrated one at a time, or several at once depending on migration requirements and high availability considerations. Each node being migrated will need to have the following steps performed:

• Migrate the node to WAS9.0 (section 4.2).

• Migrate SPI Property Files if using the PJM (section 4.3)

• Verify the operation of the WAS9.0 scheduler and endpoint on the migrated node.

Recall that when the first scheduler node was migrated, there was a required step to restore the scheduler’s configuration. That restoration was performed using cluster scope, hence it does not need to be repeated for each successive node migration.

Once the last node is migrated, the migration to WAS9.0 is complete.


5 Scenario 2 Migration Process

In this scenario, new WAS9.0 nodes and clusters are created in addition to the existing WCG8.0 nodes and clusters.

5.1 Create WAS9.0 Nodes

Figure 13 below shows the two WAS9.0 nodes that need to be created for our cell, along with the corresponding scheduler and endpoint clusters.

Figure 13: The new WAS9.0 Nodes and Clusters

The two new WAS9.0 nodes were built using the Profile Management Tool (PMT). This tool is found under the <WAS90_Product>/bin/ProfileManagement directory and can be started by running the pmt.sh script.
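The PMT is a front end to the manageprofiles command, so the new nodes can also be created and federated from the command line. A hedged sketch for one managed node; the profile, node, and host names are placeholders:

# Create a WAS9.0 managed profile and federate it into the cell via the DMgr
<WAS90_Product>/bin/manageprofiles.sh -create \
    -templatePath <WAS90_Product>/profileTemplates/managed \
    -profileName <newProfileName> -nodeName <newNodeName> -hostName <nodeHost> \
    -dmgrHost <dmgrHost> -dmgrPort <dmgrSoapPort> \
    -dmgrAdminUserName <ID> -dmgrAdminPassword <PW>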

At this point, the BOSS9911 cell has the following nodes:

Figure 14: WCG8.0 and WAS9.0 Nodes


5.2 Create WAS9.0 Scheduler and Endpoint Static Clusters

The next step is to create three WAS9.0 clusters spanning both new nodes created in the last section: one cluster for the WAS9.0 scheduler (Scheduler2) and two for the WAS9.0 endpoints (EndPointA2 and EndPointB2). Each cluster will have one server per node. Details for each cluster are shown in Figure 13.

These clusters can be built using the Administration Console or via wsadmin.sh scripting, as shown in the sketch after Figure 15. Regardless of which method you choose, make sure the server ports are set appropriately before moving on to the next section.

Figure 15: WAS8.0/WCG8.0 and WAS9.0 Clusters
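If you script the cluster creation, wsadmin's AdminTask cluster commands do the work. Below is a hedged sketch for the Scheduler2 cluster; the node and member names are placeholders, and the same pattern repeats for EndPointA2 and EndPointB2:

# Create the scheduler cluster and add one member on each new WAS9.0 node
<WAS9.0:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython \
    -c "AdminTask.createCluster('[-clusterConfig [-clusterName Scheduler2]]')" \
    -c "AdminTask.createClusterMember('[-clusterName Scheduler2 -memberConfig [-memberNode <node1> -memberName <member1>]]')" \
    -c "AdminTask.createClusterMember('[-clusterName Scheduler2 -memberConfig [-memberNode <node2> -memberName <member2>]]')" \
    -c "AdminConfig.save()"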

5.3 Create and Configure WAS9.0 JDBC Providers and JDBC Data Sources

5.3.1 Create New JDBC Providers and Data Sources

In our BOSS9911 cell configuration, the WAS8.0/WCG8.0 scheduler and endpoint data sources were created using an "Oracle JDBC Driver (XA)" provider. With respect to these existing data sources, it is important to note the following points:

• If a server's node is migrated from WAS8.0/WCG8.0 to WAS9.0, the migrated Compute Grid server can continue using the same data source it used prior to the migration.

• However, if a Compute Grid data source was created in a WAS8.0 environment, it cannot be used by a Compute Grid server residing in a newly created WAS9.0 node. Note that this restriction applies to Compute Grid data sources and not to data sources used by batch applications.


For this reason, new data sources (and providers) for the WAS9.0 scheduler and endpoint clusters must be created. The table below summarizes the naming convention information for the existing WAS8.0 data sources and for the new WAS9.0 data sources for our cell.

Cluster      WAS Version   Data Source Name         JNDI Name      Scoping              Comment

EndPointA    8.0           pgcDataSource            jdbc/pgc       Cell=wc6cell         Created in a WAS8.0
EndPointB    8.0           pgcDataSource            jdbc/pgc       Cell=wc6cell         environment; used for the
Scheduler    8.0           JobSchedulerDataSource   jdbc/lrsched   Cell=wc6cell         life of the WAS8.0 clusters.

EndPointA2   9.0           pgcDataSource            jdbc/pgc       Cluster=EndPointA2   Created in a WAS9.0
EndPointB2   9.0           pgcDataSource            jdbc/pgc       Cluster=EndPointB2   environment; used by the
Scheduler2   9.0           JobSchedulerDataSource   jdbc/lrsched   Cluster=Scheduler2   WAS9.0 clusters.

Table 3: Cell Data Sources

In this table, note that all JNDI names for the job scheduler data sources are the same, as are the JNDI names for the endpoint data sources. The WAS8.0/WCG8.0 servers use cell scoping when resolving the JNDI name, while the new WAS9.0 servers use cluster scoping. Cell-scope resolution locates the WAS8.0/WCG8.0 data sources, while cluster-level resolution returns the WAS9.0 data sources.

New WAS9.0 JDBC providers must be created before creating the WAS9.0 data sources. Note that the default name for the "Oracle JDBC Driver" provider is the same in WAS9.0 as in WAS8.0. To avoid confusion, make sure the existing WAS8.0 provider names are distinct from the new WAS9.0 provider names; otherwise, it will be difficult to distinguish between them in the Administration Console.

For this reason, we renamed all existing providers to include a "WAS8.0" suffix and used the default provider names for the new WAS9.0 JDBC providers. We chose to rename the WAS8.0 providers because they will be removed from the configuration once all applications are migrated to a WAS9.0 endpoint.


JDBC Provider Name                  Comment

Oracle JDBC Driver – WAS8.0         Created in the WAS8.0 cell and can only be used by
Oracle JDBC Driver (XA) – WAS8.0    WAS8.0 servers. Both providers have cell scope.

Oracle JDBC Driver                  Created in the WAS9.0 cell and can only be used by
Oracle JDBC Driver (XA)             WAS9.0 servers. Both providers have multiple
                                    instances, all at cluster scope.

Table 4: Oracle JDBC Driver Naming Conventions

If the applications deployed to the WAS8/WCG8 endpoints also have data sources, and you plan on deploying those apps to the WAS9 endpoints, then you will need to create new WAS9 data sources for these applications in the same manner as done above for the scheduler and endpoint data sources.
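For completeness, here is a hedged wsadmin sketch of creating one cluster-scoped Oracle XA provider and its pgc data source. The driver path, helper class, and JDBC URL are placeholders that must match your driver level and database:

# Hypothetical sketch: create a cluster-scoped provider and data source, then save
cat > /tmp/createWas9DataSource.py <<'EOF'
provider = AdminTask.createJDBCProvider('[-scope Cluster=EndPointA2 '
    '-databaseType Oracle -providerType "Oracle JDBC Driver (XA)" '
    '-implementationType "XA data source" -name "Oracle JDBC Driver (XA)" '
    '-classpath <pathToOracleJdbcJar>]')
AdminTask.createDatasource(provider, '[-name pgcDataSource -jndiName jdbc/pgc '
    '-dataStoreHelperClassName <oracleDataStoreHelperClass> '
    '-configureResourceProperties [[URL java.lang.String <jdbcUrl>]]]')
AdminConfig.save()
EOF
<WAS9.0:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -f /tmp/createWas9DataSource.py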

5.3.2 Configure PGC Environment Variables

After creating the WAS9.0 data sources, the next step is to update the WAS9.0 endpoint configurations so they can use the new data sources.

In WAS9.0, the job steps are POJOs (Plain Old Java Objects) for both compute-intensive and transactional batch applications. These POJOs are invoked directly, and WAS9.0 xJCL identifies them by class name. To support the class-name lookup, two additional environment variables need to be defined for each WAS9.0 endpoint cluster. First, create an instance of GRID_ENDPOINT_DATASOURCE with cluster scope; its value is the JNDI name of the corresponding data source created in the previous section (jdbc/pgc).

We also need to create an instance of GRID_ENDPOINT_DATABASE_SCHEMA with cluster scope and set its value to the schema used in the PGC database. For our cell, the value is SCOTT. For your cell, the value can be obtained from the following page in the WebSphere Integrated Solutions Console:

System administration -> Job Scheduler -> Database schema name

Figure 16 below shows the two pairs of PGC environment variables that have been created for the two WAS9.0 endpoints in our cell.

Figure 16: PGC Environment Variables for WAS9.0 Endpoint Clusters
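These variables can also be created with wsadmin's AdminTask.setVariable command. A hedged sketch for the EndPointA2 cluster (repeat for EndPointB2); the SCOTT schema value is specific to our cell:

# Define the two PGC variables at cluster scope, then save the configuration
<WAS9.0:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython \
    -c "AdminTask.setVariable('[-variableName GRID_ENDPOINT_DATASOURCE -variableValue jdbc/pgc -scope Cluster=EndPointA2]')" \
    -c "AdminTask.setVariable('[-variableName GRID_ENDPOINT_DATABASE_SCHEMA -variableValue SCOTT -scope Cluster=EndPointA2]')" \
    -c "AdminConfig.save()"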


5.4 Migrate SPI Property Files (if using the PJM)

This section can be skipped if your cell does not use Compute Grid’s Parallel Job Manager.

Parallel job execution invokes a System Programming Interface (SPI), which is an extension to the execution environment. The SPI is configured using the xd.spi.properties file located in a node's <USER_INSTALL_ROOT>/properties directory. In addition to the xd.spi.properties file, other property files may be used to configure PJM applications.

At this time, copy the SPI property files located under the <USER_INSTALL_ROOT>/properties directory of the existing WCG8.0 nodes to the newly created WAS9.0 nodes.

5.5 Configure the Scheduler

There are a few steps that need to be taken to configure the WAS9.0 scheduler.

5.5.1 Configure the Scheduler Hosted By Attribute

Configuring the "Scheduler hosted by" attribute involves four steps, which must be performed in the sequence indicated below. If you skip steps a) and b) and only perform steps c) and d), the scheduler configuration will not work.

a) Using the Administration Console, navigate to the System administration > Job Scheduler panel and set the "Scheduler hosted by" attribute to none.

b) Save and synchronize the nodes.

c) Set the "Scheduler hosted by" attribute to the WAS9.0 cluster designated for the scheduler. In our cell, this is Scheduler2.

d) Save and synchronize the nodes.

5.5.2 Set Security Roles

After the "Scheduler hosted by" attribute is configured, the next step is to map users and/or groups to the three Compute Grid roles (i.e., lrmonitor, lrsubmitter, and lradmin). This information is not retained from the previous scheduler configuration. These mappings can be set via the Administration Console as follows:

System administration > Job scheduler > Security role to user/group mapping

5.5.3 Verify the Rest of the Scheduler’s Configuration

After configuring the "Scheduler hosted by" attribute, verify that the "Database schema name" and "Data source JNDI name" attributes are correct. It is also a good idea to make sure the "WebSphere grid endpoints" panel is correct. If you update any of these values, make sure you save and synchronize the changes with the nodes.


5.6 Configure Java WSGrid to Run on WAS9.0 Scheduler (if installed)

This section can be skipped if your cell does not have Java WSGrid installed.

If WSGrid is installed, it will have to be uninstalled from the WCG8.0 scheduler and reinstalled on the WAS9.0 scheduler. The wsgridConfig.py script is available to assist with configuring WSGrid Java on distributed platforms.

To uninstall WSGrid from the WCG8.0 scheduler and reinstall on the WAS9.0 scheduler, change your directory to the WAS9.0 Deployment Manager’s <USER_INSTALL_ROOT>/bin directory and execute the following scripts:

If your WAS9 scheduler is configured to use a file store, then its location should be distinct from that of the WAS8/WCG8 file store. Otherwise, you could hit issues obtaining an exclusive lock when starting the WAS9 scheduler for the first time.

5.7 Start the Scheduler and Verify Mixed Mode Batch Operation

5.7.1 Verify WAS9.0 Scheduler with WCG8.0 Endpoints

At this point, the WAS9.0 scheduler has been configured and is ready to begin dispatching jobs to the WCG8.0 endpoint cluster. Start the WAS9.0 scheduler and the WCG8.0 endpoints and verify that batch processing is working correctly.
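One simple way to verify mixed-mode operation is to submit a sample job from the command line with lrcmd. The following is a sketch only; the xJCL path and the scheduler host and port are placeholders that must match your environment:

<WAS90_Product>/bin/lrcmd.sh -cmd=submit -xJCL=<path_to_sample_xJCL> -host=<scheduler_host> -port=<scheduler_http_port> -userid=<ID> -password=<PW>

The job should dispatch to the WCG8.0 endpoint cluster and complete normally, which you can confirm with -cmd=status -jobid=<jobID> or in the Job Management Console.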



5.7.2 Verify WAS9.0 Scheduler with WCG8.0 and WAS9.0 Endpoints

The WAS9.0 scheduler will also be able to dispatch jobs to a WAS9.0 endpoint once a batch application has been deployed to it. You can deploy an existing application to run on both WCG8.0 and WAS9.0 endpoints, or install a new application on the WAS9.0 endpoints.

The Administrative Console can be used to deploy an existing application to both WCG8.0 and WAS9.0 clusters using the “Applications > Enterprise Applications > [App] > Manage Modules” panel. This is demonstrated in Figure 17, which maps the pjmAppA_EJBs module to both the EndPointA (WCG8.0) and EndPointA2 (WAS9.0) clusters.

Figure 17: pjmApp_A Deployed to a WCG8.0 and WAS9.0 Endpoint Cluster

Note that if an existing application uses a Shared Library SPI on the WCG8.0 endpoint and is deployed to a WAS9.0 endpoint, then the WAS9.0 endpoint must be configured to have access to the existing shared library. The endpoint can be configured using the configCGSharedLib.py script found in the product file system. To do so, change your directory to the WAS9.0 Deployment Manager’s <USER_INSTALL_ROOT>/bin directory and execute the following script to create a shared library:

<WAS9.0:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP -host <host> -port <port> -user <ID> -password <PW> -f <WAS90_Product>/bin/configCGSharedLib.py -sharedLibraryName <nameOfSharedLib> -sharedLibraryPath <pathOfSharedLib>

Once an application is verified on the WAS9.0 cluster, it can be removed from the WCG8.0 endpoint.

Note that you should not deploy new applications to WCG8.0 endpoints after the Deployment Manager has been upgraded to WAS9.0. When running in mixed mode, deploy applications only to servers running at the latest level of WebSphere.


5.7.3 Possible EJBConfigurationException and Workaround

There is a known issue with EJB 3.0 entity beans that can generate the following exception when an application uses “Container Managed Persistence Commit Option A” in a clustered environment:

WSVR0068E: Attempt to start EnterpriseBean ..... failed with exception: com.ibm.ejs.container.EJBConfigurationException: Using Commit Option A with workload managed server is not supported.

If you encounter this issue, setting the following JVM custom property for each cluster member resolves the problem:

Application servers > [server] > Process definition > Java Virtual Machine > Custom properties > New

com.ibm.websphere.ejbcontainer.wlmAllowOptionAReadOnly=true
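The same custom property can also be set from wsadmin. The following is a minimal sketch using the setJVMSystemProperties AdminTask command; the node and server names are placeholders, and the command must be repeated for each cluster member:

AdminTask.setJVMSystemProperties(['-nodeName', '<nodeName>', '-serverName', '<serverName>', '-propertyName', 'com.ibm.websphere.ejbcontainer.wlmAllowOptionAReadOnly', '-propertyValue', 'true'])
AdminConfig.save()

Synchronize the nodes and restart the affected servers for the property to take effect.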

Note that we have seen this problem with the XDCGIVT sample application in the Scenario 2 migration process. However, we have not encountered this issue using the Scenario 1 migration process.

5.8 The Next Step

The cell will remain in a mixed mode state until all WCG8.0 batch applications have been either migrated to WAS9.0 clusters or replaced by new WAS9.0 applications. At that time, the migration process can be completed and the cell moved to a homogeneous WAS9.0 cell by removing all WAS8.0/WCG8.0 nodes.
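Removing a node follows the standard WebSphere procedure. As a sketch, run the following from each WCG8.0 node’s profile bin directory (the profile path is environment-specific):

<WCG8.0:USER_INSTALL_ROOT>/bin/removeNode.sh -username <ID> -password <PW>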



6 Scenario 3 Migration Process

The term “migration” can be used in many ways. We consider Scenarios 1 and 2 as migrations, since the original cell is still used when the migration process is complete. In this chapter, we discuss the Scenario 3 migration process, but note that in the strictest sense of the word, this is not a migration. Instead, it consists of building a new WAS9.0 cell from scratch and taking the necessary steps to seamlessly replace the original WCG8.0 cell with the new cell.

Note that new migration functionality has been added to WAS9.0 that supports “cloning” a cell. This functionality can be used to create a WAS9.0 clone of an existing cell on the same host or on a different remote host, and it can greatly simplify the process of building the new WAS9.0 cell for this migration scenario.
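As a sketch of how cloning works with the WAS9.0 migration tools (the backup directory and profile name below are hypothetical), you first capture the existing configuration with WASPreUpgrade and then create the clone with WASPostUpgrade using its clone option:

<WAS90_Product>/bin/WASPreUpgrade.sh /tmp/migrationBackup <WCG8.0_WAS_Install_Directory>
<WAS90_Product>/bin/WASPostUpgrade.sh /tmp/migrationBackup -profileName <newProfileName> -clone true

Consult the WAS9.0 migration documentation for the full set of options, since cloning a cell across hosts involves additional steps.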

Figure 18 below shows the WCG8.0 starting topology for Scenario 3 to the left of the dashed line and the WAS9.0 target topology to the right of the dashed line. These are two distinct cells that can reside on the same host or on separate hosts.

Figure 18: Starting and Target Topologies for Migration Scenario 3

In this chapter, we document how to make the transition from the WCG8.0 cell on the left to the WAS9.0 cell on the right as seamlessly as possible.


6.1 Database Requirements for the Transition from WCG8.0 to WAS9.0

The starting and target topologies shown in Figure 18 above refer to the same Active database. From the perspective of database consistency, it is very important that the source cell be upgraded to the most recent WCG8.0 fix pack level before performing the Scenario 3 migration. It is equally important that the target WAS9.0 cell be at the most recent WAS9.0 fix pack level to ensure database compatibility. If this is not done, you run the risk of an incompatible database between your WCG8.0 starting point and your WAS9.0 target.

6.2 Create the “Test” Database

Before the Scenario 3 migration process is complete, there is a transition period that requires the existence of two databases: the Active database and a Test database. The Active database is the one used by the WAS8.0/WCG8.0 cell prior to performing the Scenario 3 migration. The Test database is used by the WAS9.0 target cell while it is being built. Figure 19 below shows this transition topology and the role of each database.

Figure 19: Transition Topology for Migration Scenario 3

The Test database should be built using the DDL provided with the WAS9.0 product as the minimum requirement. Your database administrator will most likely adapt this DDL to meet in-house requirements. The resulting DDL should be compared with the DDL used to build the Active database to make sure they are consistent (compatible).
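As an illustration for DB2 (a sketch only: the database name LRTEST is hypothetical, and the exact DDL file name under <WAS90_Product>/util/Batch varies by release and database vendor), the Test database could be created along these lines:

db2 create database LRTEST
db2 connect to LRTEST
db2 -tf <WAS90_Product>/util/Batch/<CreateLRSCHEDTables DDL for your database>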

6.3 Create the WAS9.0 Target Cell and Verify Cell Operation

The WAS9.0 target environment needs to be constructed in a manner that allows it to replace the WCG8.0 environment. The cell does not have to be identical, but it does need to be compatible. Note that this is also an opportunity to improve on your WCG8.0 topology based on lessons learned with the current cell.

The initial Compute Grid configuration will use the Test database constructed in the previous section. Using the Test database allows you to test both the cell itself and the process that will be followed when making the transition from the WCG8.0 cell to this new WAS9.0 cell.

After the cell is built, deploy and test the existing WCG8.0 applications.

6.4 Make the Transition from the WCG8.0 Cell to the WAS9.0 Cell

At this point, it is possible to make the transition to the WAS9.0 cell. The following subsections describe this process.

6.4.1 Point the WAS9.0 Cell to the Active Database

The Test database is no longer needed. It was used to verify the WAS9.0 cell operation in preparation for the transition from the WCG8.0 cell to the WAS9.0 cell.

At this time, we need to update the WAS9.0 cell’s data sources so they point to the Active database. Before doing this, stop all WAS9.0 schedulers and endpoints. Only the Deployment Manager and node agents should be up when this step is performed. Update all data sources to point to the Active database, test the connections via the Administrative Console data source panel, save, and synchronize the nodes. Do not start any WAS9.0 schedulers or endpoints until the WCG8.0 cell has been stopped.
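After updating the data sources, a quick wsadmin sketch like the following can smoke-test the connections; it simply loops over every configured DataSource object and attempts a test connection (run it against the Deployment Manager):

# testDataSources.py - hypothetical helper script
for ds in AdminConfig.list('DataSource').splitlines():
    print ds
    print AdminControl.testConnection(ds)

Resolve any connection failures before updating the IP Sprayer in the next step.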

6.4.2 Update the IP Sprayer

Note that the term “IP Sprayer” refers to any component that delivers batch job traffic to the cell, such as a hardware device, an HTTP server, a proxy server, etc.

At this point, the WCG8.0 cell should be completely stopped. Next, start the WAS9.0 schedulers and endpoints and verify there are no issues in the server logs. Lastly, have your network team update the IP Sprayer so that it now routes all incoming IP traffic for Compute Grid to the WAS9.0 cell.

6.4.3 Verify WAS9.0 Cell Operation

Verify all cell operations to make sure the transition from the WCG8.0 cell to the WAS9.0 cell was successful.


In the event there is a problem, you can reverse the steps in the previous section to transition back to the WCG8.0 cell. In other words, stop the WAS9.0 cell, update the IP Sprayer to point incoming Compute Grid traffic to the WCG8.0 cell, and start the WCG8.0 cell. Once the problem in the WAS9.0 cell has been rectified, repeat the transition back to the WAS9.0 cell.

6.5 Complete the Migration

Once the transition to the WAS9.0 cell is complete and deemed successful, the WCG8.0 cell can be decommissioned and removed from your environment.