Orchestration Suite Modeler and Monitor: User, Installation and Administration Guide
Version 2.5.0 - Enterprise Edition
EMAOSM022/01 – July 2010


Orchestration Suite ver. 2.5.0 Modeler and Monitor: User, Installation and Administration Guide

Date of issue: July 2010
Reference number: EMAOSM022/01
Brief description: First Edition – ver. 2.5.0

Copyright © 2010 Primeur Ltd. All rights reserved.

No part of this publication may be reproduced, transmitted, transcribed, stored in a retrieval system, or translated into any other language in whole or in part, in any form or by any means, whether it be electronic, mechanical, magnetic, optical, manual or otherwise, without prior written consent of Primeur Ltd.

Primeur Ltd may revise this publication from time to time without notice. A new release of this manual contains changes made to the product since the previous version.

The software product that this manual documents is the exclusive property of Primeur Ltd. The use of this software is governed by the license agreement that accompanies the product. The following conditions must be observed in all cases:

The product may be used only on the number of computers for which the client is licensed.

The client may make only one copy of the product, and this only for backup purposes.

The client may not reverse engineer, decompile, or disassemble the product.

The client may not loan, rent or lease the product, or any of the documentation or user manuals related to the product, whether for free or for a fee.

Primeur Ltd warrants that the product will perform substantially in accordance with the accompanying product manual(s). Primeur Ltd disclaims all other warranties either expressed or implied. Primeur Ltd and its suppliers shall not be liable for any damages whatsoever (including damages for loss of business profits, business interruption, loss of business information or other pecuniary loss) arising out of the use of, or inability to use, the product.

SPAZIO, SPAZIO MFT/S, SPAZIO Orchestration Suite, SPAZIO FTFI, SPAZIO Messaging & Queuing, SPAZIO M&Q, SPAZIO File Transport, SPAZIO Data Extract, SPAZIO Legacy Interface, SPAZIO Data Secure, SPAZIO DSSP, SPAZIO DSMQ, SPAZIO Data Compress, SPAZIO JMS and THEMA are trademarks of Primeur Ltd. Other brands and their products are trademarks or registered trademarks of their respective holders and should be noted as such.

Company Headquarters Corso Paganini 3 16125 Genova Italy

Tel: +39 010 27811 Fax: +39 010 8684913 Web: www.primeur.com Mail: [email protected]


Table of Contents


About this manual
  Intended audience
  Required knowledge
  What's new in this release
    Summary of main features
    Security
    Notification
    Job Instantiator
    Cutoff Rules can be Browsed
    Heartbeat
    WebUI->Monitor
    Configuration
    Setup
    Discontinued features

Chapter 1 Overview
  1.1 Governance of a data-moving infrastructure

Chapter 2 Installation and Configuration
  2.1 Planning for Installation
    2.1.1 Checking the received package - Enterprise Edition
    2.1.2 Define the customer adoption level for this specific installation
    2.1.3 Check the product prerequisites
    2.1.4 System and Installation requirements: hardware
  2.2 Installation Process
    2.2.1 Verify the target environment prerequisites
    2.2.2 Installed directory tree
    2.2.3 License
    2.2.4 Orchestration Suite Manager configuration files
    2.2.5 DB
    2.2.6 Correctly setting up the Java Runtime Environment for Orchestration Suite
    2.2.7 Correctly setting up a SPAZIO MFT/S Runtime Environment for Orchestration Suite
    2.2.8 Correctly setting up a WebSphere MQ Runtime Environment for Orchestration Suite
    2.2.9 OS Agent Installation
    2.2.10 Servlet Engine
    2.2.11 Installing on Windows
    2.2.12 Installing on Unix
    2.2.13 How to access the WebUI
    2.2.14 Configure the OSManager Spazio Listener
    2.2.15 Configure OSManager WMQ Listener
    2.2.16 Setting up automation
    2.2.17 Configure Orchestration Suite Manager Services
  2.3 Upgrading from previous versions
    2.3.1 Prerequisites
    2.3.2 Upgrading the DB
    2.3.3 Upgrading the configuration file
  2.4 DB Maintenance services and tools
    2.4.1 DB2 Automatic services for Performance analysis


    2.4.2 DB2 Maintenance
    2.4.3 Oracle Automatic services for Performance analysis
    2.4.4 Oracle Maintenance
  2.5 Installation verification
  2.6 First Steps
  2.7 Orchestration Suite Command Line Interface
  2.8 Uninstalling the product

Chapter 3 User Guide
  3.1 Product features
    3.1.1 Security
    3.1.2 MFT Log Production Technology
    3.1.3 Agent/Manager interaction
    3.1.4 Activity File and Activity Record Lifecycle
    3.1.5 Activity Record Correlation
    3.1.6 Flow Status Evaluation: Middleware
    3.1.7 Flow Monitoring: Middleware
    3.1.8 Flow Instance Runtime Governance
    3.1.9 Composite Flow-oriented Monitoring
    3.1.10 Activity-oriented Monitoring
    3.1.11 Repository-oriented Monitoring
    3.1.12 User defined monitoring filters
    3.1.13 Database and persistent data management
    3.1.14 History Data Browsing
    3.1.15 MFT Topology Definition
    3.1.16 Flow Discovery
    3.1.17 Flow Creation: bottom-up
    3.1.18 Flow SLA Configuration
    3.1.19 Flow Recognition
    3.1.20 Flow Status Evaluation: Business
    3.1.21 Flow SLA Evaluation
    3.1.22 Flow Status Notification: Business
    3.1.23 Flow Monitoring: Business
    3.1.24 Flow Creation: top-down
    3.1.25 Modeler Entities schema extension, Flow Custom Tagging
    3.1.26 LogicalArea-oriented Monitoring
    3.1.27 Flow Creation: Duplication
    3.1.28 Flow Governance and Lifecycle Management
    3.1.29 Job Instantiator
    3.1.30 Flow Governance Notification
    3.1.31 Heartbeat
  3.2 Orchestration Suite Services
    3.2.1 Cleaner
    3.2.2 Reporting
    3.2.3 Synchronization and Out-of-sync Management
    3.2.4 Extension points and integration points
    3.2.5 Infrastructure overall performance and tuning
  3.3 Typical usage scenarios
    3.3.1 Basic Monitoring
    3.3.2 Discovery
    3.3.3 Advanced Monitoring
    3.3.4 Advanced Modeling and Governance
  3.5 Adoption Process
  3.6 How-To
    3.6.1 Monitoring Flow Middleware Status
    3.6.2 Discovering Running Flows
    3.6.3 Creating Flows using a bottom-up approach
    3.6.4 Creating Flows using a top-down approach
    3.6.5 Monitoring Flow Business Status
    3.6.6 Flow Governance


    3.6.7 Working in a security-enabled context

Glossary

Appendix A Orchestration Suite Software Prerequisites
Appendix B Regular expressions for configuring automation
Appendix C DB2 Monitoring in Orchsuite
Appendix D Notification Topic List
Appendix E Step types
Appendix F Report Table


About this manual

This manual is a guide to understanding, installing, configuring and using the Orchestration Suite.

Intended audience

This manual is intended for:

People responsible for installing Orchestration Suite

System administrators and people responsible for maintenance

Architects, analysts and any other users who plan or are in charge of modeling a company's business flow status and conditions using Orchestration Suite, or who wish to exploit its potential to solve complex or specific cases.

Required knowledge

This manual assumes the reader has a reasonable knowledge of the following concepts:

Readers interested in the functional aspect must be familiar with data moving concepts, and in particular SPAZIO MFT/S. However, in-depth knowledge of this product is not necessary.

Technicians interested in installation and configuration aspects must have detailed knowledge of the operating system and middleware platforms which will host this product.

What’s new in this release

Summary of main features

The following paragraphs list the main features of Orchestration Suite version 2.5.0, classified by areas of interest.

Security

Two security providers are now available out-of-the-box for user authentication:

Basic: authentication is performed on application data stored in the Orchestration DB

LDAP


Configuration and management for Security is now performed completely in the WebUI.

In this version security is always active.

For further details refer to section 3.1.1 Security.

Notification

From a technology perspective, a brand new component has been integrated: ActiveMQ, an open-source pub-sub engine, used for notification purposes.

The Notification Object Model has been extended, and now includes:

Topics

Providers

Filters

Formatters

Subscriptions

All the configuration and management for these objects is performed via WebUI.

For further details refer to section

Job Instantiator

Once Flows are defined in the Modeler, some of the data specified for the flow can be used to instantiate job templates (JCL, sh or bat) using specific macros.

The following macros are supported:

${flow.name}

${flow.revision}

${flow.la}

${flow.step.i}

${flow.item.i}

${flow.fileName.i}

${flow.repository.i}

${flow.endpoint.i}

${flow.location.i}

${flow.environment.i}

${flow.job.i}


Users can create job templates using these macros, specifying the job template filename for each step in the flow definition using a specific key. When the flow is finally activated, the instantiated jobs are created in the ${flow.name}/${flow.revision} directory on the OSMgr box.
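For illustration only, a minimal sh job template might look like the sketch below; the transfer command and its options are hypothetical placeholders, while the ${flow.*} macros are the ones listed above:

#!/bin/sh
# Hypothetical job template: each ${flow.*} macro below is replaced
# by the Job Instantiator with values taken from the flow definition.
echo "Flow ${flow.name} rev. ${flow.revision} - step ${flow.step.i}"
# my_transfer_tool is a placeholder for the site-specific command
# that moves ${flow.fileName.i} towards ${flow.endpoint.i}
my_transfer_tool --file "${flow.fileName.i}" --endpoint "${flow.endpoint.i}"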

This feature uses the notification mechanism, and a specific Job Instantiation Provider is provided.

Known limitation:

Only Linear Flows can be used to instantiate jobs in this release

For further details refer to section 3.1.17 Flow Configuration.

Cutoff Rules can be Browsed

In this version it is possible to display the time-based rules for a specific flow.

For further details refer to section 3.1.17 Flow Configuration.

Heartbeat

OSAgent and Spazio Agent heartbeat records can be:

stored in the DB

browsed from the WebUI (Troubleshooting->Activity data)

This use case enables users to browse *all* heartbeat records, optionally filtering by source Agent or a date interval.

Moreover, it enables users to browse just the last heartbeat record received/not received from each agent (either OSAgent or Spazio Agent) within a time interval.

The heartbeat record browsing use case is available to the Administrator and Operator roles.

WebUI->Monitor

In this version you can view the full path of the Logical Areas in WebUI->Monitor->Logical Area.

Configuration

Some configuration settings can now be defined directly in the WebUI:

Event Transport Connectors (Spazio, IBM WMQ)

Activity Listeners


Cleaner

Notification

Setup

OSEngines can be configured to be launched as a service on Windows platforms.

Tomcat can be configured to be launched as a service on Windows platforms.

The Tomcat server.xml file no longer needs to be configured.

For further details refer to section 2.2.11.1 Installing OSEngines and Tomcat as a Windows Service.

It is no longer necessary to specify the product home.

Discontinued features

Derby DB is no longer embedded.


Chapter 1 Overview

This chapter is an introductory guide to the Orchestration Suite product and contents.

1.1 Governance of a data-moving infrastructure

The problem that the Orchestration Suite addresses is the governance of a data-moving infrastructure, in terms of modeling and end-to-end monitoring, for the many thousands of data flows performed daily in a modern enterprise information system.

As illustrated in figure 1, an MFT (Managed File Transfer) infrastructure consists of:

data production, consumption, transfer, transformation, routing, encryption, compression, aggregation, splitting

exchanged between several companies, each characterized by its own role inside business processes as a partner, a supplier, a customer

in a deterministic or non-deterministic way, systematically and automatically through schedulers or via manual non-predictive operations

critical to support business transactions and industrial cycles

involving a heterogeneous set of nodes and platforms

laid out according to an architectural blueprint

that must accomplish a large set of management and monitoring SLAs

Figure 1


The complexity of this scenario, and hence the resulting hidden costs, depends on:

The heterogeneity of the environment: MFT products from different vendors or open source tools are involved, implementing standard or proprietary technologies, and different data moving products and protocols, each of which comes with its own specific issues and tools

The sheer number of file transfers, which can reach several thousand a day, many of which are scheduled but no longer used

Whether these transfers involve external entities (suppliers, customers, partners) or internal counterparts, and how many of these are involved

The help desk office size and costs, and the SLAs that have to be met

The entire set of batch application assets that produce and consume bulk data and are programmed according to strict scheduling requirements.

File transfers deployed in production can be seen as services that help consumers and providers interoperate: enterprise assets, like software, data or any other enterprise asset, whose lifecycle must be managed.

A summary of problems that may arise in a file transfer infrastructure:

How can I discover how many file transfers, for each file transfer protocol, are active in my production cycles and who produces/consumes what and when?

Can I put some of my file transfers out of service, lowering costs (CPU, disk, network); who is responsible for the active ones?

How can I enforce policies and standards in the file transfer infrastructure, in order to restrict file transfer deployment and scheduling in the production stage, and recognize out-of-standard file transfers?

How can I define time-based rules, on a calendar basis, for the completion of my file transfers, especially those exchanged with my partners, incoming or outgoing, lowering the risk of paying penalties and improving the efficiency of my system by guaranteeing that all data arrive at the right time?

How can I monitor the whole file transfer infrastructure from an end-to-end perspective, considering flows with errors and flows outside the time-based rules, track the management activities related to a flow, and monitor coarse-grained operations involving an unpredictable number of transfers at once?

How can I provide restricted access to file monitoring information, in order to let producers/consumers check by themselves the status of their own file transfers, lowering help desk costs and improving SLAs?

How can I keep history data for a long period of time, for compliance reasons?


The Orchestration Suite helps in addressing these kinds of problems, in order to provide for governance of the whole file transfer infrastructure.

Each of the three main Orchestration Suite components can be broken down into sub-components:

OSEngines:

This is a standalone Java Process that coordinates the work of separate threads

Activity Loader: consumes information coming from the MFT infrastructure, in the form of activity files containing XML records, and stores them in the data repository; it can use two listeners as log data input channels:

• Spazio Listener: reads data from Spazio queues

• IBM WMQ Listener: reads data from WMQ queues

Activity Correlation Engine: processes the monitor data stored inside the repository and correlates the various items of information of each flow; it then tries to recognize Flow Instances, associating them with their Flow Definitions stored in the modeler and identified via Recognition Criteria, and notifies the evaluated Flow Instance status to the Notification Engine

Data Flow Correlation Engine: summarizes coarse-grained operation logs and Flow Instances in order to create Composite Flow Instances, evaluates their completeness, execution and cut-off status, and gives final users a comprehensive view of coarse-grained Data Flows, such as sending a group of files using regular expressions or sending a directory and all its content recursively at once

Rule Loader: evaluates the daily time-based SLA plan, based on calendars, generating the daily expectations in terms of cut-offs

Rule Correlation Engine: evaluates time-based SLAs; notifies evaluated Flow Instance SLA status to the Notification Engine

Notification Engine: receives events that have occurred in the Orchestration Suite from all the other components and passes them to the configured Notification Provider.

Flow Profiler Engine: executed only when the Security feature is enabled; creates bindings between flows and users, evaluating the restricted data access to Flow Instances and enabling profiled access to monitoring data in the Monitor Web UI

Cleaner: can be used to clean the internal state of the Orchestration Suite working queues, particularly the Online and History portions of the database; it performs actions like cleaning, archiving and extracting data, and works according to policies. It is also used to fill the report table, and contains policies to clean the data produced on the file system by the WMQ listener


Orchestration Suite Persistent DB:

This is a DB instance where all the data relevant for the product are stored. A logical classification of these data can be applied:

Modeler data: the data created and consulted by the modeler, enabling file transfer infrastructure governance: Flow Definitions, Steps, Locations, Environments, Logical Areas, Items, Files, Users, Roles, Applications

Monitor data: consists of the information collected from the monitored nodes: Activities, Flow Instances, Flow Instance Governance Notes, Composite Flow Instances

History Data: according to data management, cleaning and archiving policies, contains history data useful for long term auditing issues

Time-rule data: the data regarding daily cut-offs and warnings to be evaluated

Flow Discovery data: the data used by the discovery functionality, like RecognitionCriteria

Notification data: in some situations notification requires robustness as a quality of service

Calendar definitions

Report data, implemented through an open and supported schema table

Orchestration Suite Web User Interface:

A single Web User Interface is deployed inside a J2EE container. Although this is only one web module, it can be logically divided into the following major sub-components:

Modeler: offers user access to all the use cases regarding the modeling of the infrastructure, flows and time based rules; it helps in creating Flow Definitions

Monitor: groups all the monitoring functionalities; it helps in providing a broad perspective of what is currently taking place inside your file transfer infrastructure

Flow Discovery: set of functionalities to analyze log data and discover each of the protocols used, all the file transfers that are actually executed, and to extrapolate information to be used inside the modeler; it helps in creating new Flow Definitions on the basis of data already present inside the monitoring repository.

Report: there are two use cases for generating reports from the WebUI

Configuration: allows the following data to be configured

• Security Provider

• Event Transport Connector

• Event Listener

• RDBMS


• Calendar

• Cleaner

• Notification Subscription

• Notification Provider

• Notification Formatter

• Notification Filter

• Report

Administration:

• Cutoff

• Syncpoint

• Auditing

• Statistics

Troubleshooting


Chapter 2 Installation and Configuration

2.1 Planning for Installation

This section contains a suggested sequence of steps to be read carefully in order to gather the Orchestration Suite installation requisites and define the correct installation, configuration and deployment phases.

2.1.1 Checking the received package - Enterprise Edition

You should have received:

The Orchestration Suite CD

An Orchestration Suite license

Optional: a SPAZIO setup CD for the Orchestration Suite Manager node, and its license, when explicitly requested, for installations where the OSManager Spazio Listener is used rather than the OSManager WMQ Listener

The installable image is a multiplatform bundle, meaning the image contains all the platform-independent components (e.g. .jar files) and platform-dependent components (e.g. Windows .exe files or z/OS LOAD modules) required to cope with the supported scenarios.

As a general statement Orchestration Suite can be installed in any new or existing directory and can be run under the authority of any new or existing user.

In the remainder of this manual the Orchestration Suite installation directory will be referred to as <product_home>.

2.1.2 Define the customer adoption level for this specific installation

The Orchestration Suite has several adoption levels, according to the final goal of the deployment:

Basic monitoring – comprehensive and simple deployment solution to monitor data moving activities

Discovery - the ability to discover and identify Flows and protocols

Advanced monitoring – involves discovery and cut-off management

Advanced modeling – complete managed file transfer governance solution.


According to the defined adoption level, you should check whether the received license is adequate or not:

Monitor only license

Modeler and Monitor license

Number of OSAgents supported

The advanced monitoring and advanced modeling scenarios need the Modeler component included in the license.

2.1.3 Check the product prerequisites

The Orchestration Suite product needs a software stack of middleware products in order to run:

A Java virtual machine

A Tomcat servlet engine

A relational DB management system

One of:

Primeur SPAZIO MFT/S, and its Java access layer

IBM WMQ

2.1.4 System and Installation requirements: hardware

500MB free disk space for the Manager installation

2.5 GB free disk space for the DB installation

RAM: 1 GB

Processor: 1 GHz

2.2 Installation Process

The installation procedure for Orchestration Suite v. 2.5.0 can be of two types:

Installation from scratch

Upgrade from an existing version; Orchestration Suite v2.2.1 is the prerequisite. For further details refer to section 2.3 Upgrading from previous versions.

The following are the instructions for installing and configuring the Orchestration Suite.


2.2.1 Verify the target environment prerequisites

1 Select the target node to host the Orchestration Suite; whichever OS you select, check the software and hardware prerequisites for the Orchestration Suite.

2 Copy the CD content to a temporary local directory according to the target environment deployment standards.

2.2.2 Installed directory tree

Once the product has been installed the following directories are created under <product_home>:

bin: this directory contains some product configuration files

bin/runtime: this directory contains product general shell scripts and batch files

config: this directory contains the product configuration files

dat: this directory contains the data and caches managed at runtime by Orchestration Suite

db: this directory contains product scripts for DB installation or upgrade

doc: this directory contains the product documentation and/or pointers to documentation

lib: this directory contains the product jar files

lib-ext: this directory contains 3rd party products runtime jars that come bundled within Orchestration Suite

logs: this directory contains all diagnostic files produced by Orchestration Suite at runtime.

report/reportTemplate: this directory contains the report templates

plugin: this directory contains the Notification Provider SDK

products: this directory contains external products used by Orchestration Suite

samples: this directory contains sample files for notification and the job extractor

testdata: this directory contains the sample activity files

tshoot: this directory is used for troubleshooting

xsl: this directory contains all XSL transformation files used by Orchestration Suite

2.2.3 License

The license must be deployed in the <product_home>/config directory.


2.2.4 Orchestration Suite Manager configuration files

The <product_home>/config directory contains the following files:

config.xml: use this file to configure RDBMS access for OSEngines

dbBrowserConfig.xml: use this file to configure the DriverOutputFactoriesCfg for the dbBbrowser utility

OSEngine.properties: use this file to configure the OSEngines (ActivityLoader, Activity Correlation Engine, Cleaner, Data Flow Correlation Engine, Rule Loader, Rule Correlation Engine, NotificationEngineListener, Flow Profiler)

orchsuite.properties: use this file to configure the context.root for the WebUI

preferences.properties: use this file to customize some Orchestration Suite WebUI values

services.properties: use this file to customize some Orchestration Suite functional behavior

middlewareLoader.properties: this file contains the Modeler default entities that are automatically created in the Modeler DB. These are used also during the discovery phase

log4j.properties: configure this file to change the Orchestration Suite logging and tracing level.
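As a sketch, raising the tracing level in log4j.properties usually means changing the root logger line; the appender and logger names below are illustrative, not the product's actual ones:

log4j.rootLogger=INFO, R
# illustrative: raise a specific package to DEBUG for troubleshooting
log4j.logger.com.primeur.orchsuite=DEBUG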

2.2.5 DB

The customer must already have a DB2 or Oracle installation up and running in order to install Orchestration Suite Manager.

The following sections describe the DB versions supported by Orchestration Suite 2.5.0:


2.2.5.1 DB2

Prerequisites and DB characteristics

Make sure there is 2.5GB of disk space in order to create the Orchestration DB 2.5.0.

If the DB is on a remote machine, there must be a DB2 Client on the machine where the Orchestration engines are installed.

Important Note

If the DB is on a remote machine you must catalog it.

The product provides the following script:

<product_home>/bin/setup/DB-remote/db2/db2_catalog(.bat/.sh)

This script must be run once the existing DB has been successfully created or migrated.
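For reference, cataloging a remote DB2 database by hand typically amounts to commands like the following; the node alias SPOSNODE is hypothetical, the host, port and DB name come from the example installation below, and the db2_catalog script is expected to issue equivalent commands:

db2 catalog tcpip node SPOSNODE remote 192.168.7.133 server 50000
db2 catalog database SPOSDB at node SPOSNODE
db2 terminate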

In a UNIX environment the user under which the Orchestration engines run (for example the user sporch) must belong to the DB administrators group, and the user's .profile must execute db2profile.

Logging

The type of logging used is Circular Logging with the following features:

Log file size 16384

Log Buffer size 1000

Primary log 8

Secondary log 2

List of characteristics of tablespaces and buffer pools allocated

In the Orchsuite DB, six buffer pools are defined with the following characteristics:

IMMEDIATE SIZE 1000

PAGESIZE 4,8,16 K

In the Orchsuite DB, six tablespaces are defined with the following characteristics:

REGULAR/SYSTEM TEMPORARY

PAGESIZE 4, 8,16 K


MANAGED BY SYSTEM

EXTENTSIZE 32

PREFETCHSIZE AUTOMATIC

Installation Steps

In order to create the DB and the DB Schema for the Orchestration Suite DB, and to fill the Orchestration Suite tables, please execute the following steps.

1. Customize the configuration files.

2. Run the setup and customization scripts.

The files involved in this process are:

<product_home>/bin/installationParams(.bat/.sh)

<product_home>/bin/setupCmdLine(.bat/.sh)

<product_home>/config/config.xml

Step 1: Customize the configuration files

Use the configuration example below as a guide to configuring the files properly.

Suppose your DB has the following characteristics:

it is installed in a Windows environment in C:\Programmi\IBM\SQLLIB

the DB version is 9.1

the IP address is 192.168.7.133

the DB port is 50000

the DB name is SPOSDB

the DB username is db2adm

the DB password is db2adm

the instance name is DB2 and it is shared

in <product_home>/bin/installationParams(.bat/.sh) you set

SP_OS_RDBMS_TYPE=DB2

SP_OS_RDBMS_VERSION=9.1

SP_OS_RDBMS_HOME=C:\Programmi\IBM\SQLLIB

SP_OS_RDBMS_INSTANCE=DB2

SP_OS_RDBMS_SCHEMA=DB2ADMIN

SP_OS_RDBMS_DBNAME=SPOSDB

SP_OS_RDBMS_DB2_SHARED_INSTANCE=YES

Page 23: SP Orchestration Suite EE v250 Installation UserG

Installation and Configuration

Orchestration Suite ver. 2.5: Modeler and Monitor: User, Installation and Administration Guide EMAOSM022/01 13

SP_OS_RDBMS_USER=db2adm

SP_OS_RDBMS_PWD=db2adm

SP_OS_RDBMS_HOST_PORT=50000

SP_OS_RDBMS_HOST_ADDRESS=192.168.7.133

SP_OS_RDBMS_TS_ROOT_PATH=C:\DB2\TS\SPOSDB
(the directory where the tablespaces are created)

SP_OS_RDBMS_DB2_LOG_FILE_PATH=C:\DB2\LOG\SPOSDB
(the directory where the DB2 log files are created)

directory where the DB2 log files are created.

Warning:

The tablespace and log directories must already exist.

In <product_home>/bin/setupCmdLine(.bat/.sh) you set

SP_OS_RDBMS_JAVA_DRIVER_URL=jdbc:db2://192.168.7.133:50000/%SP_OS_RDBMS_DBNAME%

SP_OS_RDBMS_JAVA_DRIVER_HOME=%SP_OS_RDBMS_HOME%

SP_OS_RDBMS_JAVA_DRIVER_FILE=db2jcc.jar

SP_OS_RDBMS_JAVA_DRIVER_LIC_FILE=db2jcc_license_cu.jar

SP_OS_RDBMS_JAVA_DRIVER_CLASS=com.ibm.db2.jcc.DB2Driver

in <product_home>/config/config.xml you set

<JdbcDriver>com.ibm.db2.jcc.DB2Driver</JdbcDriver>

<JdbcUrl>jdbc:db2://192.168.7.133:50000/SPOSDB</JdbcUrl>

<User>db2adm</User>

<Password>db2adm</Password>

Step 2: Create the database and all related objects

Run the following script

<product_home>/bin/setup/DB/install/db2/db2-setupDB(.bat/.sh)

This script creates the DB, buffer pools, tablespaces and DB schema.

Check for errors in the following log

<product_home>/logs/db2-setupDB.log
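Optionally (this check is not part of the product scripts) you can verify the result from a DB2 command line by connecting with the credentials configured above and listing the tablespaces:

db2 connect to SPOSDB user db2adm using db2adm
db2 list tablespaces
db2 terminate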

If everything works correctly, execute the following step


Step 3: Populate the Orchestration Suite dictionaries

WARNING:

Execute this step only if you are creating a DB from scratch; do not execute it when migrating.

Run the following script:

<product_home>/bin/setup/osmgr/ossetup(.bat/.sh) install

This script fills the Orchestration Suite dictionaries.

Check for errors in the following log

<product_home>/logs/setup-dictionary.log

If everything works properly, the DB creation procedure is complete.

2.2.5.6 ORACLE

Prerequisites and DB characteristics

Make sure there is 2.5GB of disk space in order to create the Orchestration DB 2.5.0.

If the DB is on a remote machine, there must be an Oracle Client on the machine where the Orchestration engines are installed.

In a UNIX environment the user under which the Orchestration engines run (for example the user sporch) must belong to the DB administrators group.

Logging

N/A

List of tablespace characteristics

In the Orchsuite DB, tablespaces are defined with the following characteristics:

SIZE 20, 100, 400 M

Installation Steps

In order to create the DB and the DB Schema for the Orchestration Suite DB, and to fill the Orchestration Suite tables, please execute the following steps.

1. Customize the configuration files.

2. Run the setup and customization scripts.


The files involved in this process are:

<product_home>/bin/installationParams(.bat/.sh)

<product_home>/bin/setupCmdLine(.bat/.sh)

<product_home>/config/config.xml

Step 1: Customize the configuration files

Suppose your DB has the following characteristics:

it is installed in a Windows environment in C:\oracle\product\10.2.0\db_1\jdbc\lib

the DB version is 10

the IP address is 192.168.7.133

the DB port is 1521

the DB name is SPOSDB

in <product_home>/bin/installationParams(.bat/.sh) you set

SP_OS_RDBMS_TYPE=ORACLE

SP_OS_RDBMS_VERSION=10

SP_OS_RDBMS_HOME=C:\oracle\product\10.2.0\db_1\jdbc\lib

SP_OS_RDBMS_SCHEMA=ITT_OWN

SP_OS_RDBMS_DBNAME=SPOSDB

SP_OS_RDBMS_USER=itt_own

SP_OS_RDBMS_PWD=itt_own

SP_OS_RDBMS_HOST_PORT=1521

SP_OS_RDBMS_HOST_ADDRESS=192.168.7.133

SP_OS_RDBMS_ORACLE_HOME=C:\oracle\product\10.2.0\db_1
(must be the same as ORACLE_HOME)

SP_OS_RDBMS_TS_ROOT_PATH=H:\oracle\ORADATA

Warning:

The directory %SP_OS_RDBMS_TS_ROOT_PATH%\%SP_OS_RDBMS_DBNAME% must exist; it is the directory where the tablespaces are created.

In <product_home>/bin/setupCmdLine(.bat/.sh) you set

SP_OS_RDBMS_JAVA_DRIVER_HOME=%SP_OS_RDBMS_HOME%

SP_OS_RDBMS_JAVA_DRIVER_FILE=ojdbc14.jar

SP_OS_RDBMS_JAVA_DRIVER_CLASS=oracle.jdbc.driver.OracleDriver

In <product_home>/config/config.xml you set


<JdbcDriver>oracle.jdbc.driver.OracleDriver</JdbcDriver>

<JdbcUrl>jdbc:oracle:thin:@192.168.7.133:1521:SPOSDB</JdbcUrl>

<User>itt_own</User>

<Password>itt_own</Password>

Step 2: Create the database and all related objects

For this operation there is no script provided in the package.

The Oracle Orchestration Suite DB must be created by the DB administrator using the Oracle system tools (e.g. dbca).

When creating the instance, use the credentials specified in the SP_OS_RDBMS_USER and SP_OS_RDBMS_PWD variables defined in the file

<product_home>/bin/installationParams(.bat/.sh)

Run the following script

<product_home>/bin/setup/DB/install/oracle/oracle-setupDB (.bat/.sh)

This script creates the user (SP_OS_RDBMS_USER), the buffer pool, tablespaces and DB schema

Check for errors in the following log

<product_home>/logs/oracle-setupDB.log

If everything works correctly, execute Step 3 below.
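Optionally (not part of the product scripts) you can verify connectivity with sqlplus, using a full connect descriptor since the example above uses a SID rather than a service name:

sqlplus itt_own/itt_own@"(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.7.133)(PORT=1521))(CONNECT_DATA=(SID=SPOSDB)))"
SQL> select count(*) from user_tables;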

Step 3: Populate the Orchestration Suite dictionaries

WARNING:

Execute this step only if you are creating a DB from scratch; do not execute it when migrating.

Run the following script:

<product_home>/bin/setup/osmgr/ossetup(.bat/.sh) install

This script fills the Orchestration Suite dictionaries.

Check for errors in the following log:

<product_home>/logs/setup-dictionary.log

If everything works properly, the DB creation procedure is complete.


2.2.6 Correctly setting up the Java Runtime Environment for Orchestration Suite

Some machines may have several JRE images installed. To make sure that Orchestration Suite will use the correct one you must be aware of the Orchestration Suite runtime JRE detection policy.

The Orchestration Suite bundle includes a JRE (1.5.0) for Windows, Linux, Aix and Sun. You can find it in the directory <product_home>/products/java.

For the other operating systems, the appropriate JRE images have to be downloaded from the vendors' sites, installed on the target node and properly configured.

Orchestration Suite will use the JRE referenced by the environment variable SP_OS_JAVA_HOME

The JRE must be configured in the file

<product_home>/bin/installationParams(.bat/.sh).

For example, on Linux you would configure <product_home>/bin/installationParams.sh as follows:

SP_OS_JAVA_HOME=/usr/java/lib
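A quick manual check (not part of the product scripts, and assuming SP_OS_JAVA_HOME points at a JRE root containing a bin directory) is to run the referenced java binary and inspect its version:

$SP_OS_JAVA_HOME/bin/java -version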

2.2.7 Correctly setting up a SPAZIO MFT/S Runtime Environment for Orchestration Suite

In order to enable Agent/Manager interaction, Spazio MFT/S has to be separately installed and configured.

You need to ensure that the SPAZIO environment variable is correctly set and available in the Orchestration Suite environment and that any required SPAZIO MFT/S queue manager ACL settings have been performed.

Please refer to SPAZIO MFT/S for Distributed Platforms Installation and Configuration Guide for more details.

2.2.8 Correctly setting up a WebSphere MQ Runtime Environment for Orchestration Suite

In order for Orchestration Suite to correctly access WebSphere MQ resources you need to ensure that:

WebSphere MQ environment variables pointing to JMS Java classes are correctly set and available in the Orchestration Suite environment.

Please refer to WebSphere MQ Infocenter for more details.
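On UNIX platforms this is commonly done by sourcing the setjmsenv script shipped with WebSphere MQ; the path below assumes a default installation under /opt/mqm:

. /opt/mqm/java/bin/setjmsenv
echo $CLASSPATH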


2.2.9 OS Agent Installation

The OSAgent installable image is bundled with Orchestration Suite CD and takes the form of a compressed tar named as follows:

2.5.0.0-SP-OrchSuiteAgent-multi.tar.gz

You can find it in <product_home>/products/OSAgent

Please refer to Orchestration Suite Agent User, Installation and Configuration Guide for more details.

2.2.10 Servlet Engine

The Tomcat subsystem is the servlet engine used for the Orchestration Suite WebUI component. It is bundled in <product_home>/products/tomcat/tomacat413.

Once you have chosen the DB type (DB2/Oracle) you have to put the JDBC driver file and its license file in the following directory:

<product_home>/products/tomcat/tomacat413/products/common/lib

2.2.11 Installing on Windows

To install on Windows, simply expand the installable image into <product_home>.

2.2.11.1 Installing OSEngines and Tomcat as a Windows Service

When planning to run Orchestration Suite on an unattended Windows server, it might be convenient to configure Orchestration Suite OSEngines and the Tomcat servlet engine to run as a service.

This can be done by having an administrator user run the following command in the <product_home>/bin/runtime directory (in the commands below, OSEngines[tomcat] stands for either OSEngines or tomcat as the first argument):

OSService.bat OSEngines[tomcat] install

For some JRE versions and brands, the command may fail with an error stating that jvm.dll could not be located. If this is the case, a hint can be provided on the command line as an argument via the -jvmdll switch:

OSService.bat OSEngines[tomcat] install -jvmdll "C:\my jre\binext\jvm.dll"

To test the installation of the Orchestration Suite service you can run:

OSService.bat OSEngines[tomcat] ivp


Once the Orchestration Suite Engines and/or Tomcat have been installed as a service you can optionally customize the service name (by default it is named PrimeurOrchSuiteEngine for OSEngines and PrimeurOrchSuiteStartTomcat for Tomcat) and other attributes by using the provided service manager GUI:

OSService.bat OSEngines[tomcat] config

You can also monitor Orchestration Suite service status at runtime by running the service manager widget, a system tray application, using this script:

OSService.bat OSEngines[tomcat] monitor

Finally, the Orchestration Suite service can be cleanly removed from the Windows Services by running:

OSService.bat OSEngines[tomcat] remove
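Once installed, the services can also be started like any other Windows service, for example (using the default service names mentioned above):

net start PrimeurOrchSuiteEngine
net start PrimeurOrchSuiteStartTomcat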

2.2.12 Installing on Unix

To install on Unix simply expand the compressed installable image in <product_home>.

2.2.13 How to access the WebUI

After creating the Orchestration Suite DB, as described in section 2.2.5, the user can access the WebUI by performing the following steps:

in the file <product_home>/config/orchsuite.properties configure the context.root and images.root properties, inserting the IP address of the machine where the Tomcat servlet engine is installed

start up the servlet engine using the script <product_home>/bin/runtime/startTomcat(.bat/.sh), or start the service (PrimeurOrchSuiteStartTomcat) if Tomcat is installed as a service

from the browser, access the URL configured in context.root in the file orchsuite.properties (see the sketch below)
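A hypothetical orchsuite.properties fragment, assuming Tomcat listens on its default port 8080 and the web module is deployed under the context /orchsuite (both assumptions, not defaults stated in this guide):

context.root=http://192.168.7.133:8080/orchsuite
images.root=http://192.168.7.133:8080/orchsuite/images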

The home page displays a login window: type administrator as the User ID without any password.

In this version security is always active.


Figure 2.1: Login home page

2.2.14 Configure the OSManager Spazio Listener

Once you have selected the Spazio Listener to collect activity records from the OSAgent or the Spazio Agent, it must be configured so that the resources the Listener needs are created.

Suppose you installed Spazio with the following data:

SPAZIO home directory is h:\spazio

Node name is ORCHSUITE

Orchestration Suite Queue Manager name is QMGRSPOSR

you have to configure the file <product_home>/bin/installationParams(.bat/.sh) as follows:

SP_OS_SPAZIO_HOME=h:\spazio

SP_OS_SPAZIO_NODE_NAME=ORCHSUITE

SP_OS_SPAZIO_QMGR_NAME=QMGRSPOSR

You then have to access the WebUI and configure the following sections:

Configuration-->EventListener-->Spazio Listener

Configuration-->Event Transport Connector-->Spazio Connection


The following are two screenshots showing Configuration of the Spazio Listener:

Figure 2.2: Spazio Connection Instance Detail

Figure 2.3: Spazio Listener Instance Detail

Five queues must be created for the Orchestration Suite Manager. Their default names and usage are:

SP.OS.INPUT.QUEUE is the input queue, where the Agents must put the activity files


SP.OS.DLQ is the dead letter queue, where the ActivityLoader puts the discarded activity files.

SP.OS.TRACEQ is the trace queue used for troubleshooting; it will contain all the files that enter SP.OS.INPUT.QUEUE.

SP.OS.DUPLICATEQ is the queue that will contain all the files discarded by the ActivityLoader because they are duplicates, in other words already present in the DB.

SP.OS.OUTPUT.SERVICE.QUEUE is a service queue used in order to communicate with the OSAgents, in particular its aim is to disable the OSAgents.

These queues can be created using the following script:

<product_home>/bin/setup/spazio/createSpazioQueues(.bat/.sh)

The Spazio Listener is the default Listener configured out-of-the-box; no action is needed to select it. If the default has been changed, to make the Spazio Listener active again set its active value to "true" in the WebUI Configuration-->EventListener-->Spazio Listener section.

Warning: the Spazio Listener stops and does not consume files that arrive with a user class different from SYSP; all files arriving through Spazio must have the default SYSP class associated in order to be read correctly by the Spazio Listener. Check that all Spazio remote queues defined on the monitored nodes and pointing to the OSManager input queue have this user class associated.

Warning: the Spazio MFT/S Listener and the WMQ Listener cannot be used simultaneously in this release; only one at a time can be active and load activity files into the RDBMS.

2.2.15 Configure OSManager WMQ Listener

Once you have selected the WMQ Listener to collect activity records from the OSAgent, it must be configured so that the resources the Listener needs are created.

You have to configure the file <product_home>/bin/installationParams(.bat/.sh) as follows:

SP_OS_WMQ_QMGR_NAME=QMGRSPOS

SP_OS_WMQ_INPUT_QNAME=SP.OS.INPUT.QUEUE

The WMQ Listener receives the log files in a WMQ queue, similarly to the Spazio Listener, but it produces its discard, trace and duplicate information on the file system. The directories are:


<product_home>/dat/engines/DLQ

<product_home>/dat/engines/TRACEQ

<product_home>/dat/engines/DUPLICATEDQ

They can be configured in the <product_home>/bin/installationParams(.bat/.sh) file.

You then have to access the WebUI and configure the following sections:

Configuration-->EventListener-->WMQ Listener

Configuration-->Event Transport Connector-->WMQ Connection

The following are two screenshots showing Configuration of the WMQ Listener:

Figure 2.4: WMQ Connection Instance Detail


Figure 2.5: WMQ Listener Instance Detail

IBM WMQ: scripts are available in <product_home>/bin/setup/wmq to create the WMQ queues used by the WMQ Listener.

Warning: the WMQ Listener is NOT the default Listener configured out-of-the-box; to select it, in the WebUI Configuration-->EventListener section set the active value to false for the Spazio Listener and to true for the WMQ Listener.

2.2.16 Setting up automation

Orchestration Suite 2.5.0 includes a new engine that allows certain activities to be automated.

The activities that can be automated are:

DB Maintenance

DB Stats: collection of data for analyzing DB performance

Report generation

Cleaning of Audit tables

Cleaning of DB performance data table

Scheduled activities must be set at start-up of OSEngines; they can be modified from the WebUI.

WARNING

To make the changes effective it is necessary to restart OSEngines by running

<product_home>/bin/runtime/stopOSEngines(.bat/.sh)
<product_home>/bin/runtime/startOSEngines(.bat/.sh)


For example, if you want to schedule the generation of "Flow_Summary" reports every day at 10:00, you need to change the value of "schedInfo" in the page for managing reports, as shown in the following screenshot.

Figure 2.6: Object Execution Instance Detail - Flow Summary

To set the scheduling rules (in schedInfo) please refer to Appendix B (Regular expressions for configuring automation).

The following are a few examples of scheduling expressions:

0 0 23 * * ? every day at 23:00

0 0 10 * * ? every day at 10:00 (the "Flow_Summary" example above)

0 0 10/1 * * ? every hour from 10:00 onwards, every day

0 0 23 ? * SUN every Sunday at 23:00

0 0/5 * * * ? every 5 minutes, starting from minute 0

2.2.17 Configure Orchestration Suite Manager Services

The Orchestration Suite Manager Services are:

1 User-oriented Orchestration Suite Manager Services, which can be configured:

Notification: configured by default to notify events in a log file; different providers, such as SNMP or SMTP, can be configured using the usual instructions.


Reporting: the <product_home>/bin/runtime/reportGenerator(.bat/.sh) command can be used to generate the reports automatically and periodically. Warning: report templates must be uploaded before they can be used; upload them with the <product_home>/bin/runtime/loadReportTemplate(.bat/.sh) command. Warning: in order to generate the reports you must place the JDBC drivers and the DB license file in the directory <product_home>/products/birt-runtime-2_2_2/ReportEngine/plugins/org.eclipse.birt.report.data.oda.jdbc_2.2.2.r22x_v20071206/drivers. The list of available reports can be accessed through <product_home>/bin/runtime/sposcsh(.bat/.sh), using help() and helpReportManagement(). For further details see section 3.2.2 Reporting.

2 Standard Orchestration Suite Manager Services, which are already configured:

Cleaner: this service is already configured, according to default policies, to move data to the history DB, clean data from the history DB and move data daily to the report table. The default values and policies, and how to modify the default configuration, are described in section 3.2.1 Cleaner.

Security is enabled

Spazio is the default Listener configured.

2.3 Upgrading from previous versions

When upgrading a previously installed OSManager version, the following standard rules and procedures apply:

Make sure that tests were carried out in the pre-production installation for functional compliance and/or performance comparison before going live

As a best practice, let all components complete their work and then stop them before proceeding with the upgrade

Stop the transport middleware (Spazio MFT/S or WMQ), in order to avoid new activity files arriving at the OSManager node

Let the OSEngines complete their summarizing work, then stop all OSManager components (OSEngines)

Stop the Tomcat servlet engine

Back up your current installation, configuration files and persistent data

Copy the CD content to a new installation directory.


2.3.1 Prerequisites

Orchestration Suite 2.2.1 with fix 2.2.1.IF1 applied is the prerequisite for upgrading to version 2.5.0.

2.3.2 Upgrading the DB

The DB upgrade process consists of the following phases:

Create the new version 2.5.0 DB schema. Please refer to the DB installation section (2.2.5) for details.

Back up the current DB

Migrate the DB data from the previous DB version to the new one

Upgrade the DB data dictionaries to version 2.5.0.

These steps are DB specific. The upgrade process for DB2 and Oracle is described below.

You need to set aside adequate disk space in order to:

Create a DB backup

export the content of the DB in use

load the new DB data

2.3.2.1 Upgrade the DB2 version

Create the new 2.5 DB as described in the Installation Steps of section 2.2.5.1 DB2.

WARNING

All steps must be executed except the one that populates the dictionaries; in other words, you must not run:

<product_home>/bin/setup/osmgr/ossetup "install"

2.3.2.2 Migration Steps

After configuring the variables for the DB creation in the

<product_home>/bin/installationParams(.bat/.sh)

<product_home>/bin/setupCmdLine(.bat/.sh)

<product_home>/config/config.xml

files, as described in the Installation Steps of section 2.2.5.1,

you have to configure the variables used for the previous DB version as in the following example.


Suppose that your previous DB version has these characteristics:

OS221 is the name of the 2.2.1 DB

DB2ADM is the schema of the 2.2.1 DB

In <product_home>/bin/installationParams(.bat/.sh) you have to set:

SP_OS_RDBMS_DBNAME_TO_BE_MIGRATED=OS221

SP_OS_RDBMS_SCHEMA_TO_BE_MIGRATED=DB2ADM

The migration procedure for DB2 consists of two phases:

Export phase

During this phase the previous DB data will be exported.

Import phase

During this phase the DB data previously exported will be imported into the new 2.5.0 DB.

Export phase

Run the following script

<product_home>/bin/setup/DB/migrate/db2/db2-upgrade-to25.ExportData(.bat/.sh)

This script performs the following:

Backup of the 2.2.1 DB; a backup file is created in the SP_OS_DB2_BACKUP_DIR directory

Runstats procedure

Export of the 2.2.1 DB data

Check these directories in order to understand the result of the export:

SP_OS_RDBMS_DB2_EXPORT_TABLE_DIR

SP_OS_RDBMS_DB2_EXPORT_ROWS_DIR

SP_OS_RDBMS_DB2_EXPORT_QUADRA_DIR

SP_OS_RDBMS_DB2_ERROR_DIR

If everything works correctly, execute the import phase

Import phase

Run the following script

<product_home>/bin/setup/DB/migrate/db2/db2-upgrade-to25.ImportData(.bat/.sh)

Check these directories in order to understand the result of the migration

SP_OS_RDBMS_DB2_EXPORT_TABLE_DIR


SP_OS_RDBMS_DB2_EXPORT_ROWS_DIR

SP_OS_RDBMS_DB2_EXPORT_QUADRA_DIR

SP_OS_RDBMS_DB2_ERROR_DIR

If everything works correctly, execute the dictionary upgrade step.

Run the following script:

<product_home>/bin/setup/osmgr/ossetup(.bat/.sh) upgrade

Check for errors in the following log

<product_home>/logs/setup-dictionary.log

If everything works correctly, the DB upgrade procedure is complete.

2.3.2.3 Upgrade the Oracle version

Create the new 2.5 DB as described in the Installation Steps of section 2.2.5.6 ORACLE.

WARNING

All steps must be executed except the one that populates the dictionaries; in other words, you must NOT run <product_home>/bin/setup/osmgr/ossetup "install".

2.3.2.4 Migration Steps

Oracle DB migration consists of creating a DB link to the 221 DB and migrating the data contained in its tables to the new DB (version 2.5.0) tables.

The steps are the following:

In <product_home>/bin/installationParams(.bat/.sh)

you have to configure the variables used for the previous DB as in this example.

Suppose that your previous DB version has these data:

OSDB221: the name of the 221 DB

in <product_home>/bin/installationParams(.bat/.sh) you have to set

SP_OS_RDBMS_DBNAME_TO_BE_MIGRATED=OSDB221

Run the following script

<product_home>/bin/setup/DB/migrate/oracle/oracle-upgrade-to25.bat


Check for errors in the following file:

<product_home>/logs/oracle-migrate_2.5.log

If everything works correctly, execute the next step.

Run the following script

<product_home>/bin/setup/osmgr/ossetup(.bat/.sh) upgrade

Check for errors in the following log:

<product_home>/logs/setup-dictionary.log

If everything works correctly, the DB upgrade phase has completed successfully.
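Put together, the Oracle sequence might look like the following sketch; since the migration script is documented only as a .bat file, Windows syntax is shown (illustrative only; the log checks must be performed manually):

<product_home>\bin\setup\DB\migrate\oracle\oracle-upgrade-to25.bat
REM check <product_home>\logs\oracle-migrate_2.5.log for errors before continuing
<product_home>\bin\setup\osmgr\ossetup.bat upgrade
REM check <product_home>\logs\setup-dictionary.log for errors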

2.3.3 Upgrading the configuration file

The Orchestration Suite 2.5.0 <product_home>/config directory contains fewer configuration files than the previous version.

Please refer to the Orchestration Suite Manager configuration files section for more details.

The user must back up the <product_home>/config directory of the previous version and deploy the new one.
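For example, on a Unix system the backup might be taken as follows (a minimal sketch; the destination directory name is an arbitrary choice):

cp -r <product_home>/config <product_home>/config.bak-221
# then deploy the new 2.5.0 config directory shipped with the product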

The properties that were configured in the files no longer contained in <product_home>/config can now be configured using the WebUI, in the Configuration section, as follows:

calendarConfig.xml WebUI->Configuration->Calendar

Figure 2.7: Manage Calendar

cleanerConfig.xml WebUI->Configuration->Cleaner


You need to access the WebUI in order to configure the policies of the Cleaner.

Suppose that policy1, relating to complete flow instances, was previously configured in cleanerConfig.xml with the following data:

<Name>policy1</Name>

<DaysOnLine>2</DaysOnLine>

<DaysHistoric>20</DaysHistoric>

<FinalOperation>E</FinalOperation>

In the current version you need to access the WebUI and configure the details in Configuration->Cleaner->Flow Complete, as shown in the following screen.

Figure 2.8: Cleaner- Object Flow Complete Instance Detail

The following table describes which Instance Names must be configured in relation to the policies contained in the file cleanerConfig.xml of the previous version.


The properties relating to notification (which in the previous version were configured in the files configNotificationEngine.xml, configSMTPNotificationProvider.xml, configSNMPNotificationProvider.xml) must now be configured by accessing the following sections of the WebUI:

Configuration->Notification Subscription

Configuration->Notification Provider

Suppose that a provider of type mail was configured in the previous version in the file configSMTPNotificationProvider.xml with the following data:

<Sender>[email protected]</Sender>

<Receiver>[email protected]</Receiver>

<SMTPServer>mail.primeur.com</SMTPServer>

on which notifications of status change of the Flow Instances are to be published, as specified in the file configNotificationEngine.xml:

<TypeName>FlowMWareStatus</TypeName>

<Status>ACTIVE</Status>

In the current version you need to:

create a mail Provider by accessing the WebUI

create a subscription for each topic relating to the status of the flow instances.

The following screen describes the creation of a Mail provider

Figure 2.9: Create Provider Mail Object Instance


As you can see from the screen, in this version it is possible to send the Notification to multiple recipients (in the example, John Smith and Jane Doe).

The following screen describes the creation of the subscription to the topic Topic_MonitorFlowComplete on the Mail type provider

By subscribing to this topic, whenever the flow instance changes to complete status, an e-mail is sent to the recipients defined in the Mail provider email_provider.

Figure 2.10: Create Notification Subscription Object Instance

Important Note

The notification data model has been changed in this version, as well as the way in which notifications are configured. The configuration files no longer exist and everything is configured and activated using the WebUI. Please refer to section 3.1.0 Flow Governance Notification for further details.

Warning:

The <product_home>/config/config.xml file no longer contains the section related to the Spazio/WMQ Listener. To set the proper values please use the WebUI as follows:

Configuration->Event Listener

Configuration->Event Transport Connector

Please refer to 2.2.13 Configure OSManager Spazio Listener and to 2.2.14 Configure OSManager WMQ Listener for more details.


2.4 DB Maintenance services and tools

Orchestration Suite v.2.5 supports the inclusion of Monitoring and Database Maintenance functions, with the aim of allowing sufficiently accurate checking of the status and performance of the Database.

2.4.1 DB2 Automatic services for Performance analysis

To keep track of database progress and performance, the product provides a utility for collecting data of interest.

This utility is divided into two phases:

DB2 Scheduled Statistics Start

activates the DB2 monitor switches; it is active and configured to run every Sunday at 22:45

DB2 Scheduled Statistics Stop

gathers the data of interest and deactivates the switches on DB2; it is configured to run every Sunday at 23:15

If necessary, the user can modify the scheduling by accessing the WebUI in Configuration->RDBMS->DB Statistic Start and changing the value of schedInfo.

Important Note

If the scheduling data are modified, it is recommended to maintain a certain amount of separation between the scheduled tasks. In particular, it is recommended to start up DB Maintenance before running the DB Statistics data collection.

Figure 2.11: Object DB2 Scheduled Statistics Start


Figure 2.12: Object DB2 Scheduled Statistics Stop

For exporting the data collected, the product provides the following script:

<product_home>/bin/admin/DB/db2/db2_export_mon_tables(.bat/.sh)

After running the script, the extracted data will be available in <product_home>/tshoot.
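A minimal invocation on Unix might look like this (illustrative):

<product_home>/bin/admin/DB/db2/db2_export_mon_tables.sh
ls <product_home>/tshoot    # the exported monitoring data is placed here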

For further details please refer to Appendix C (DB2 Monitoring in OrchSuite).

2.4.2 DB2 Maintenance

The product provides automatic mechanisms to reorganize indices periodically, in order to avoid negative impacts on database performance caused by fragmentation of indices and tables.

There are two types of maintenance:

ordinary

1 a reorg is performed on the tables defined with the NOAPPEND clause and on their indices

2 a reorg is performed on the indices only of tables defined with the APPEND clause

3 runstats is performed for all the Orchestration Suite tables and indices.

extraordinary

a reorg followed by runstats is performed for tables defined with the APPEND clause.

Ordinary maintenance is scheduled by default every Sunday at 22:30.

To modify the scheduling you need to access the WebUI in

Configuration->RDBMS->DB Maintenance

and alter the field "schedInfo".

The user can deactivate this service by setting the field "active" to "false".


The following screen is used for scheduling the index reorganization tool:

Figure 2.13: Object DB2 Maintenance Instance Detail

By default extraordinary maintenance is scheduled monthly at 22:15.

To modify the scheduling you need to access the WebUI in

Configuration->RDBMS->Temporary Table Shrink

and modify the field "schedInfo".

The user can deactivate this service by setting the field "active" to "false".

The following screen is used for scheduling extraordinary maintenance.

Figure 2.14: Object DB2 Temporary Table Shrink Instance Detail

2.4.3 Oracle Automatic services for Performance analysis

For collecting statistical data the user is requested to use the tools provided by the Oracle DBMS.

The product provides an interactive script for extracting the data collected.


The following script is contained in <product_home>/bin/admin/DB/oracle:

oracle_create_report_Awr(.bat/.sh)
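The script is interactive, so expect prompts; a minimal invocation on Unix might simply be the following (a sketch, with any prompt details left to the script itself):

<product_home>/bin/admin/DB/oracle/oracle_create_report_Awr.sh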

2.4.4 Oracle Maintenance

In order to avoid excessive fragmentation of the indices, with very negative impacts on performance, the product provides tools for periodic reorganization and defragmentation of the indices.

Maintenance activity is scheduled by default every Sunday at 23:30.

To modify the scheduling you need to access the WebUI in

Configuration->RDBMS->DB Maintenance

and modify the field "schedInfo".

The user can deactivate this service by setting the field "active" to "false".

The following screen is used for scheduling the index reorganization tool:

Figure 2.15: Object Oracle Maintenance Instance Detail


2.5 Installation verification

Upon completion of the previous installation steps you are now able to run a script in order to verify the installation.

version(.bat/.sh)

If the installation was successful, this command will print the output shown below.

Figure 2.16: Output of the version script

2.6 First Steps

Once you have completed the installation/upgrade process you can perform the following actions:

1 start all subsystems as detailed in section 3.1

2 load Report Templates inside the RDBMS using

<product_home>/bin/runtime/loadReportTemplate(.bat/.sh)

3 start OSEngines using

<product_home>/bin/runtime/startOSEngines(.bat/.sh)

When using Spazio MFT/S between OSAgent and OSManager, execute:

<product_home>/test/testOSEngines.Loader(.bat/.sh)

When using IBM WMQ between OSAgent and OSManager, execute:

<product_home>/test/testOSEngines.Loader.WMQ
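Put together, a first run on a Unix system using Spazio transport might look like the following sketch (illustrative; use the WMQ variant of the test loader when IBM WMQ is used between OSAgent and OSManager):

<product_home>/bin/runtime/loadReportTemplate.sh
<product_home>/bin/runtime/startOSEngines.sh
<product_home>/test/testOSEngines.Loader.sh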


Now all the sample files bundled in the directory <product_home>/testdata/activityFiles are uploaded to the Spazio (or WMQ) queue and dequeued by the OSEngines; all the records they contain are inserted into the DB and summarized, giving rise to Flow Instances and Composite Flow Instances.

You can now check the inserted data and the evaluated information by navigating the DB through the WebUI; for example, specify

http://127.0.0.1:8088/OrchestrationSuite

in your browser.

A login window opens and you can gain access by entering the UserId administrator without any password.

Follow the Monitor.Browse.OnLine link: a window opens, giving you access to the Flow Monitoring view in its so-called middleware light perspective; you can change to the more complete and rich middleware full perspective simply by clicking on the top right side of the WebUI.

Use the WebUI to navigate Flow status, protocols, and other properties; you can click on some properties in order to access detailed forms and information.

Several other forms are available where you can monitor Activities, Composite Flow Instances, and a summarized view oriented to Repository status.

Once you have finished navigating the WebUI OnLine portion, you can proceed to generate reports: open a shell and change directory to <product_home>/bin/runtime/ in order to check the deployed list of reports; use the <product_home>/bin/runtime/sposcsh(.bat/.sh) command line utility and execute helpReportManagement() to obtain a list of available methods and samples.

Execute the cleaner command in order to load the report tables.

Warning: this command will in fact manage all DB data according to the policies defined in the WebUI Configuration Cleaner section, moving old online data to the history portion, or deleting it directly from the online or history portion. At the end of this execution, if you leave the default parameters, you will have an empty OnLine portion and a full History portion, which you can navigate through the WebUI using the WebUI.Browse.History link.

Execute the following commands from <product_home>/bin/runtime/:

reportGenerator(.bat/.sh) "dashboard" "last=100"
reportGenerator(.bat/.sh) "file_summary" "last=100"
reportGenerator(.bat/.sh) "protocol_summary" "last=100"
reportGenerator(.bat/.sh) "repository_summary" "last=100"
reportGenerator(.bat/.sh) "top_countfile" "last=100"
reportGenerator(.bat/.sh) "top_duration" "last=100"
reportGenerator(.bat/.sh) "top_error" "last=100"
reportGenerator(.bat/.sh) "top_filesize" "last=100"
reportGenerator(.bat/.sh) "top_filetransfer" "last=100"
reportGenerator(.bat/.sh) "top_totsize" "last=100"
reportGenerator(.bat/.sh) "userMw_summary" "last=100"

In the directory <product_home>/export/report/pdf you will find the produced reports in PDF format.
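On Unix, the same commands can also be issued in a loop (an equivalent sketch, not a documented utility):

cd <product_home>/bin/runtime
for r in dashboard file_summary protocol_summary repository_summary \
         top_countfile top_duration top_error top_filesize \
         top_filetransfer top_totsize userMw_summary; do
  ./reportGenerator.sh "$r" "last=100"
done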


In some reports you will find empty tables; these tables in fact depend on business data that can be inserted into the Modeler, and that you have not yet created by following this procedure.

Navigate the WebUI History portion, populated by the Cleaner execution, through the WebUI.Monitor.Browse.History link.

Once you have finished navigating the product, use the Cleaner utility in order to completely clean all DB portions (OnLine, History and Report) and prepare the product to receive real data coming from the monitored infrastructure.

2.7 Orchestration Suite Command Line Interface

The command line interfaces are located in the directory <product_home>/bin/runtime

startOSEngines(.bat/.sh) starts up the engines

stopOSEngines(.bat/.sh) stops the engines

statusOSEngines(.bat/.sh) generates output on the status of the OSEngines

dumpStatus(.bat/.sh) used for troubleshooting in order to collect trace files

OSService.bat installs OSEngines and Tomcat as Windows services

Cleaner(.bat/.sh) activates the utility for deleting/cleaning tables

dbBrowser(.bat/.sh) starts up the utility dbbrowser

loadReportTemplate(.bat/.sh) loads the templates for reports

reportGenerator(.bat/.sh) generates the reports

resetTimeRuleEvaluation(.bat/.sh) synchronizes instances with the cutoff rules

startTomcat(.bat/.sh) starts up the servlet engine

stopTomcat.sh stops the servlet engine

sposcsh(.bat/.sh) CLI based utility


spazioClearDeadLetterQueue(.bat/.sh) deletes the content of the queue SP.OS.DLQ

spazioClearDuplicateQueue(.bat/.sh) deletes the content of the queue SP.OS.DUPLICATEQ

spazioClearInputQueue(.bat/.sh) deletes the content of the queue SP.OS.INPUT.QUEUE

spazioClearTraceQueue(.bat/.sh) deletes the content of the queue SP.OS.TRACEQ

spazioQList(.bat/.sh) lists the Spazio queues

spazioReadActivityFilesFromDeadLetterQueue(.bat/.sh) acquires the files contained in the queue SP.OS.DLQ for troubleshooting

spazioReadActivityFilesFromDuplicateQueue(.bat/.sh) acquires the files contained in the queue SP.OS.DUPLICATEQ for troubleshooting

spazioReadActivityFilesFromInputQueue(.bat/.sh) acquires the files contained in the queue SP.OS.INPUT.QUEUE for troubleshooting

spazioReadActivityFilesFromTraceQueue(.bat/.sh) acquires the files contained in the queue SP.OS.TRACEQ for troubleshooting

startSpazio.bat starts up SPAZIO

stopspazio(.bat/.sh) stops SPAZIO

version(.bat/.sh) generates the output with the version of Orchestration Suite
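For example, a quick troubleshooting session on Unix might combine a few of these utilities (an illustrative sketch, not a documented procedure):

cd <product_home>/bin/runtime
./statusOSEngines.sh                               # verify that the engines are running
./spazioQList.sh                                   # list the Spazio queues
./spazioReadActivityFilesFromDeadLetterQueue.sh    # inspect rejected Activity Files
./dumpStatus.sh                                    # collect trace files for support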


2.8 Uninstalling the product

In the case of a standard installation, uninstalling the product involves uninstalling the SPAZIO node according to the procedures in the system administration documentation of this product. Afterwards, all that is required is to delete the Orchestration Suite installation directory.

Where installation is on a proprietary DB server (Oracle, DB2) it is necessary to:

Drop the DB following the specific procedures in the user manual

Uninstall the SPAZIO node according to the procedures in the user manual

Delete the Orchestration Suite installation directory.


Chapter 3 User Guide

This chapter is an introductory guide to the Orchestration Suite product and its contents.

3.1 Product features

Orchestration Suite offers a broad set of features aimed at managing data moving in a complex and heterogeneous infrastructure. Each one is described in the following sections.

3.1.1 Security

Two security providers are now available out-of-the-box for user authentication:

Basic Security Provider: authentication is performed on application data stored in the Orchestration DB. This is the default provider.

LDAP Security Provider: authentication is based on LDAP.

Authorization is performed based on the data censused in the Orchestration DB. The census of users is carried out using the Create User use case (WebUI->Modeler->Create User); it is necessary to census the user and password for authentication.

If using the LDAP Security Provider it is not necessary to specify the password of the users.

Warning: To census users in Orchsuite the engines must be active (by running <product_home>/bin/runtime/startOSEngines(.bat/.sh)).

To configure the type of Provider selected you need to access the WebUI in Configuration-> Security Provider.

The following is the screen for the default configuration of the Basic Security Provider.

Figure 3.1: Basic Security Provider Instance Detail


For LDAP Security Provider there are two use cases:

1 Authentication with anonymous connection (Direct) to the LDAP server

At login Orchestration Suite connects to the configured LDAP server without providing the connection credentials.

Sends the request for authentication to the LDAP server using the user and password specified in the login screen.

The LDAP server validates the credentials received.

If you wish to configure an LDAP Security Provider of the Direct type you need to access the WebUI as described in the screen below:

Figure 3.2: Ldap Security Provider Instance Detail Direct Authentication

2 Authentication with non-anonymous connection (Trusted) to the LDAP server

At login Orchestration Suite connects to the configured LDAP server providing the user and the password specified at configuration of the Security Provider.

Sends the request for authentication to the LDAP server using the user and password specified in the login screen.

The LDAP server validates the credentials received.

If you wish to configure an LDAP Security Provider of the Trusted type you need to access the WebUI as described in the screen below:

Figure 3.3: Ldap Security Provider Instance Detail Trusted Authentication


Note: LDAP was certified with com.sun.jndi.ldap.LdapCtxFactory

Once authentication has been completed correctly, Orchestration Suite accesses its own DB to retrieve the authorizations, in other words, the role of the user.

Orchestration Suite can be configured to operate using an access control mechanism; if needed, authentication on a user-id and password basis is provided. The Security feature is always enabled and security and profiling services are available: the default user accessing the system is considered a default Administrator user with the Administrator role, and its User Id is administrator.

The main purpose of the Security Authorization feature is to restrict user access in a fine-grained way, so that certain classes of users can have access only to use cases and data for which they are responsible. Association to data is based on the concept of Logical Areas and Senders and Receivers.

Using the Modeler features, an administrator will create user entities and will link users to Logical Areas. Flows are linked to Logical Areas too and users are linked to flows in terms of Sender and/or Receiver.

A system of user roles based on four role levels is available: Administrator, Operator, Reader, and Restricted Reader. Capabilities are shown in the following:

An Administrator can manage all aspects of the Orchestration Suite and have access to all kinds of data. An Administrator can create and delete users, update their properties, define and change passwords. The Administrator password is set to an empty password on install. You have to change it according to your security requirements before deploying your application in a production environment.

An Operator can create Flow Definitions and items in the Modeler; can update any other Modeler data structure; has read access to all kinds of Modeler and Monitor data.

A Reader can only have read access to all kinds of data inside the Modeler and Monitor.

A Restricted Reader can have read access to all kinds of data inside the Modeler module, but has the ability to monitor only those flows for which he is responsible.

A single role is assigned to each user; the following tables provide an overview of roles and use case access rights for the Modeler and Monitor modules.

Modeler                   Administrator   Operator   Reader   Restricted Reader
manage flow               R/M/C           R/M/C      R        R
manage item               R/M/C           R/M/C      R        R
manage application        R/M/C           R/M        R        R
manage logical area type  R/M/C           R/M        R        R
manage logical area       R/M/C           R/M        R        R
manage location           R/M/C           R/M        R        R
manage environment        R/M/C           R/M        R        R
manage deploy group       R/M/C           R/M        R        R
manage company            R/M/C           R/M        R        R
manage user               R/M/C           R/M        R        R

Table 1 – User roles and capabilities

R, this role has the read data capability

M, this role has the update and manage data capability

C, this role has the create capability

The following table shows in further detail the type of access for user roles pertaining to Monitor use cases and data.

Monitor                        Administrator   Operator   Reader   Restricted Reader
search by flow                 FA              FA         FA       RA
search by logical area         FA              FA         FA       N.A.
search by activity             FA              FA         FA       RA
search by queue manager/queue  FA              FA         FA       N.A.

Table 2 – Type of access for user roles

FA, Full Access

RA, Access Restricted only to its own profiled data

N.A., Not Available feature

As summarized in the previous tables, Administrator, Operator and Reader have access to all features and all use cases (Flow, Activities, Logical Area, Repository/Endpoint and so on), while a Restricted Reader can only access some features (Flow, Activities) and, for each of these, only data related to that user. A user is considered to be related to some kind of data (i.e. flow, activity, logical area) if he is the sender or the receiver of a flow, if he is bound to that specific logical area, or if a flow belongs to a logical area associated to that specific user.


Users may have logical areas associated in the Modeler; this binding determines which Flow Instances a "Restricted Reader" user can monitor in the Flow pages:

Flow Instances are bound to users when their state is evaluated by the Activity Engine

For recognized Flow Instances (the ones for which a Flow Definition has been recognized through recognition criteria), users acting as senders, receivers or bound to its logical area are used

For all other Flow Instances, all users can have access to monitoring without restrictions

Once a Flow Instance has been bound, each update to its Flow Definition in terms of senders, receivers and/or logical area/user binding will not affect Flow Instance binding

Should a user be deleted, all Flow Instances bound to that user will be removed as well.

Flow profiling

This feature restricts the type of flow that a Restricted Reader can see inside the Monitor web pages: only Flow Instances which belong to the logged-in Restricted Reader are visible, while flows which are not bound to anyone are visible to all Restricted Readers.

The way in which Flow Instances are linked to the specific user differs and is related to the recognition status of the flow. Consider the following two cases:

Recognized instances

a recognized Flow Instance is linked to a user using sender and receiver information, or through a Logical Area defined inside the flow matching the instance. In order to create a link to a Logical Area, a specific step is available during the Create User and Manage User use cases.

Composite Flow

Security policies apply to Composite Flow Instances; profiling data is propagated from Flow Instances to their Composite Flow, and when browsing Composites in WebUI.CompositeFlow, only correctly profiled Flow Instances can be seen by a restricted reader user.

Restrictions

After successful profiling of an instance, it is not permitted to profile the same instance again. Moreover, please consider that:

if the Orchestration Suite has a monitor-only license, then an instance is considered ready to profile after the correlation phase.


if the Orchestration Suite has a modeler+monitor license then an instance is considered ready to profile after the recognition phase, in order to clearly distinguish between recognized and unrecognized Flow Instances and therefore apply the profiling algorithm consistently.

Note that Flow Instances may be linked to users having a role other than Restricted Reader; this means that a Restricted Reader user role will not be able to see them.

Configuration

In order to disable the flow profiling feature, edit the flow_profiling property inside the <product_home>/config/osengine.properties configuration file. If necessary, edit the profiling time interval as well:

flow_profiling = 0
profiling_time = 60

Usage

Access to the Web UI is controlled by a username and password login form.

The most notable aspect of the web application is the possibility to save the Filters available on the Monitor Browse dialog (this is possible only for on-line flow browsing; it is not possible to have persistent filters for history browsing). In order to save a Filter, use the Save button available in the Preference section after setting the Filter details anywhere inside one of the four Browse areas: Flow, Repository/Endpoint, Activities, Logical Area. The filter will be associated to the current user and loaded automatically when the Browse dialog opens. Only one Filter at a time is active, so the last one saved will override previously saved filters.

Filters are bound to the currently logged-in user and each user can manage their own filters.

3.1.2 MFT Log Production Technology

Every FT/MFT system has its own solutions, technologies and media in order to produce log information on the operations carried out.

Some types include:

Flat file: log information is produced on a flat file, in text or binary format; SDKs are sometimes available for parsing records from proprietary formats; otherwise the use of common APIs to access flat files is sufficient. The log files, of linear or circular type, can have associated management and cleanup policies.

For example, in the case of FTP servers, logs can be produced in the following formats:

Standard: xferlog


Proprietary: IBM AIX FTP server, SPAZIO (incorporates the format of third party vendors such as CA-XCOM, IBM NetView FTP)

Furthermore, the information produced in the log files can be in a standard or proprietary format, structured or completely unstructured (verbose, for example), in which case parsing of phrases written in the specific dialect becomes essential. Some examples of logs produced according to different levels of granularity:

Logs produced in verbose and/or thin format: IBM AIX FTP server, SPAZIO APIs

Logs produced in fat format: FTP w3c, FTP xferlog, NetView FTP

In its OSAgents, Orchestration Suite includes components called Emitters, ready to use or easy to customize through rapid implementation of Log Adapters. They provide log file parsing functions, follow-up where circular wrap-around policies are used, recognition and filtering of the important log information contained in the files, aggregation of thin information into events of the correct grain, and production of corresponding records in the standardized format expected by the Manager.

Exit-based: some FT/MFT systems do not produce persistent log information directly, but offer the possibility of extending the product through ports or exits on which important events are notified.

Orchestration Suite offers the possibility, for these systems, of using product- and technology-specific exit-based Emitters, capable of fitting into the host systems in order to capture important events relating to file transfers; in this case the log records are produced already normalized and standardized in the format expected by the Manager.

The information commonly captured relates to the following types of operations:

Sending a file TO a target node, or to an asynchronous file queue

Receiving a file FROM a target node, or from a file queue

that are executed by the MFT systems.

These operations may be performed:

Manually, by a user

Automatically, through the use of schedulers or directly by applications using available SDKs.

Various types of information are produced by FT/MFT systems in response to the above-mentioned operations; this also depends on the role that the system plays in file transfer operations, for example whether it has an active or passive role.


It is the task of the Orchestration Suite and its components to normalize this information offering the end user a standard and common view of the operations performed.

Consult the manual of the Orchestration Suite Agent for further details.

3.1.3 Agent/Manager interaction

The Agents and the Manager converse through the exchange of files. The files, Activity Files, are produced directly by the Agents, and contain Activity Records in standard XML format that the Manager can understand.

The following considerations apply:

according to the type of Agent used, standard or proprietary file transfer protocols are used in order to send the Activity Files to the central node: FTP or Spazio/PR4 to a SPAZIO Mailbox, IBM WMQ to a WMQ Queue Manager, with the retry policies necessary in order to ensure their guaranteed delivery

different sending policies can be selected, based on time intervals (send every x minutes), to optimize, for example, the latency of the infrastructure, or based on the size of the file (send every n records), or both

agents periodically send heartbeat records, even in the absence of traffic, in order to communicate to the Manager that they are operating correctly; the OSManager logs heartbeat information in the logs/heartbeat file

the agents periodically receive communications from the Manager that determine their behavior; in particular, they receive suicide commands if the total number of Agents operating exceeds the limit set by the Orchestration Suite license.
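For example, to check that the agents are reporting in, the heartbeat log can be inspected on the OSManager host (illustrative; the path relative to the installation directory is an assumption):

tail -n 20 <product_home>/logs/heartbeat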

3.1.4 Activity File and Activity Record Lifecycle

Spazio Listener

Orchestration Suite uses a subsystem based on a reduced version of the components of SPAZIO MFT/S in order to receive and manage the life cycle of the files produced by the various Agents.

Various queues are configured for managing the files:

Input Queue: input queue used to accept the Activity files and heartbeat files sent by the Agents. A heartbeat file contains control information relating to the Agent, and is sent periodically in order to communicate its status especially in the case of low or no traffic.

Command Queue: output queue, used for sending commands to the Agent; command example: switch-off of the Agent because maximum limit of active Agents allowed by the license in use has been reached. As soon as these limits are reached, the in-excess Agents are switched off, in accordance with a "first come first served" policy.


Dead Letter Queue: reject queue, used for storing Activity Files that contain records that fail to pass the consistency and validity checks, or whose insertion failed; used for troubleshooting activities.

Duplicate Queue: reject queue, used for storing system files, created specifically in order to contain exclusively duplicate records; used for troubleshooting activities. The system considers a record as duplicated when it is identical to a record already processed; the keys used in order to recognize the records as duplicates guarantee that a duplicate record scenario is generated only because of NON functional requirements not being respected.

Trace Queue: used for investigating wrong or doubtful behavior; if activated it contains a copy of all the Activity Files sent by the Agents.

Cleaning of data collected in this queue is assured by Spazio mechanisms, based on user class usage: two user classes, SYSP and OSUC, are used; the OSUC user class is created during the OSManager setup phase; the SYSP is the default user class created while installing Spazio MFT/S.

Warning: it is mandatory that all files sent to the SP.OS.INPUT.QUEUE have the SYSP user class defined, in order for the OSManager Spazio Listener to work correctly.

Activity Records that pass the consistency checks are normalized before being stored in a persistent manner in the database, entering into the various cycles of correlation, summarizing, display, notification, cleanup, etc.

WMQ Listener

The WMQ Listener works similarly to the Spazio Listener: it uses a WMQ Queue to collect Activity Files, and several directories on the OSManager installation file system to collect trace data, duplicate data and dead letter data.

Configuration data is specified both in the installationParams script file and in WebUI->Configuration->WMQ Listener.

Setup scripts are provided in the bin/runtime/setup/wmq directory.

Cleaning of data persisted by WMQ Listener on the file system is performed by the OSManager Cleaner engine; a policy exists in the

WebUI->Configuration->Cleaner->WMQ files.

3.1.5 Activity Record Correlation

The normalized Activity Records that enter the system are correlated in order to reproduce a representation of the operations performed by various MFT systems during the execution of a flow.


Various mechanisms are supported for end-to-end correlation of the activities of a flow:

based on correlation key: several MFT products provide keys in their logs, and guarantee that these keys travel together with the files and are logged by every operation acting on that file; in such a case the Correlation algorithm uses these keys in order to correlate all logs produced by operations acting on files, and to give final users a comprehensive end-to-end view of the whole flow, including its execution/runtime status. For instance, Spazio MFT/S does produce this kind of unique key, enabling this end-to-end correlation.

based on heuristics and known patterns: some file transfer systems do not provide in their log information any specific key apart from the usual: Source Node/Destination Node, Source FileName/Destination FileName. In these cases, the correlation algorithm uses the information available in order to correlate the records, using heuristics; for example:

Destination Node and Destination FileName of a put operation

Source Node and Source FileName of a get operation

This algorithm is used for example for the correlation of FTP put and FTP get operations that occur on the same node, on the same file, in order to insert and to retrieve a given file.

3.1.6 Flow Status Evaluation: Middleware

Once the Activity Records are correlated, in order to reconstruct a representation (Flow Instance) of the end-to-end flow, the status of that flow must be evaluated.

A Flow Instance can have various statuses:

Completion status: indicates whether the flow is completed or still in processing, for example still in the movement or acquisition phase. Supported values for this status are:

Running

Complete

Middleware status: indicates whether the flow has an Error or its execution is OK; a flow has an error if one of its activities has an error, otherwise it is in OK status. A flow error can be blocking or not, according to the operation of the MFT system. If the error persists, the flow remains in error status; if the error is resolved, by human intervention or automatically by the system (for example, in some systems, by configuring a specific number of retry attempts in order to deal with transmission errors), the flow will be evaluated with OK status, that is, without any further error.


This status is always evaluated, regardless of whether or not a Flow Definition has been created and recognized for this Flow Instance.

The algorithm used to calculate the completion status of a Flow Instance varies according to whether or not the Flow Instance has an associated Flow Definition:

if no Flow Definition is recognized for the Flow Instance, a middleware completion is calculated; only the Activity Records that make up the Flow Instance are used to evaluate the completion of the flow.

Example of an FTP flow:

A Flow Instance consisting only of an FTP put operation will be considered as completed on completion of the operation.

If the flow consists of FTP put + FTP get operations, the completion of the FTP put is sufficient to consider the flow complete: in fact there is no Flow Definition that describes all the operations that must be carried out before considering the Flow as complete.

if a Flow Definition is recognized for the Flow Instance, a business completion is calculated; the completion algorithm takes account of the Steps defined in the Flow Definition in order to calculate the completion of the flow.

Example of an FTP flow:

If the Flow Definition consists of the FTP put + FTP get operations, completion of the FTP get is necessary in order to consider the flow completed.

3.1.7 Flow Monitoring: Middleware

Monitoring features provided by Orchestration Suite make it possible to understand what is currently taking place in the file transfer infrastructure: checking flow middleware status (in error, OK) and flow completeness, accessing execution steps and details, and viewing all data coming directly from the MFT infrastructure logs.

Two levels of monitoring are available on flows:

Middleware Monitoring: explained in this section; Flows are monitored from a middleware perspective, that is, using information that comes directly from the MFT infrastructure.

Business Monitoring: explained later in this chapter; Flows are monitored from a business perspective, that is, using information that comes from the recognized Flow Definition, when available.

The main features of Middleware Monitoring are:

monitoring Flow Instances in error or correct middleware status

monitoring Flow Instances in running or complete middleware status


monitoring single Flow Instance activities, depending on the operation type performed in the Flow Instance.

Middleware Monitoring views are delivered through a Web User Interface that provides operators with a self-refreshing control panel which displays status data and Flow execution data classified in Source and Destination sections.

At each refresh, a what's new indication is shown for all records whose status has changed since a previous refresh.

Middleware filters are available, jointly with common filters, in order to let users monitor only a specific subset of Flows, according to their Common, Source or Destination properties, such as FileName, Repository, Endpoint, and job.

Two perspectives on Flow Instances summary are available:

a restricted view, containing exclusively information about status and source/destination file name, displayed entirely on the screen with no scrolling needed, and useful for daily monitoring activities

an extended view, containing all middleware information in addition to that of the restricted view, and useful for having all the Flow Instance data on the screen as a whole, even if scrolling is needed.

It is possible to switch from one perspective to the other, and save the preferred perspective as the default one, having it opened at each Monitor start.

Operators’ capabilities are restricted to flows for which they are responsible.

The view shown at startup as the default view (Middleware or Business) depends on the user role.

It is possible to access a Flow Instance detail, including all its activities, by simply clicking on any of its properties.

3.1.8 Flow Instance Runtime Governance

When multiple operators work simultaneously on investigation/troubleshooting in an MFT infrastructure in which hundreds of file transfers are performed every hour, various problems may arise:

Taking charge of the management of a flow

Enrichment of the flows with management notes on the state of progress of the investigation phases visible to other operators or managers.

Orchestration Suite is NOT a trouble ticketing tool; it offers basic functions for annotating flows with notes, and it can be easily integrated with the customer's own trouble-ticketing tools.


In particular, it is possible:

To define a set of runtime governance statuses, specific to the customer's context, that enriches the base set provided with the product (consisting of an initial and a final status). The new statuses are created by editing the configuration file <product_home>/config/services.properties during the product deployment phase. A status is a label that can be associated to one or more Flow Instances. New statuses can be added over time, and are usable as soon as they are defined.

User-defined statuses can be deleted; deleted statuses already associated to Flow Instances will remain in use, although they cannot be associated to new Flow Instances, even in the case of historic archiving. The predefined statuses of the product CANNOT be deleted.

Status transitions: there is no mechanism to be defined or used in the transition from one status to another; the transitions are free, and the product does not perform any checks on the progress of statuses, since statuses are just descriptive labels.

Each operator can add statuses and descriptions (an append method is used, rather than overwriting); the creation date of the note is recorded, together with the logged-on operator if security is activated, otherwise ADMINISTRATOR.

Statuses and descriptions are NOT modifiable or deletable once they have been created.

The detailed view of the notes is visible together with the Flow Instance specifications.

A synthetic view of the notes is visible in the summary of the Flow Instances.

Search filters are available to search for Flow Instances in accordance with the governance status (for example, all NON-managed flows) or the author (for example, all the flows that a particular user is managing).

All the data relating to statuses, descriptions and owners are saved in the history with the Flow Instance.

3.1.9 Composite Flow-oriented Monitoring

The Composite Flow view enables monitoring of coarse-grained Data Flows involving a large number of Flow Instances. Some MFT products, like IBM WMQ FTE, offer the opportunity to perform multiple file transfers using a single operation (for instance using regular expressions, or sending a directory, possibly recursively), and provide in each produced log the information needed to correlate all the activities, so that the final user can have a comprehensive and single view of the whole Data Flow process.


Composite Flows are composed of an operation having zero or more input Flow Instances and/or zero or more output Flow Instances, according to the pattern and the granularity underneath the Composite:

IBM WMQ FTE send directory command:

one operation

zero input Flow Instances

n output Flow Instances, one for each file transfer involved by the operation

IBM WMB SPFE mediation, receiving one file which is then routed to n destinations:

one operation

one input Flow Instance

n output Flow Instances, one for each file transfer involved in the route mediation

Correlation between operation and Flow Instances is enabled by keys produced by the target products.

All Flow Instances belonging to a Composite Flow have their own status (completeness and error), evaluated as usual from the information coming from the product. These instances have an external key to the Composite Flow id they belong to, which is displayed in the WebUI.Monitor.

The Operation belonging to a Composite Flow has its own status (completeness and error), evaluated from the information produced in its corresponding log.

Each Composite Flow has its own status (completeness and error), evaluated using the statuses of all its components:

A Composite Flow is complete when its Operation is complete and all its Flow Instances are evaluated as complete; the number of Flow Instances involved in the Composite is a key piece of information needed to understand whether the whole process is complete, and is usually produced by the target system in the log; a Composite Flow is running until it becomes complete.

A Composite Flow has an error when one of its components (Operations or Flow Instances) has an error.

In the Composite Flow View a summary for each new Composite Flow instance evaluated is shown, with a unique id generated by the OSManager.

Search filters are available as usual to access and monitor Composite Flows.


3.1.10 Activity-oriented Monitoring

The Monitor WebUI provides the possibility to query and view each individual Activity Record.

Various search filters are available.

It is also possible to search by Activity Records that are not part of any flow, not yet summarized, or by summarized Activity Records that are part of a Flow Instance.

For summarized records it is possible to view the flow to which they belong, and all the other Activity Records that are part of the same Flow Instance.

Examples of individual Activity Records:

FTP flows – FTP Put, FTP Get, FTP/S Put, FTP/S Get

3.1.11 Repository-oriented Monitoring

The Monitor WebUI gives the possibility of performing Repository-based flow monitoring: this means it is possible to analyze all the activities performed in each Repository of an MFT infrastructure, checking metrics such as total flows executed, total completed, total in progress, total with an error; it is possible to distinguish the data in accordance with the Endpoints.

In this way the operator has a direct view of the entire infrastructure and what is happening, and can use this as the basis for carrying out investigation/troubleshooting/performance tuning actions.

The view by Repository/Endpoint is integrated with the view by Flows, making it possible to analyze in detail all flows in error on a given Repository.

This use case is available only on the on-line portion of the DB; it calculates the metrics dynamically on each query, and can cause a certain slow-down as the number of records contained in the on-line portion of the DB increases.

3.1.12 User defined monitoring filters

Typically, the operators controlling an MFT infrastructure are organized to divide up the work according to various criteria:

By repository

By Logical Area

By Flow

By Company

By error type

It is therefore useful to customize the Monitor WebUI and the most frequently executed queries, to avoid having to configure them continuously.


Orchestration Suite provides the possibility of persistently saving the queries that each logged-on user has created.

It is also possible to configure Monitor WebUI preferences so that the preferred query is performed directly when the application starts up.

This function is only available on the on-line portion of the Monitor WebUI.

3.1.13 Database and persistent data management

The OSManager performs a large number of operations on massive persistent data, both in its OSEngines and in its human-oriented access through the WebUI.

In order to have the product working efficiently, respecting the latency, performance and data retention requirements defined in the target environment, the following crucial aspects must be considered:

OnLine/History DB portion management

RDBMS system management

Each one of these two aspects must be managed carefully, in order to avoid critical performance degradation and latency problems in monitoring the MFT infrastructure.

OnLine and History DB portion management

Orchestration Suite is a product built for robustness; all the data collected are managed in a persistent and transactional manner in all the stages of processing that they undergo, from capture by the Agents, to forwarding to the Manager, up to final cleaning when no longer useful.

The data that enter into the system (Activity Records), and the correlation and summary data processed on the basis of these (Flow Instances, Composite Flow Instances), have differing uses, by different actors, and therefore different life cycles:

Monitoring: the operators that oversee the MFT infrastructure are interested in what is happening in the infrastructure, and must therefore monitor the activities carried out with minimum delays, and intervene rapidly for solving the problems.

Auditing: once the flows are completed, they leave the domain of the operators and enter the domain of the various actors interested in understanding what has happened, when, if certain flows have actually taken place on a particular date etc.

3.1.14 History Data Browsing

The data archived in the historical portion of the DB can be navigated using the WebUI.

The look & feel of the History portion of the WebUI is identical to the on-line portion, including middleware filters and views (light and full), business filter and view, and the summary form of the Flow Instances.


Information is available relating to:

Flow Instances and the link with the Flow Definition identified at the moment of summarizing

Activities identified for each flow

Governance Notes associated by the operators to every flow

Restrictions:

Governance Notes: notes cannot be modified once they have been archived in the history portion.

Customized queries: the customized query save function is not available in the history portion.

Repository and Logical Area Monitoring: not available on the History portion; it is possible to create and extract reports with the same information.

3.1.15 MFT Topology Definition

So that a flow can be created, regardless of the method used (discovery/bottom-up, modeling/top-down, duplication copy/paste), it is mandatory for the information relating to the topology to be present in the system:

Company, Location, Environment

Repository, Endpoint

Various alternatives exist for taking a census of this information:

Company, Location, Environment

Explicitly, with the use cases available in WebUI.Modeler.Topology

Implicitly, by specifying a single value, considered as the default and used for all the flows censused via Discovery; the default value is specified in the file <product_home>/config/middlewareLoader.properties.

This good-enough approach enables the flows to be discovered and put into production rapidly, and is usable when it is NOT required to census the entire topology completely and correctly, and therefore, for example, when reports on Company, Location and Environment, or Company-oriented business monitoring reports, are not required.

Warning: This file is used exclusively by the Flow Discovery phase, in order to link the Repositories/Endpoints identified in the Flow Instance subject to the Discovery to a default Company/Location/Environment.

Warning: Do not change the default values in the config/middlewareLoader.properties file before each Flow Discovery, since this is an error-prone approach that can cause serious data inconsistencies.
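As an illustration only, such defaults might take a form like the following; the actual property keys in middlewareLoader.properties are not documented here, so these names are hypothetical:

# hypothetical keys: the real names in middlewareLoader.properties may differ
default.company=ACME
default.location=Genova
default.environment=Production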


Repository, Endpoint

Explicitly, with the use cases available in the sposcsh CLI, which allows a census of the Repositories/Endpoints in the DB and their association to Companies/Locations/Environments

Implicitly, during the Flow Discovery phase on the basis of a recognition criterion, using the Repositories/Endpoints found in the Flow Instance that originated the criterion.

Considering the procedures normally used in order to create flows, the following scenarios are identified:

flow created in discovery/bottom-up mode

The creation algorithm for a Flow Definition based on a criterion is able to use the Source and Destination Repositories/Endpoints found in the Flow Instance

These Repositories/Endpoints are searched for in the DB

If they were already censused and linked to a Company/Location/Environment, that link is used:

• the census took place during a previous discovery that involved the same Repository

• the census took place explicitly, using the sposcsh CLI, which allows Repositories/Endpoints to be censused in the DB and associated to Companies/Locations/Environments

If they have never been censused:

• The default values for Company, Location and Environment specified in the middlewareLoader.properties file are used

• These defaults are proposed to the user in a WebUI form

• The user can change the proposed link between the Repository and the Company, Location, Environment, using values entered previously via the use cases in WebUI.Modeler.Topology

• When the flow is saved, the Repository/Endpoint and their link with the selected Company/Location/Environment are also censused

flow created in modeling/top-down mode

The middlewareLoader.properties file is NOT used

The Company/Location/Environment values censused in the system by means of the WebUI.Modeler.Topology use cases are used

The Repository/Endpoint values censused explicitly using the sposcsh CLI are used

3.1.16 Flow Discovery

The creation of a flow can also pass through the recognition of its executions. Consider the following typical example:

An automated file transfer is scheduled for each weekday, at 22:30

The name of the sent file is always identical, except for a suffix: a unique code that identifies the day.

By analyzing the monitoring data, the person in charge of the file transfer infrastructure identifies this periodic sequence of recurrences with common characteristics, and after discussion with those in charge of scheduling and applications, establishes that these are, in effect, all recurrences of one flow.

At this point, the user wishes:

To create a recognition rule, so that all the weekday recurrences of that flow are linked together

Possibly, to create a Flow Definition implicitly on the basis of the recurrences identified

To make sure that, in the next stages of Discovery, the recognized recurrences are excluded, so as to be able to concentrate on those not yet recognized

To make sure that all the recurrences of that flow that enter the system after the criterion creation date are also considered as recognized.

The process described is called Flow Discovery, and the Orchestration Suite provides functions that help users in the phases of identification and creation of flows based on analysis of the recurrences captured.

It is a heuristic process, consisting of successive refining phases, and which passes through identification of the naming conventions used over time in the file transfer infrastructure.

If all the file transfers have in fact been deployed according to a single naming convention, the process is deterministic and the Discovery phase is simple.

If, on the other hand, different naming conventions have been used, depending on the historical period, on the integration plans put into production and on the succession of various managers/planners, it can happen, for example, that:

A portion of the source file name was used for a given period of time

A directory for file transfer to each target application was used for a given project

At this point, the process is less deterministic and longer, and potentially requires continuous cycles of criterion creation, overlap verification, criterion deletion and creation of new criteria, in order to minimize the risk of incorrect recognition during execution.

Recognition criteria are rules defined in the Orchestration Suite that implement the naming conventions in production in the MFT infrastructure in question.

Each recognition criterion is therefore used in order to:

Group together all the recurrences of a given flow

Create a Flow Definition implicitly, on the basis of the common information extracted from the various linked recurrences

Make sure that each new Flow Instance that is a recurrence of the same flow is linked to its siblings and recognized as a recurrence of the Flow Definition created.

Obviously, for the system to function correctly, the recognition criteria created must NOT overlap, and are therefore not ambiguous.

A recognition criterion consists of the following fields:

Source: Repository, Endpoint, FileName

Destination: Repository, Endpoint, FileName

FileId

and the following rules can be used on these fields (a minimal code sketch of these rules follows the list):

Match exactly a specified value

The first n characters are equal to a specified value

The last n characters are equal to a specified value

The characters between positions x and y are equal to a specified value
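Taken together, these rules amount to simple positional string tests on the criterion fields. The following minimal Java sketch illustrates their behavior; the class and method names are illustrative assumptions only, not part of the product:

public final class CriterionRuleSketch {

    // The field matches a specified value exactly.
    static boolean matchExactly(String field, String value) {
        return field.equals(value);
    }

    // The first n characters are equal to the specified value (n = value length).
    static boolean firstCharsEqual(String field, String value) {
        return field.startsWith(value);
    }

    // The last n characters are equal to the specified value.
    static boolean lastCharsEqual(String field, String value) {
        return field.endsWith(value);
    }

    // The characters between positions x and y (1-based, inclusive)
    // are equal to the specified value.
    static boolean charsBetweenEqual(String field, int x, int y, String value) {
        return x >= 1 && x <= y && y <= field.length()
                && field.substring(x - 1, y).equals(value);
    }
}

For the weekday flow described earlier, for example, whose file name is fixed except for a daily suffix, a first-n-characters rule on the common prefix of the Source FileName would be a natural choice.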

To summarize, Orchestration Suite provides functions to:

Search for recurrences through a phase that analyses related instances, using search filters on the instances, simulating aggregations, creating and applying recognition criteria, and saving the recognition criterion once a flow has been identified.

Activate a warning for ambiguous criteria during the Flow Instance recognition/Flow Definition phases (see…). Since this is a heuristic process involving a large quantity of data, some overlap between the criteria created is possible.

Optionally, the recognition algorithm does not stop at the first Flow Definition found for a Flow Instance, but tries to match them all; if it finds more than one matching Flow Definition, it generates a warning in the WebUI Monitor in relation to that Flow Instance.

Manage both static and dynamic recognition criteria. Static recognition criteria are used to link Flow Instances and Flow Definitions during the Flow Discovery and Flow Consolidation phases. Dynamic recognition criteria are used to link Flow Instances and Flow Definitions during the Runtime Monitoring phase. The two are kept separate because the flows could be discovered according to one naming convention and subsequently, after normalization, be deployed according to a new one.

Manage versioning of recognition criteria. Both static and dynamic criteria are implicitly versioned; only dynamic criteria linked to active flows are used to recognize Flow Instances, and only static criteria linked to flows which have not been deleted are used in the consolidation operation.

Warning: when a criterion based on the Destination Repository/Endpoint or filename is used, it may happen that the criterion does not match, and a flow is not recognized, until the log activities relating to the Destination arrive in the system. This defers actual recognition, cutoff and completion evaluation until the data arrives from the target Destination node, which can cause issues where customer deployment requirements are not satisfied as needed.

Composite Flow Discovery

All the above applies to Composite Flows too. All Flow Instances evaluated as belonging to the same Composite Flow can be discovered at once.

Once a search has been performed, and just before creating the recognition criteria as usual, all Flow Instances belonging to the same Composite Flow must be retrieved: this can be accomplished simply by clicking on their Composite Flow Id, and then creating the recognition rule as usual. This further phase lets the user check and verify that all those Flow Instances effectively belong to the same Composite before committing the recognition rule.

Warning: the Composite Flow Discovery phase in this release is limited to IBM WMQ FTE Composite Flows only. SPFE Flows therefore cannot be discovered, nor can SPFE Flow Definitions be created. Flow Instances belonging to SPFE Flows (as consumed by a mediation, or produced by a mediation) can actually be discovered, as usual Flow Instances can, but the discovered Flow Definition represents the whole SPFE Flow only partially, and this may give rise to issues when SPFE Flows are finally supported by the product.

3.1.17 Flow Creation: bottom-up

The Orchestration Suite offers a very rich data model for Flows, which consists of:

so-called middleware data, specified primarily in the log information produced by MFT systems; some examples:

Source/Destination FileName

Source/Destination Repository/Endpoint

so-called business data, which do not appear in the log data, and can be specified by the end user in order to better describe the role of the flow within the company business processes:

Sender/Receiver Company

Sender/Receiver Application

Logical Area of reference

Structure of the steps of the Flow

Items involved in the Flow

Job used for possible scheduling

Possible SLAs

From each recognition criterion identified by the user it is possible to create a Flow Definition, by means of an implicit process called bottom-up creation that the Orchestration Suite provides.

This feature allows the Flow Definition creation process to be completed from an analysis of the Flow Instances running, and to enrich the middleware information contained in the Flow Instances with business information, thus defining a Flow completely.

The user therefore has the possibility to specify business information in addition to the middleware information, which is instead obtained directly from the recurrences of the Flow Instances analyzed:

Structure of the Flow, in terms of steps

Source/Destination Repository and Endpoint

Source/Destination FileName

Item Name

Recognition Criteria

Furthermore, during the Flow Creation phase, the structure of the flow and its steps are reconstructed from the analysis of the operations that are part of the Flow Instance; for example:

the bottom-up creation of a flow consisting of two correlated FTP put and FTP get operations originates a Flow Definition with two steps, one corresponding to the FTP put operation and the other to the FTP get operation.

As far as the Repository and Endpoint identified in the Flow Instance are concerned, in order for them to be implicitly entered in the DB they must be associated to a company/location/environment. A web form appears if the Source and/or Destination Repository are not yet associated to a company/location/environment; the web form suggests the default association, defined in the middlewareLoader.properties configuration file. The user can change this association by selecting another Company/Location/Environment previously defined in the DB.

When this phase is completed, the Flow creation wizard starts up, in which certain information is already set, while other information can optionally be added:

Notification: in order to generate notification events for changes in the status of the Flow Instances

Calendar and cut-off rules: in order to define SLAs on the completion, or the duration, of the instances of this flow

Other Information such as LogicalArea, Sender/Receiver Application and job, useful in the subsequent business/middleware monitoring and reporting phases (see section 3.16 - Flow Monitoring: Middleware).

At the end of the editing phase, after saving, the flow must however be validated and extracted before it can be put into production.

As soon as this Flow is active, each new instance that is executed in the infrastructure, and then summarized in the Orchestration Suite, will be recognized as an Instance of this Flow.

Warning:

Only linear file transfers, which are nevertheless the largest class available, can be managed through this scenario. IBM WMQ FTE Composite Flows belong to this linear class, since they are always executed in a point-to-point fashion without any fan-out.

With the current implementation, the calculation of SLAs on completion must wait for the execution plans to be loaded again; the first SLAs will therefore generally be visible after 48 hours.

IBM WMQ FTE Flow bottom-up creation

IBM WMQ FTE Composite Flows can be created using the bottom-up approach. During this phase a Flow with a single "MQFTE send" operation is created.

The Item Type on which the step operates is evaluated according to the recognition criteria defined for that flow:

Item Type File Group: used when a criterion such as starts, ends or contains, based on the fileName, is used.

Item Type Directory: used when an exactly criterion, based on the Repository/Endpoint, is used.

3.1.18 Flow SLA Configuration

Orchestration Suite offers the user the possibility to associate SLAs (cut-offs) with the execution of the flows. SLAs are configured in the Flow Definition creation/editing wizard.

For each Flow Instance summarized in the system and recognized, the SLAs defined in the corresponding Flow Definition are evaluated.

There are two alternative categories of configurable cut-offs for each flow:

Cut-off on duration: an upper limit is set for the overall duration of all the instances of the flow, for example 120'; a violated cut-off status is calculated when the end-to-end duration of the flow exceeds the configured limit. This feature is particularly useful for service provider companies that, when they receive a file, have to process it and return it to the sender within a maximum time limit.

Cut-off on completion: a time is defined, in absolute terms, within which the flow must be completed from a business perspective: for example, 19:00. A second time is also defined, in absolute terms and earlier than the completion time, beyond which the flow is considered to be in warning status, giving the operators time to focus attention on this flow and/or to intervene to resolve the error situation that is about to occur: for example, 18:45, if 15' is long enough to understand why the flow is late. This feature is fundamental for scheduled flows, where failure to complete a flow causes problems in the execution of the subsequent phases. This type of cut-off makes use of a calendar in order to define the frequency of the flows. The life cycle of the calendars (import, export, definition of validity) is managed through the WebUI.
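As a worked illustration of the two categories: with a cut-off on completion of 19:00 and a warning time of 18:45, an instance that completes at 18:30 is on schedule; an instance still running at 18:45 enters warning status; an instance not yet complete at 19:00 enters violated cut-off status. With a cut-off on duration of 120', an instance starting at 10:15 violates the cut-off if it is still running at 12:15, regardless of the absolute time of day.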

Warning:

It is crucial that exactly the same calendar defined in the scheduling product is also deployed in the Orchestration Suite, and that these two calendars are kept aligned; otherwise, misalignments and serious losses of synchronization in the summarized data can occur (see section 3.1.33 - Synchronization and Out-of-sync Management).

Orchestration Suite supports a function to export/import a calendar in OPC format, or in a proprietary format.

For the SLA on completion, if the execution of a flow terminates on the day after the start day, this must be specified explicitly in the configuration of the SLA.

In the case of SLA on completion, this release supports the execution of a single Flow Instance per day for each Flow Definition.

In version 2.5.0 of Orchestration Suite you can view the time-based rules generated but not yet evaluated from the WebUI.

These data are useful only for understanding whether a rule has been generated and/or calculated for a specific flow. The user can filter on the Flow Name and check whether a rule has been generated and/or evaluated for that specific flow. If rules are present, it means that they have been loaded but not yet evaluated, because their Time to check is later than the current time.

The following screen shows the use case in which the list of rules is displayed:

Figure 3.4: Show Cutoff Rules

By selecting a Rule Instance you can use the View button to display its details.

The following screen shows an example of a time-based rule:

Figure 3.5: Cutoff rules details

The attributes are:

Rule Instance: Instance Id of the rule

Time to check: Date and time at which the rule is triggered

Action Type: The possible values are E(rror), W(arning)

Flow Name: Name of the flow for which the rule was defined

Flow Revision Fk: Version of the flow

Calendar Name: Name of the calendar associated to the rule. This is present only if the rule is on completion and not on the duration

Calendar timestamp: date and time at which the calendar associated to the rule was changed

Timestamp: date and time at which the rule was loaded

3.1.19 Flow Recognition

During the summarizing phase, a recognition algorithm is applied to each Flow Instance, in order to verify whether it is a recurrence of one of the Flow Definitions created in the Modeler; this algorithm uses the dynamic recognition criteria previously described.

The standard recognition algorithm is able to evaluate whether the recognition criteria are ambiguous, that is, whether a Flow Instance can be associated to more than one Flow Definition: this indication is shown in the WebUI Monitor as a question mark in place of the Flow Definition Name, and all matching Flow Definition names are logged in the log files. This is an optional policy, which is very useful during the human-driven discovery phase; for performance reasons, it is suggested to turn this option off in production environments, once it has been verified that the whole set of recognition criteria defined is not ambiguous in any way. To turn this policy on, set the property named flowRecognition.checkAmbiguousCriteria in the file <product_home>/config/services.properties.
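For example, assuming the property takes a boolean value (an assumption here; check the comments in the file itself for the accepted values):

# <product_home>/config/services.properties
flowRecognition.checkAmbiguousCriteria=true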

When recognition criteria are ambiguous, an investigation must be carried out to identify the overlap, and a new Discovery phase must be performed to refine the criteria further, thus eliminating any risk of error in the recognition between Flow Instance and Flow Definition.

Composite Flow Recognition

Composite Flows are instances of Flow Definitions in the same way as Flow Instances are; they too are therefore recognized against Flow Definitions.

The algorithm to recognize a Composite Flow follows these rules:

A Composite Flow Instance is recognized as soon as one of its Flow Instances is recognized and evaluated as belonging to that Composite Flow Instance

The Flow Definition name is then propagated from the Flow Instance to its Composite Instance

Warning: only IBM WMQ FTE Composite Flows can be Recognized in this release.

3.1.20 Flow Status Evaluation: Business

The algorithm that evaluates the business completion of a flow intervenes once a Flow Instance has been recognized as a recurrence of a Flow Definition.

Regardless of whether the Flow Definition has been created implicitly in bottom-up mode, after a Discovery phase, or explicitly in top-down mode, it is the structure of the Flow Definition that guides the completion algorithm.

For linear Flows, the rightmost step of all those included in the Flow Definition must in fact be completed before the Flow Instance is considered complete from a business point of view.

Composite Flow Evaluation: Business

In this release business completion for Composite Flows is supported for IBM WMQ FTE flows only.

3.1.21 Flow SLA Evaluation

The evaluation of the SLA of the flows is based on the use of execution plans. The execution plans specify what must take place, that is, which flow must be completed, and when, in terms of the time frame.

This absolute time frame is therefore used as an event for the calculation of statuses.

The execution plans are fed by means of two mechanisms.

For flows with cutoff on completion: the plan is generated considering all the SLAs defined for the active flows. The plan is generated at each activation of the product, and then daily. Each plan is generated one day in advance: at the first startup (day0), the plan for the remaining part of day0 and for day+1 is generated; on day+1 the plan for day+2 is generated. This policy allows correct management of the flows that complete on day+1 relative to the start day, day0.

Warning:

In this release, there is a latency for the evaluation of SLAs for flows put into production; if a flow is activated on day0, its SLAs on completion will be used:

• for the instance of day+2, since its SLAs are considered in the execution plan that will be generated on day+1

• for the instance of day+1, if at the point of activation the calculation of the execution plan had not yet occurred.

For the mechanism calculating the SLAs on completion to operate correctly, the execution plans must be generated daily, without interruption and without gaps; otherwise problems may occur in the synchronization and alignment between what is planned and what is executed, as described in section 3.1.33 - Synchronization and Out-of-sync Management.

For flows with cutoff on duration: the plan is generated for each summarized Flow Instance, considering its start time and the duration cutoff value defined for the corresponding Flow Definition, from which the completion cutoff for this instance is calculated.
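For example, a Flow Instance summarized with a start time of 10:15, whose Flow Definition defines a 120' duration cutoff, produces a time-based rule whose Time to check is 12:15; if the instance has not completed by then, it is placed in Cutoff status.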

The evaluated cut-off status is shown in the WebUI Monitor and in the notifications, and contributes to the calculation of the reports. The possible values for these SLAs are:

OnSchedule: specifies that the flow is correct in relation to the defined SLA.

Cutoff: specifies that the flow is delayed in relation to the defined SLA, and has therefore not been completed within the defined duration or time frame.

Warning: supported only for cut-off on completion; used to draw the operators' attention to a potential cut-off situation that is about to take place.

OnSchedule-AfterCutoff: supported only for cut-off on completion. It can happen that a flow is effectively completed within the planned time, but the ActivityRecords carrying this information are processed after the completion deadline because of latencies in the infrastructure. The flow is therefore in Cutoff status when the completion time is reached, and subsequently, on the arrival of the last ActivityRecord, in OnSchedule status. To distinguish and highlight this borderline situation to the operators, who may already have started to resolve the problem while consulting the WebUI or receiving Notifications on the SNMP console, the product uses this separate status.

Composite Flow SLA Evaluation

Cut-offs on completion and on duration are evaluated for Composite Flows too. In evaluating this information, the following rules apply:

The Composite Flow timestamps are used for cut-off evaluation:

start and end timestamps for the duration/relative cutoff

end timestamp for the completion/absolute cutoff

The two timestamps are evaluated in this way:

Start timestamp: the earliest of all the start timestamps of the Composite Flow Instance components, in terms of Operations and Flow Instances

End timestamp: the latest of all the end timestamps of the Composite Flow Instance components, in terms of Operations and Flow Instances

Cut-offs are evaluated on Modeled Composite Flows even when it is not known, and hence unpredictable, how many Flow Instances will belong to each specific daily Composite Instance of that Flow.

The Composite Flow and all its Flow Instances always have the same cutoff state (short periods may exist in which the OSEngines are still working and the information available in the WebUI is not yet final).

As soon as a Composite having at least one Flow Instance attached is evaluated as being in warning state (cutoff on completion), its warning state is propagated back to all its Flow Instances; this state change is notified where configured.

As soon as a Composite having at least one Flow Instance attached is evaluated as being in cutoff state (cutoff on duration or on completion), its cutoff state is propagated back to all its Flow Instances; this state change is notified where configured.

For cutoff on completion, in the case of ghost Instances:

when in warning state: as soon as the first Flow Instance belonging to a Composite Flow is evaluated and merged with that ghost, it is set to warning state and its warning state is propagated to the Composite Flow, and then propagated back to all the other Flow Instances belonging to the same Composite

when in cutoff state: as soon as the first Flow Instance belonging to a Composite Flow is evaluated and merged with that ghost, it is set to cutoff state and its cutoff state is propagated to the Composite Flow, and then propagated back to all the other Flow Instances belonging to the same Composite

Each time a new Flow Instance is summarized and evaluated as belonging to a specific Composite Flow, it inherits the cutoff status of its Composite.

Warning: non-event cutoffs, such as cutoff on completion for planned-but-not-yet-run flows, are evaluated and monitored only on Flow Instances, not on Composite Flow Instances.

Warning: SLA evaluation is available on IBM WMQ FTE Composite Flows only, the only class of Composite Flows that can be Discovered/Modeled.

Warning: The same restriction on the number of instances that can be correctly managed on a daily basis applies to Composite Flow Instances and Flow Instances when cut-off on completion is configured: only one Instance per day (a single Flow Instance when not belonging to a Composite, or a single Composite Instance) can be executed when a cut-off on completion is configured for a Flow.

3.1.22 Flow Status Notification: Business

Every change in business status evaluated for a Flow Instance is published to external systems, exactly as described for changes in middleware status in section 3.1.5 - Flow Status Evaluation: Middleware.

The business events generated are:

A flow is on schedule, that is, has been completed before the cut-off time

A flow is in cut-off state, that is, it has exceeded the completion time, where a completion cutoff was defined for it, or has exceeded the duration limit, where a duration cutoff was defined for it

A flow has exceeded the warning time (applies only to cutoff on completion time)

Notification of the business status obviously takes place only for recognized Flow Instances, and only for those Flow Instances associated to a Flow Definition with Notifications activated.

To activate the desired notifications, operate flow by flow using the flow editing wizard.

Warning:

In this release, the business notification and the middleware notification have the same record format; the business information evaluated in the Enrichment for this Flow Instance is not published in the notification.

Compared with the middleware notification, the business event notification also specifies the status of the SLA and the name and version of the Flow Definition recognized as associated to the Flow Instance.

3.1.23 Flow Monitoring: Business

The main features of Business Monitoring are:

monitoring Flow Instances SLA: in cut-off, on schedule, or in warning state

monitoring Flow Instances in running or complete business status

monitoring Flow Instances in error or correct middleware status

monitoring Flow Instances filtering on business properties, like Companies, Logical Area, Applications, Users

the opportunity to access each recognized Flow Definition with a single click.

Business Monitoring views are delivered through a Web User Interface that provides operators with a self-refreshing control panel which displays status data and Flow execution data classified in Source and Destination sections.

At each refresh, a what's new indication is shown for all records whose status has changed since the previous refresh.

Business filters are available, together with the common filters, to let the user monitor only a specific subset of Flows, according to their Common, Source or Destination properties, such as Company, Application, User, Logical Area, Item, Location, Environment and job.

Only one perspective on Flow Instances summary is available:

an extended view, containing all business information classified in Common, Source and Destination sections

It is possible to access a Flow Instance detail, including all its activities, simply by clicking on any of its properties.

3.1.24 Flow Creation: top-down

The top-down method of Flow creation is complementary to the bottom-up method:

while in the bottom-up method the user creates a Flow Definition on the basis of the Flow Instance, in the top-down method the user creates the Flow Definition from scratch

Using the WebUI.Modeler and the Flow creation wizard

while in the bottom-up method only flows with a linear structure can be created, in the top-down method there are no limits on the description of the structure that the Flows can have

Limits appear subsequently in the recognition and the calculation of the business completion of these flows

while in the bottom-up method the middleware information is deduced from the Flow Instances subjected to Discovery, in the top-down method the middleware information must already have been expressly censused

for the census of Topology and Repositories/Endpoints, refer to section 3.1.15 - MFT Topology Definition

for the census of Source/Destination FileNames, creating Items and deploying them on Locations/Environments, and defining specific properties of the files according to the type of Location (Mini, iSeries, mainframe)

in both the bottom-up and the top-down methods, the business information must already have been censused in the system before it can be associated to the Flow

using the use cases provided by WebUI.Modeler to create Companies, Logical Areas, Applications, Users

some information is specified directly during the Flow definition phase

Job Name (JCL, shell, bat)

SLA; specific calendars must already have been imported

Recognition criteria, fundamental in order for Flow Instances to be subsequently recognized as recurrences of this Flow Definition

Once defined and saved, a Flow enters its Governance cycle in which it can be validated, activated/extracted, modified, or withdrawn from production/deleted.

The activation of a Flow is a process that can lead to the creation of job files (JCL, sh or bat, depending on the platforms specified for the Steps of the Flow) containing, for example, file transfer operations. These job files can be put into production, passing, for example, through a Change Management system.

Once a Flow Definition is activated, and the corresponding jobs are put into production, their execution will generate Data Flows which, while crossing MFT systems enabled for the production of Logs, and configured to send the Logs to the Orchestration Suite Manager, will be represented as Flow Instances within the Monitor, and recognized as instances of the Flow Definition.

3.1.25 Modeler Entities schema extension, Flow Custom Tagging

The schema of the DB and the entities of the Modeler can be extended by creating new columns for specific entities in the form of tags, using the CLI.

In this way it is possible to customize the DB according to specific characteristics of the host environment, handling attributes not supported in the base product.

It is possible to specify the values for these attributes using the WebUI; it is furthermore possible to search for specific instances of entities based on the values of these tags.

In addition, the tags are semanticized, so that behavior can be associated to their values if required; this version supports the following:

String type: data entry, display, search.

URL type: data entry, display, navigation with opening in a new browser window.

This mechanism can be used, for example, on Flows in order to address classification needs expressed freely by the end users; when hundreds or thousands of flows are defined in the Modeler component of the Orchestration Suite, requirements typical of managing large quantities of data arise in accessing and categorizing flows:

association of customized tags to the flows

"mnemonic" rapid, personal and unstructured searches

enrichment of the metadata of flows with structured tags not supported explicitly in the product.

This mechanism is also used, for example, in order to link a flow to an external information source, or, using the URL type, to a wiki containing information that is relevant to it.

3.1.26 LogicalArea-oriented Monitoring

For censused flows, it is possible to perform monitoring on the on-line data based on LogicalAreas: the WebUI Monitor offers the possibility to monitor all the activities executed in the MFT infrastructure through summary data on the flows that are executed, completed, in progress or in error, grouping the data by Logical Area.

In this way the operator has a direct view of the entire infrastructure and what is happening, and can use this as the basis for carrying out investigation/troubleshooting actions.

The view by LogicalArea is integrated with the view by Flows, making it possible to analyze in detail all flows in error on a given Logical Area.

This use case is available only on the on-line portion of the DB; it calculates the metrics dynamically on each query, and can cause a certain slow-down as the number of records contained in the on-line portion of the DB increases.

In this version you can view the full path of the Logical Areas in WebUI->Monitor->Logical Area.

3.1.27 Flow Creation: Duplication

The creation of a flow is an operation to be repeated for each of the flows that you wish to census, therefore even hundreds of times, depending on the adoption process established.

The flows have a similar structure to each other - they may be, for example, linear - while they differ in other categories of information, such as the file name in question, the application that produces/consumes it, the Logical Area or the SLA.

In order to make the flow creation mechanism more rapid and efficient, the Orchestration Suite provides a number of use cases:

WebUI: duplication starting from an existing flow, specifying just the new name. A new flow is created, identical in structure and values to the flow on which it is based. At this point, the user can edit the flow in order to modify specific parameters. Restriction: in order not to encounter the problem of ambiguous recognition criteria, the new flow cannot have the same criteria as the previous one; the new flow is therefore created without associated criteria, which must then be explicitly defined by the user so that the instances of this flow are effectively recognized at run time.

WebUI: duplication starting from an existing flow, but specifying new values directly. A new flow is created from an existing flow using an inheritance approach: all the properties of the master flow are inherited; the user can explicitly redefine some of these with new values, while for other fields (for example, FlowName, Recognition Criteria) new values are mandatory.

Script-based: the definition of a flow can be exported using the CLI; the resulting text file in XML format can be edited and modified: notes directly in the file describe which properties may be modified and which must be modified (for example, FlowName). Once modified, the XML file can be re-imported using the CLI, thereby creating a new flow. This approach can be automated, using a scripting language such as jython or ant, in order to create a great number of flows for import in an automatic, precise, repeatable and tested manner, on the basis of templates, using, for example, the information captured in the discovery phase.

The flows created by means of duplication must however be validated and activated before they can be recognized by the corresponding instances.

3.1.28 Flow Governance and Lifecycle Management

By Governance we mean the set of roles and procedures that allow a rich and articulated environment to be managed with order and discipline, enabling all resources and roles to carry out their activities correctly.

In particular, in an MFT infrastructure, the problems that arise are:

Which flows are being managed? Who is in charge of them? Who can modify them?

Which SLAs must the flows in execution respect?

Who made the last modification to the PaymentOrders flow? What has been changed?

Which new flows have been put into production this month? Which have been withdrawn?

Orchestration Suite offers a set of functions to introduce Governance services into the MFT world:

Statuses and versioning: the product includes a predefined and non-extensible set of statuses, used in order to annotate Flow Definitions:

New: the flow has been created and saved, but its editing is not yet complete (otherwise it would have been validated or activated). Editing a flow may take a few minutes, for example when duplicating an already existing flow, or several days, when it is essential to interact with the application managers who requested deployment of the flow and who must provide all the relevant details

Validated: the flow has been successfully validated, and is therefore ready to be put into production. The validation plug-in has been used in order to validate this Flow and the naming conventions used.

Active: the flow is considered in production; the customizable extraction plug-in has been used, if configured, to produce the jobs (JCL, .sh, .bat) invoking the operations of the MFT systems, deployable in production via the Change Management systems and the industrial procedures for managing the test systems, production systems etc., in accordance with the policies of the host environment

Updated: the flow, after having been activated, has been modified, thus generating a new version of the flow, with a version number one greater than that of the original flow, and is in editing status. The original flow remains in production, in Active status, until the version being edited is validated and activated.

Deleted: the flow has been put out of production. Its recognition criteria will no longer be used in order to recognize Flow Instances

Status transitions: carried out using the commands in Modeler WebUI

History: all the modifications are preserved over time and never deleted; it is possible to access previous versions of flows, to see who modified them and, by comparison, also to understand how they were modified

Security: very specific roles can perform Governance operations on the flows; the mapping of users to roles obviously depends on the host environment

Search: possibility to search all the flows in a given status or with a specific version.

3.1.29 Job Instantiator

Once Flows are defined in the Modeler, some of the data specified for the flow can be used to instantiate job templates (jcl, sh or bat) using specific macros.

The following macros are supported:

${flow.name} Name of the flow

${flow.revision} Revision of the flow

${flow.la} Logical Area associated to the flow

${flow.step.i} Step of the flow

${flow.item.i} Item associated to the step

${flow.fileName.i} File Name

${flow.repository.i} Repository

${flow.endpoint.i} EndPoint

${flow.location.i} Location

${flow.environment.i} Environment

${flow.job.i} Job

Users can create job templates containing these macros, specifying the job template filename for each step in the flow definition using a specific key; when the flow is finally activated, the instantiated jobs are created in the ${flow.name}/${flow.revision} directory.

This feature uses the notification mechanism, and a specific Job Instantiator Provider is provided.

<product_home>/samples/job contains a template sample.job.

The steps to be executed in order to instantiate a job are the following:

Create an instance of Job Instantiator Provider

Define a subscription with provider Job Instantiator on the Topic Topic_ModelerFlowActivate

During the definition of the Flow you need to define a Job for each step, specifying the name of the template you wish to use as a detail of the Job (by clicking on the Manage Detail button).

Once the flow is extracted a file will be created in

<product_home>/Dest Path/FlowName/Revision

Dest Path is defined in the Job Instantiator Provider

FlowName is the name of the flow

Revision is the version of the flow

Warning

This feature is supported only for linear flows.

A flow is linear if:

it contains steps on which just one item is deployed

there are no branches: a single parent and not more than one leaf

for each step there is one and only one deploy

for each step there is one and only one item

Warning

This feature is supported only on a flow activation event (Topic_ModelerFlowActivate).

Here is an example of how a job can be instantiated on the basis of a flow. We define an instance of a Job Instantiator Provider as follows:

Figure 3.6: Job Instantiator Instance Detail

We define a subscription with provider job-provider on the supported topic Topic_ModelerFlowActivate.

Figure 3.7: Create Notification Subscription Object Instance

From the file <product_home>/samples/job1 a file will be produced in the directory <product_home>/export/job1.

The file job1.samples contains the following string:

filetdsp ${flow_repository.1} ${flow_endpoint.1} ${flow_filename.1}

T21 is a linear flow defined with 2 Steps: Spazio Dispatch and Spazio Tx pr4. When the flow is being defined a job is bound to the Spazio dispatch step as shown in the following screen:

Figure 3.8: Bind a Job to a Step

Figure 3.9: Define the Job

The details must be inserted by selecting the Manage Details button and inserting as a value the name of the template to be used for instantiating its values. In our case the template is job1.

Figure 3.10: Manage Detail for the Job

Once flow T21 is extracted the file Dispatch.bat will be produced in the directory <product_home>/export/job1/T21/1.

The file Dispatch.bat contains the following string:

filetdsp SPBBP SP.TO.SSH T21

where the macros ${flow_repository.1}, ${flow_endpoint.1} and ${flow_filename.1} have been substituted with the values of the flow in question.

3.1.30 Flow Governance Notification

Some Governance events on Flows are so significant that it makes sense to notify them to appropriate recipients, for example to inform the producers and consumers of a flow that their flow has been activated and put into production, that their flow has been withdrawn from production, or that a new version of a flow is now available.

This happens through the usual Orchestration Suite Notification mechanism.

From a technology perspective, a brand new component has been integrated: ActiveMQ, an open source pub-sub engine, used for notification purposes.

The Notification Object Model has been extended, and now includes:

Topics

Providers

Filters

Formatters

Subscriptions

All the configuration and management for these objects is performed via WebUI.

3.1.30.1 Notification Providers

The types of Provider available are:

Mail Provider

To notify the events via e-mail. Macros are supported in the To, Cc and Bcc fields in order to forward the e-mail to destinations instantiated according to the data specified

WMQ Provider

Brand new provider, supports SSL and client connection

SNMP Provider

To notify the events to operators using SNMP compliant products (Tivoli, HP, BMC, etc.)

FileSystem Provider

To write the notifications to a specific file (fileName)

Windows Event Provider

To notify the events in Windows Event Viewer

Custom Provider

Provider defined by the user by means of a Java class

All the providers, except for the SNMP and Custom Providers, have the attribute message content. The information to be notified can be customized based on the value set for this attribute.

The possible values are:

header: the notified information is only control information in a fixed format. For example: the time that the notification was produced, the topic notified.

payload: the information notified relates to the data of the notification in XML format. The format varies according to the topic notified.

body: this attribute can be used only for a Mail provider. The notification is contextualized by including the content of the file specified by the user in the body fileName attribute of the provider

attach: this attribute can be used only for the Mail provider and for the topic Topic_ReportGeneration. If specified, the report produced is sent as an attachment to the e-mail.

These attributes can be combined with each other in various ways depending on which provider is selected, as shown in the following table.

Mail Provider

The attributes supported for this provider make it possible to customize the recipient of the notification e-mail (of which there can be more than one), the Subject and the body of the e-mail.

To customize the body of the e-mail you can define your own text file, an example of which is provided in

<product_home>/samples/notification/mailBody/mailBodySample.txt

The content of the file is then included in the body of the e-mail.

Macros can be used for event types related to instances to which a flow is associated, or related to the management of a flow definition. The topics in question are all those with the prefixes:

Topic_MonitorFlow

Topic_ModelerFlow

Please refer to the Appendix for a list of Topics.

The macros can be used to resolve senders and receivers and the subject of the notification e-mail.

The available macros are:

${flow.senders.users}

${flow.receivers.users}

${flow.name}

${flow.revision}

${flow.la.mgrs}
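For instance, a Mail Provider could combine these macros in its fields as follows (a hypothetical sketch; the macros are resolved for each flow when the notification is produced):

To:      ${flow.senders.users}
Cc:      ${flow.la.mgrs}
Subject: Flow ${flow.name} rev. ${flow.revision}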

The following is an example of a screen for a Mail Provider instance that uses macros in the allowed fields:

Figure 3.11: Provider Mail Instance Detail

WMQ Provider

The WMQ Provider is associated to a WMQ Connector.

The specific attributes of this provider are:

Queue Name: the target queue to which the notifications are sent.

Event Transport Connector: the WMQ connector.

The following is an example of creating a WMQ Provider:

Figure 3.12: Create WMQ Provider

WMQ Connection is an OSMgr configuration entity which holds the details of WMQ connections established by Orchestration Suite Engines at runtime.

In order to alter the default WMQ Connection settings follow these steps:

Navigate to Configuration > Event Transport Connector > WMQ Connection > Manage

Click Search

Select WMQ JMS Connection Factory_listener and click Edit.

The configuration panel below is shown.

Figure 3.13: Create WMQ Connection Instance

Mandatory customizations required:

Customization of WMQ QMgr to be used (QueueManagerName)

Typical customizations required:

Selection of a WMQ Bindings connection (TransportType=0) or WMQ Client connection (TransportType=1)

If WMQ Client connection is selected, you can choose between clear text connection (SSLEnabled=false) and encrypted connection (SSLEnabled=true)

If WMQ encrypted Client connection is selected you must specify SSL configuration details such as SSLCypherSuite (e.g. SSL_RSA_WITH_3DES_EDE_CBC_SHA), Peer Name, Trust Store JKS file and associated password, Key Store JKS file and associated password. For more details about these parameters please refer to WMQ literature.
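By way of illustration, an encrypted client connection would combine the parameters above roughly as follows; the values shown are examples only, not defaults shipped with the product:

QueueManagerName: QM_NOTIF (example)
TransportType:    1 (WMQ Client connection)
SSLEnabled:       true
SSLCypherSuite:   SSL_RSA_WITH_3DES_EDE_CBC_SHA
Peer Name:        CN=qmgr.example.com (example)
Trust Store:      /path/to/trust.jks, plus its password (example)
Key Store:        /path/to/key.jks, plus its password (example)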

SNMP Provider

By selecting this provider you request notifications to be sent via SNMP Traps. The attributes are:

ip: IP address of the SNMP manager that is to receive the notifications.

port: the port on which the SNMP manager is listening.

The following screen shows an example of Provider configuration.

Figure 3.14: Create SNMP Provider Object Instance

FileSystem Provider

By selecting this provider you request notifications to be sent to the file whose name is specified by the user.

The following screen shows an example of a FileSystem Provider.

The notification, which will contain both the header and the payload, will be written to the file whose name is specified in the fileName field

Figure 3.15: Create Provider FileSystem Instance Detail

Windows Event Provider

By selecting this provider you request notifications to be sent to the Windows Event Viewer.

Warning

The following screen shows an example of Windows Event Provider configuration:

Figure 3.16: Create Windows Event Provider

3.1.30.2 Notification Formatters

Formatters are used for transforming the payload of the notification.

The types of Formatter are:

Generic XSLT Formatter: the formatter instances made available by OrchSuite are:

FlowDefinition_Generic_Formatter

FlowInstance_Generic_Formatter

FlowInstance_SNMP_Formatter

User_Generic_Formatter

It is possible to create a customized instance of Generic XSLT formatter by defining a new XSL file

Custom Formatter: the user can define a formatter for transforming the payload of the notification

The name of the class developed by the user must be specified during the configuration phase.

3.1.30.3 Notification Filters

Notification Filters are used in order to select the Notifications that you wish to be forwarded.

If no filter is specified in the subscription, the notification will be forwarded.

If a filter is specified, the notification will be forwarded if it satisfies the filter in question.

The filter types available are:

Custom Filter

A filter defined by the user through the implementation of a Java class.
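As a minimal sketch of the idea: the snippet below only illustrates an accept/reject decision on the notification payload. The actual class contract (interface, method names and signatures) is defined by the product and is not reproduced here, so everything in this sketch is an illustrative assumption:

// Hypothetical custom filter: forwards only notifications whose XML payload
// mentions an error status. The method name and signature are assumptions;
// the real contract is defined by the Orchestration Suite.
public class ErrorOnlyNotificationFilter {

    public boolean accept(String topic, String xmlPayload) {
        // Forward only flow-monitoring notifications that report an error.
        return topic.startsWith("Topic_MonitorFlow")
                && xmlPayload.contains("Error");
    }
}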

The Custom Filter "FlowNotificationFilter" is distributed with the product.

This filter is to be used for subscriptions relating to topics that have the prefix Topic_MonitorFlow.

During the definition of the flow the user can specify for which flow statuses a notification is to be requested.

The FlowNotificationFilter is implemented as follows:

In the case of a recognized instance of a modeled flow, it uses the settings defined by the user during the modeling phase.

If the instance is not recognized it uses the default settings

The default settings can be displayed and modified using the following administration utility:

<product_home>/bin/runtime/sposcsh

By invoking the helpNotification() method you can identify the functions available.
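For example (a sketch; the prompt shown is indicative only):

<product_home>/bin/runtime/sposcsh
> helpNotification()

This lists the notification-related functions with which the default settings can be displayed and changed.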

Metadata Filter

This filter uses the value set by the user in the Topic Metadata attribute in order to decide whether or not to forward the notification.

This filter can be applied only on subscriptions relating to the following topics:

Topic_ReportGenerator

and all those whose prefix is Topic_ModelerFlow

The values that can be specified in the Topic Metadata attribute are:

reportName= reportName_sample;

flowName=flowName_sample;

Warning:

The final ";" is mandatory.

For example, if a user has defined a subscription for Topic_ReportGenerator, has specified the Metadata filter, and has inserted the following value in the Topic Metadata attribute:

reportName= Flow_summary;

of all the reports generated, notifications will be forwarded only for the report called Flow_summary.

Generic XSLT Filter

This filter uses an XSLT transformation created by the user. It allows you to filter notifications without having to develop Java code, using instead an XSLT file applied to the payload.

This XSLT must return a value that is compared with the one set in the field Result Value in the instance of Generic XSLT Filter. The notification is forwarded if the filter is satisfied, in other words, if the comparison has a positive result.

In <product_home>/xsl/xslFilterSample.xsl there is an example of XSLT used by the Generic XSLT Filter: "GenericXSLT_Filter_sample".

Warning: The Notification function is also available when a Monitor-only license is in use; in this case, it is not possible to filter at the Flow Definition level whether or not to publish a notification.

Nevertheless, it is possible to filter notifications at the error granularity level, using the sposcsh command line to activate this.

In the Monitor + Modeler scenario, the sposcsh activation can also be used for all the non-modeled flows.


3.1.30.4 Notification Subscriptions

To subscribe to an event you need to create a subscription.

When you create a subscription, a queue is created on ActiveMQ, on which one or more consumers listen.

The attributes of a subscription are:

Topic: represents the event to which you wish to subscribe. Please refer to Appendix D.

Notification Provider: represents the channel over which the event is notified. For further details refer to section 3.1.30.1 - Notification Providers.

Notification Formatter: specifies the formatting to be applied to the notification. For further details refer to section 3.1.30.2 - Notification Formatters.

Notification Filter: specifies the filter to be applied. For further details refer to section 3.1.30.3 - Notification Filters.

threadNr: number of consumers that work in parallel on the notifications received from this subscription.

retryInt: the interval, in seconds, that a consumer waits before retrying to consume a message.

retryNr: the number of retries the consumer performs.
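As a worked example (illustrative values only): with threadNr=2, retryInt=30 and retryNr=5, two consumers process the notifications of the subscription in parallel, and a message whose consumption fails is retried up to 5 times, waiting 30 seconds between attempts.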

Important Note

In order to make any changes to subscriptions effective, you need to restart the OSEngines using the scripts <product_home>/bin/runtime/stopOSengines(.bat/.sh) and <product_home>/bin/runtime/startOSengines(.bat/.sh) or, if they were launched as a service, by stopping and starting the service.

3.1.30.5 A use case

The following is a use case example for using notifications.

Suppose that the user wants a notification on several channels in response to an event of the type "Flow in Cutoff on duration". In particular, he wants a notification via e-mail to be sent to the flow area manager John Smith and a notification on the file system. The following are the steps he must perform.


Step 1 - Define the providers

Define the Mail Provider as follows:

Figure 3.17: Create a Mail provider

For the mail provider we have specified the recipient, the subject and the body.

For the message content type we specified "payload+body", which means that the e-mail will consist of the content of the user-specified file mailBodySamplexCutoff and the actual notification of the event.

Define the File System Provider as follows:

Figure 3.18: Create a Provider Filesystem

The name of the file that will contain the notification is specified in the fileName attribute.

Step 2 - Define the subscriptions

The following screen shows the definition of such a subscription:

The name of the subscription is sub_provider_mail, on the previously created provider called email_provider, on the topic Topic_MonitorFlowCutOffDuration, using the formatter FlowInstance_Generic_Formatter and without specifying any filter.


Figure 3.19: Create a Notification Subscription Object Instance

The following screen shows the definition of a subscription on a FileSystem provider:

Figure 3.20: Create a Notification Subscription Object Instance

As soon as the flow is evaluated as being in a cut-off state, an e-mail will be sent and the notification in question will be written to the file <product_home>/logs/OrchSuiteSystemEvents.log.

3.1.31 Heartbeat

The aim of the heartbeat is to send a ping message (Heartbeat) to the Orchestration Suite server node as Activity Data, for the following reasons:

to signal to OrchSuite that the agent is up and running

to communicate the state of the agent (enabled/disabled)

The OSAgent and the Spazio Agent send a message to Orchestration Suite Manager on a time-interval basis.


The following screen shows the use case that allows the Heartbeat messages received by Orchestration Suite to be displayed.

There are two types of view. If you select Last, only the last heartbeat received for each Agent is displayed:

The user must set the interval, in seconds, within which the heartbeat is expected to be received.

The red/green color of the Elapsed Time field indicates whether the message was received by OrchSuite beyond/within the specified interval.

If you select All, then all the messages received in the specified period are displayed, according to the filter set.

In the example, Elapsed Time is red because Orchestration Suite has not received any ping messages from Agent 192.168.7.177 for 16 hours.

Figure 3.21: Activity Data

When you select the Agent Instance a window opens that displays the heartbeat.

3.2 Orchestration Suite Services

3.2.1 Cleaner

For reasons of efficiency, support for the persistence of monitoring data has been split into two logical schemes:

OnLine DB: contains all the data collected from the MFT/FT products. It must be used for monitoring and troubleshooting purposes only, and data must be kept in the OnLine DB portion only for the time strictly necessary to carry out monitoring and troubleshooting. Should the customer need to keep data for a longer period than the few days needed to monitor and solve problems and issues, that data must be moved to the History DB portion.


Warning: should large amounts of data be kept inside this online portion, serious performance problems may arise for all the OSManager Engines in collecting, summarizing and notifying, introducing latency that is not compatible with customer requirements.

Tools exist to perform OnLine DB cleaning tasks automatically:

Cleaner Engine - starts automatically on a configurable time basis, and performs data deletion/moving/extraction phases; cf. config/OSEngine.properties in WebUI Configuration->Cleaner for details

Data in the OnLine DB portion is accessed using the WebUI OnLine Monitor by all the operators assigned for monitoring of the MFT infrastructure.

History DB: contains all the data that, once monitored in the OnLine portion, must be kept for a longer period of time for auditing and compliance reasons, to perform analyses and searches, and to respond to disputes; this period depends of course on the customer's current auditing and compliance policies.

History DB data is accessed through WebUI Monitor History Browsing.

Data in the History DB portion can finally be deleted, or extracted to a flat file for further archiving with third-party products.

The OSManager includes functions, performed periodically and automatically, for managing the information stored in the two DB portions; the actions that can be configured are:

Deletion of the flows directly from the OnLine DB

Archiving of the flows from the OnLine DB to the History DB

Deletion of the flows from the History DB

Extraction of the flows from the History DB to a flat file

Deletion of the flow data extracted on the file system.

The policies that govern these actions are based on the status of the Flow Instances, and offer the user the possibility to manage the persistent data in accordance with the business processes and procedures in place in the host environment; in particular, the statuses considered are:

Flow Complete: relates to Flow Instances and Composites that have a complete status, have no middleware or time-based errors, and are NOT in a claim state (without Governance notes, or with a note in a closed state).

Flow Error: relates to Flow Instances and Composites that have a completed or running state, have some error (middleware or time-based), and are not in a claim state (without Governance notes, or with a note in a closed state).


Flow Running: relates to Flow Instances and Composites that have a running state and no middleware or time-based errors, or that are in a claim state (with a Governance note in an open state).

Uncorrelated Activities: relates to any other uncorrelated activity.

Upload - Flow Complete: Report Table loading, used while copying Flow Instances from the OnLine DB portion to the public Report table. Applies to Flow Instances in the OnLine DB portion that have a complete state.

Upload - Flow Running: Report Table loading, used while copying Flow Instances from the OnLine DB portion to the public Report table.

WMQ files: applies to activity files received through the WMQ Listener. Configure this to clean all files stored in DeadLetterDir, TraceDir and DuplicateRecordDir.

Heartbeat: relates to all heartbeat records sent by the OSAgent or Spazio Agent and stored inside the DB.

The policies for managing persistent data can be executed at various time intervals, for example 2h or 24h.

The size of the interval determines the quantity of the data included in the processing, for example with a 2h interval the data processed in a window of 2h are included; with a 24h interval the data processed in a full day are included.

The time needed to execute the policies varies with the size of the window and the amount of data considered within it.

The persistent data management functions work directly on the same OnLine DB on which all the loading and summary functions work, and carry out data movements transactionally; during their execution, however, a number of conflicts are generated, causing:

lengthening of the times necessary for the management of the OnLine DB

degradation of the performance of the loading and summarizing functions, with potential accumulation of delay.

The execution timetable of these policies, which is configurable, must in addition be calibrated against the trend of file transfer traffic over time: many file transfers take place at night, for example, when many batch processes are historically carried out.

The size of the data window on which the policies are to be applied is therefore one of the parameters to be configured so that the entire system performs according to expectations and allows monitoring of the infrastructure within the established SLAs.

The following screen shows a list of the policies of the Cleaner and the details of the Flow Complete policy:


Figure 3.22: Cleaner-Flow Complete Instance Detail

The common attributes of the policies are:

online retention (days): the number of days a specific flow type is kept in the OnLine DB portion. When -1, this policy is not executed.

history retention (days): the number of days a specific flow type is kept in the History DB portion after being removed from the OnLine DB at the end of the "online retention" period. When -1, flows are never archived in the History DB.

final operation: D(elete) or E(xtract and Delete). Data in the History DB portion can be deleted, or extracted to the file system and then deleted.
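As a worked example (illustrative values only): with online retention=7, history retention=90 and final operation=E for the Flow Complete policy, a completed flow is kept in the OnLine DB for 7 days, then archived to the History DB, kept there for 90 days, and finally extracted to the file system and deleted.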

3.2.2 Reporting

An MFT infrastructure produces great quantities of data on the operations performed, in terms of:

Dimensions involved such as companies, locations/repositories, environments, applications, users, jobs, items/files, Flows, Logical Areas, Protocols.

Status of the flows such as running/completed, in error or not, delayed or not, never executed.

Various analyses can be carried out on this data, with the aim of identifying metrics useful to the various stakeholders, with business or IT roles, involved in an MFT infrastructure, in order to provide elements that can help in:

Periodically analyzing the state of the art of the infrastructure by assessing the entities involved and their trends.

Identifying critical points of the infrastructure such as excessive consumption, bottlenecks, delicate points that cause recurring problems, analyzing time distributions and trends.


Assessing compliance with the defined SLAs with statistics and trends.

Taking decisions on savings, by helping to identify consumption, obsolete flows still in operation, and potential functional or architectural optimizations to be introduced.

Analyzing the status of the security of the MFT infrastructure by evaluating NON-secure and secure points, and the traffic that takes place on each of these.

Orchestration Suite provides a reporting function characterized by:

Use of an open and expandable system, based on the BIRT open source tool, included in the Orchestration Suite product, and on its report production environment, NOT included in the Orchestration Suite product.

A base report library distributed as part of the product, to be configured during the installation phase.

The possibility for the user to expand the report library by creating his own templates using the BIRT development environment (to be downloaded and installed separately) and including the custom reports, following the same procedure as for those distributed.

A specific open, documented and supported table, used by the report templates distributed in the product and also usable by the report templates developed directly by the user. This table has associated loading/cleaning policies, to be configured in the product installation phase.

Scripts, interactive and batch, for the generation of reports from templates, in order to produce the PDF files that are deposited in specific directories, or sent via e-mail to specified recipients, and for the management of their life cycle: import, list, deletion, export.

Some examples of report types distributed with the product are:

Dashboard: it contains metrics on dimensions of the IT and Business types, with the total (and the distribution of the total in the time interval) of entities (Companies, Applications, Items, Logical Areas, Files etc) involved in file transfer activity, and which have file transfers in error, delayed, on schedule, completed.

Summary: for each dimension these allow an analysis of the instances of that dimension (for example, Company, Repository, Logical Area,…), and the metrics of file transfers that are completed, with an error, delayed etc. in relation to each instance.

Top: a report for each metric (errors, expired, file size, total traffic size, distance time, number of files,…) with a breakdown of the metric in relation to the various dimensions.


The following are the steps to be performed in order to generate the reports:

Load the report templates by running the command <product_home>/bin/runtime/loadReportTemplate(.bat/.sh)

Configure policies appropriately in WebUI

Configuration->Report->Upload-Flow Complete

Configuration->Report->Upload-Flow Running

Invoke the Cleaner utility. For further details refer to the section on the Cleaner.

Warning: when invoking the Cleaner command, all its policies are executed, even those relating to archiving data from the online to the history DB, deleting from the online portion, and deleting or extracting from the history portion. If only the report policy is to be executed, turn off all the other policies by entering "-1" in the "online retention (days)" text box. Please refer to the Cleaner configuration section.

If you want to schedule report generation you need to access the WebUI in Configuration->Report->Execution->Manage

For setting the scheduling rules (in schedInfo) please refer to Appendix B (Regular expressions for configuring automation). In the following screen you can see an example of configuration for the generation of a "dashboard" report on the last 7 days ("last=7"), scheduled every Sunday at 23:00 (schedInfo="0 0 23 ? * SUN").

Figure 3.23: Object Execution Instance Detail
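Assuming schedInfo follows the same cron-style syntax as the example above (seconds, minutes, hours, day of month, month, day of week), a few illustrative expressions are:

0 0 23 ? * SUN    every Sunday at 23:00 (the example above)
0 30 6 * * ?      every day at 06:30
0 0 1 1 * ?       at 01:00 on the first day of every month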

If you wish to generate the report immediately, access the WebUI in Report->Report and click on the View button.


Figure 3.24: Report generation

The reports generated will be stored in the default directory

<product_home>/export/report/pdf

You can specify a different directory by configuring the property

"reporting.output.pdf" in the file <product_home>/config/orchsuite.properties

In order to have reports correctly configured, the following prerequisites have to be considered:

on Unix and Linux platforms, it is mandatory to have an X-Windows system installed and configured

when using DB2 or Oracle, on Windows, Unix and Linux platforms alike, it is mandatory to deploy the DB driver inside the following directory: \products\birt-runtime-2_2_2\ReportEngine\plugins\org.eclipse.birt.report.data.oda.jdbc_2.2.2.r22x_v20071206\drivers

NOTE: for backward compatibility, the dbBrowser function is retained, although deprecated, in this release.

3.2.3 Synchronization and Out-of-sync Management

Once the product has been installed and configured and is functioning correctly, the first flows will be censused and the corresponding SLAs associated.

In the case of a flow with an associated cut-off with a completion schedule, this will have to be negotiated with the scheduling office that is managing, for example, the jobs and corresponding schedulings related to the consumption of this flow.

The entire execution calendar of the flow will have to be negotiated between the two offices, and must be the same for both, and all modifications to the calendar made on the schedulers must be reported in the Orchestration Suite.

In fact, the plans of what ought to happen, described in the Orchestration Suite, and the actual executions that take place in the MFT infrastructure, guided for example by scheduling products, must be kept absolutely consistent.


Otherwise, it is not possible to associate an execution with its plan in a deterministic way, and in the event of inconsistencies in the definitions or errors in the executions, error situations may arise, such as:

A flow appears planned in the Orchestration Suite, but it is never executed in reality:

The Orchestration Suite calendar is not updated, the flow is erroneously planned for that day, but it will never run because it is not scheduled in reality

The Orchestration Suite calendar is correct, but the flow is cancelled for technical reasons by those in charge of scheduling and it will never run for that day

The Orchestration Suite calendar is correct, but the flow is so delayed that its execution will slide over to the next day

A flow is NOT planned in the Orchestration Suite, but it is executed in reality:

The Orchestration Suite calendar is not updated, the flow appears erroneously NOT planned for that day, but it will nevertheless be executed because it is scheduled in reality

The flow is NOT planned, but it is nevertheless executed

The end effect of the previous cases is that the synchrony is lost between what is planned in the Orchestration Suite and what is really happening in the MFT infrastructure, causing, for example, incorrect assessment, from that moment on, of all the instances of a flow as being delayed by a day, or early, depending on the type of malfunction encountered.

This problem arises only for flows that have a cut-off defined on completion.

Orchestration Suite offers various functions, to be executed manually, for returning the system to correct functioning, consistent with what has actually been executed:

IGNORE, available in WebUI Monitor, to be used in order to bring an individual instance that has experienced problems back into synchronization

In the case of a flow that is NOT planned in Orchestration Suite, but is carried out in reality, if the verb is applied on the instance the cutoff is not calculated, postponing synchronization between plan and execution to the next planned instance

In the case of a flow planned in Orchestration Suite, but NOT executed in reality, by applying the verb on the ghost instance, synchronization is postponed until the next planned instance

SYNC, available in CLI

In the case of serious problems occurring in the infrastructure, and a very high number of out of sequence instances, it is necessary to perform synchronization on the entire planning of all the flows.


In this mode, only the new Flow Instances that enter the system will be considered for synchronization between execution and plan. Invoking SYNC may not resolve all the problems, and fine-grained repairs using IGNORE directly may become necessary.

Once synchrony is restored between planned operations and scheduled operations, this must be maintained, respecting the following best practices:

Plans must NEVER be interrupted: there must not be any gap in the planning windows in the Orchestration Suite, as this may cause a loss of synchrony. Gaps in planning windows can be caused by improper configuration or execution of the plan loading engines (RuleLoader).

Warning: when out-of-sync situations have to be solved for a specific Flow, it may help to perform a search in the WebUI.Monitor.Flow, filtering by the Flow Definition name over an adequate time interval, in order to collect all the Flow Instances that have to be ignored once a correct situation is re-established.

3.2.4 Extension points and integration points

Orchestration Suite is a product that plays a central role in a modern data processing center for everything related to definition, management and control of activities inherent to file transfers.

It must therefore be possible to integrate Orchestration Suite with other products and components that already form part of the information system in which the product is adopted.

Therefore, there are a number of extension and customization points that allow the product to be integrated, or its features to be extended where those provided out-of-the-box are not sufficient.

Flow Definition tagging: it is possible to enrich flows with free metadata, defined in the host environment according to a taxonomy or folksonomy; it is also possible to associate URLs that point to collaboration tools, such as wikis, or to portals or tools for the publication of business content in which information related to the flow is already published.

Flow import/export: the definition of a flow can be created, on the basis of an existing definition, using a copy/paste mechanism based on the export-customize-import paradigm; this mechanism, which can be automated, is a point of integration of the Orchestration Suite with existing subsystems in the host infrastructure, such as Change Request tools.

Notification: there are various channels on which events, both monitoring and governance events, can be notified.

Notification: there is a purposely distributed SDK that allows specific notification channels to be implemented, should those included not be sufficient.


Security: the product can easily be extended in order to perform, when security is activated, authentication checks against User Registries other than the DB-based registry included in the product.

Validation: the product can easily be extended to customize the algorithm for the validation of flows according to naming conventions and specific characteristics of the host environment.

Activation: the product can easily be extended to customize the algorithm for the activation/extraction of flows according to naming conventions and specific characteristics of the host environment.

Reporting: customers can create their own reports and integrate them in the product in order to use them exactly like those included out-of-the-box; a persistent table, with a documented and supported schema, is provided for the creation of custom reports; it is obviously also possible to use third-party reporting tools already in production in the host environment for the creation and generation of reports.

Recognition: the algorithm that performs matching between Flow Instances and Flow Definitions through the recognition criteria can be customized in order to make it more efficient, adapting it to specific needs of the deployment environment using knowledge of the coding and naming convention rules adopted.

Calendar import/export: the plug-in that recognizes the format of the Tivoli Workload Scheduler calendar can easily be customized to recognize and parse other calendar formats.

Trouble-ticketing: in response to errors in the flows and in the infrastructure, customers use processes and systems for defect tracking or trouble ticketing; using the open fields of the Flow Instance Runtime Governance Notes, it is possible to integrate flow management records in the Orchestration Suite with the case management specifications of the tool used.

Modeler schema extension: the schema of the Modeler DB, provided with the product and installed with the Manager component, is predefined, but the schemas of the supported entities can be extended with metadata specific to the host environment by associating sets of keys (tags) with each entity during the product setup phase, and specifying values for those tags directly in the WebUI; these values can then be used when searching for specific instances of each entity.

3.2.5 Infrastructure overall performance and tuning

The Orchestration Suite is a product aimed mainly at monitoring the Flows, their middleware status, and their business status in terms of SLAs.

The latency in the calculation of the statuses is therefore an important parameter, where latency refers to the time that elapses between the moment at which an event takes place in the infrastructure and the moment at which the corresponding status is calculated in the product.

There are many parameters that combine to determine the overall latency.


An infrastructure based on Orchestration Suite has a queue-based architecture, made up of various cascading queues, and like all queue-based systems its overall operation is based on parameters such as:

speed of input to the queue

speed of output from the queue

average time spent in the queue

depth of the queue.
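These parameters are tied together by a standard queueing-theory relationship (Little's law), which is general and not specific to the product:

L = \lambda \cdot W

where L is the average queue depth, \lambda the average arrival rate, and W the average time spent in the queue. For example, if Activity Records arrive at 50 records/s and each spends 10 s on average in a queue, that queue holds 500 records on average; a rising depth at a constant arrival rate signals growing latency.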

The most important queues, physical or logical, in the infrastructure are:

Agent: queue for the files to be sent to the Manager

Manager: queue for the Activity Files received by the Agents

Manager: queue for the Activity Records received by the Agents that have not yet been summarized

Manager: queue for the Activity Records received by the Agents that have been summarized

Manager: queue for the Flows

Manager: queue of the data in History DB.

All the queues must be functioning correctly to avoid the creation of bottlenecks and therefore delays and latency in the system.

MFT/FT traffic workload can obviously vary according to several factors, such as internal/external traffic, scheduled/unscheduled transfers, the kind of data moved inside the infrastructure, nightly/daily elaboration cycles, day-of-week/day-of-month/day-of-year patterns and so on. Peaks of data arrival at the OSManager may appear in a predictable or unpredictable way, due to planned situations, or to unplanned situations such as errors/blocks in the log capturing/transfer infrastructure that are suddenly removed.

The system must be calibrated to work normally and without delays in the case of the maximum load envisaged for the infrastructure.

In the case of spurious bottlenecks (due, for example, to blocks in Agent or Manager components, or to an increase in traffic, planned or otherwise, in the MFT infrastructure), and therefore of delays that were not envisaged but are contained, the system is able to compensate and recover in the medium term; until the delay is completely absorbed, however, all the records caught in the bottleneck could suffer a delay, affecting, for example, the calculation of their SLAs. For delays that are not contained, for sustained increases of throughput in the infrastructure, or for a system not properly configured for overall performance and throughput, there is a risk that the latency diverges.

There are many parameters to be considered for overall tuning:

number of nodes with Agents on board

number of business file transfers that take place on these nodes, in input or output, and their distribution over the day/week


number of Activity Files sent by the Agent Nodes to the Manager Node

Parameters

The maximum size of the Activity File

Maximum time limit before sending a new file

Best Practice

Few files with many records create a better throughput (but worse latency) than many files with few records

number of days' presence in the OnLine DB configured for the Flows

the number of days determines the number of records present in the DB, and, as a result, the cost of execution for the algorithms and the batch windows

number of users who access the system for Monitoring simultaneously and continuously

sizing and planning of the batch windows

Functions such as the Cleaner and the Report Uploader cause the execution of batch and transactional windows of significant size on the OnLine DB, according to the quantity of data on which they operate

diluted, small-size executions with a batch window in the order of a few hours can provide greater benefits than a concentrated execution with a batch window in the order of a day

the overall latency desired in the monitoring

the business latency that exists in some MFT systems, where a file can arrive in a queue and be acquired within 15 days.

Recommendations

Use a correctly dimensioned RDBMS such as IBM DB2 or Oracle

Manage carefully the cleaning of all persistent data, and tune the RDBMS itself, according to the best practices and recommendations in section 3.1.12 - Database and persistent data management

Spread the execution of the cleaning batch windows throughout the day, rather than concentrating them at fixed hours during the night

3.3 Typical usage scenarios

According to the Data Moving management requirements of the target environment, the Orchestration Suite and its modules can be licensed, installed and configured to provide different levels of functionality.

Each of the following scenarios is available in Orchestration Suite Enterprise Edition.


3.3.1 Basic Monitoring

Basic Monitoring is an Orchestration Suite scenario in which users can quickly deploy the whole infrastructure and have end-to-end flow monitoring available in minutes.

The following figure illustrates the simplest and quickest use of the Orchestration Suite, suitable when the requirements are:

Monitoring of the flows status end-to-end

Monitoring of different protocols

Notification of the errors

Periodic report creation on middleware information

Figure 3.25

From a license perspective, the basic monitoring scenario can be exploited when either a Monitor-only or a Modeler and Monitor license is active; it applies to all flows that have not been modeled when a Modeler license is available.

Perform the following actions in order to implement a Basic Monitoring scenario:

Install and configure Orchestration Suite Manager

Install and configure Orchestration Suite Agents

Configure the Manager persistent data lifecycle policies, in order to have auditing data stored and maintained according to your company policies

Optional: Configure notification, when integration with runtime monitoring products is needed

Optional: Configure reporting, to produce dashboard or statistical reports, exploiting the included templates

Start the whole infrastructure: MFT infrastructure to be monitored, Agents, Manager and all its components.


Use the Web UI Interface to Monitor flow middleware status, repository-oriented infrastructure status, activities status.

Optional: Use Runtime Governance Notes to let Operators enrich in-error flow status with their own troubleshooting actions and information.

3.3.2 Discovery

Usually, in an MFT infrastructure, flows can have several periodicities, according to the business processes they are part of: they can have daily recurrences, as well as weekly, monthly or even yearly ones.

The Discovery scenario is a human-centered phase in which users analyze Flows run periodically and try to identify recurrences; it helps to address the following requirements:

how can I identify all recurrences of each Flow running in my infrastructure, even if they run once a year?

are there naming conventions used in the infrastructure, and if so, which and why?

how can I be sure that each and every Flow recurrence has been identified?

It is a heuristic, long-running process, lasting as long as it takes for each recurrence of the flows of interest to be executed in the infrastructure.

The Orchestration Suite helps in completing this phase by providing access to correlated Flow Instances jointly with some use cases that help users in identifying Flows by analyzing their recurrences.

From a license perspective, the Discovery scenario requires a Monitor and Modeler license.

Perform the following actions in order to implement a Discovery scenario:

Mandatory: start the Basic Monitoring scenario, and let Activity Records enter the system and the product correlate them into Flow Instances.

Using the WebUI, analyze the correlated Flow Instances in order to find recurrences, applying aggregation rules such as "exactly this value" or "the name has this prefix" to fields such as Source/Destination Repository/Endpoint and Source/Destination FileName (see the sketch after this list).

Once a matching rule has been identified, save it and the corresponding values as recognition criteria.

Periodically apply the recognition criteria to all the Flow Instances that entered the system after the last Discovery phase was completed, in order to have them bound to their respective siblings.

Continue to identify, create and apply matching rules until all the desired Flows and their recurrences are discovered.
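A minimal sketch, in Java, of what such aggregation rules amount to; the FlowInstance record and its field names are hypothetical placeholders, not the product's data model:

import java.util.List;
import java.util.function.Predicate;

// Hypothetical, simplified view of a correlated Flow Instance;
// the field names are placeholders, not the product's schema.
record FlowInstance(String sourceRepository, String destinationRepository,
                    String sourceFileName, String destinationFileName) {}

public class RecognitionSketch {
    public static void main(String[] args) {
        // "exactly this value" on the destination repository, combined with
        // "the name has this prefix" on the source file name.
        Predicate<FlowInstance> criteria =
                f -> f.destinationRepository().equals("REPO_MILAN")
                  && f.sourceFileName().startsWith("INVOICE_");

        List<FlowInstance> instances = List.of(
                new FlowInstance("REPO_ROME", "REPO_MILAN",
                        "INVOICE_20100701.txt", "IN.INVOICE"),
                new FlowInstance("REPO_ROME", "REPO_TURIN",
                        "ORDERS_20100701.txt", "IN.ORDERS"));

        // Instances matching the criteria would be bound to the same Flow.
        instances.stream().filter(criteria).forEach(System.out::println);
    }
}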


3.3.3 Advanced Monitoring

Advanced Monitoring consists of a good-enough phase in which, after having discovered flows through heuristic recurrence analysis and having created recognition criteria, Flow Definitions are quickly created and activated using a bottom-up approach, in which users specify flow SLAs, calendar-based cut-offs and event notification.

It helps provide a response to questions/requirements such as:

Which flows are late with respect to their schedule?

Are there any data flows in error in my infrastructure, and why?

Are there planned flows that have never run?

Can I receive an alert before the planned completion time when flows are still running?

Are there in-error flows stopped somewhere, and how many steps before their completion?

Can I receive status notification for in-error or delayed flows?

Can I receive periodic reports on my MFT infrastructure business information?

From a license perspective, the advanced monitoring scenario requires a Monitor and Modeler license.

It is usually used to enrich the Basic Monitoring scenario: flows are defined in the Modeler in order to have time-based rules evaluated for them, or selective notification enabled for them; for any other flow running in the infrastructure and not defined in the Modeler, Basic Monitoring scenario features apply.

Figure 3.26

Two roles are typically involved in this scenario; the actions that each has to perform in order to match the Advanced Monitoring goals are:


Data Moving Specialist: the user who discovers or defines the flow:

Mandatory: define into the system the MFT topology, in terms of Companies, Locations, Environments, Repositories and Endpoints

• implicit topology creation: through default values for Company, Location and Environment, and values derived during the Flow Discovery phase for Repositories and Endpoints.

• explicit topology creation: through WebUI (Topology) and CLI (Repository/Endpoint).

Mandatory: create Flow Definitions

• implicit flow discovery: using a bottom-up approach, in which, starting from the created recognition criteria, Flow Definitions are created in the Modeler.

• explicit flow creation: using a top-down approach, in which Flow Definitions can be defined from scratch, without considering their running instances. In this case recognition criteria have to be defined, in order to let this Flow Definition match with all the Flow Instance recurrences that will ever run.

Suggested: import into Orchestration Suite the calendars extracted directly from schedulers.

Suggested: define an SLA for this Flow, based on its completion time and an imported calendar, or on its duration.

Optional: set business information for a flow: Sender/Receiver, in terms of users or applications, and Logical Area, for reporting and access control restriction purposes.

Optional: enable event notification selectively

Operator: the user who oversees running flow status

Use the Web UI Interface to Monitor flow middleware status, repository-oriented infrastructure status, activities status.

Use the Web UI Interface to Monitor flow business status (SLA), logical area-oriented infrastructure status.

Optional: Use Runtime Governance Notes to let Operators enrich in-error flow status with their own troubleshooting actions and information.


3.3.4 Advanced Modeling and Governance

This scenario helps the customer to achieve full governance and control over the whole file transfer infrastructure; it helps in addressing requirements like:

Which flows are in a production state in my infrastructure right now, who is the manager of each one of them, which topology nodes are involved in each, and who sends/receives it?

Who has updated Flow_B? When and why?

Which flows run on node A of my topology?

Which flows does job_1 involve?

Can I group flows according to my organization structure, and have different managers monitor in a secure and restricted way only flows related to their business unit?

Which Flows do I exchange with Company_1, how can I give them direct access to my Orchestration Suite installation to let them monitor their own flows by themselves in a secure and restricted way?

Can I generate jobs automatically according to the description defined for a Flow, and deploy them in production stage?

My Company is going to merge with a partner, and we have to quickly integrate our information systems, both in the online and in the batch worlds. How can I clone the flows in my infrastructure in order to include and consider even the new modules?

From a license perspective, the advanced modeling scenario requires a Monitor and Modeler license. It helps you to:

Define a model representation (Flow Definition) for runtime instances (Flow Instances) available in the target information system

Track the complete lifecycle for Flow Definitions, in terms of creation, update, deletion, activation, defining states and versioning, keeping track of all changes and the history (who changed what, how, when)

Enrich runtime details (from/to which platform, from/to which files) with more business-oriented details

Companies involved

Organizational structure

Items involved

Applications/Users that produce/consume flows

SLA and Time-based rules

Stages involved

Restrict access, through authentication and profiling features, to Flow Definition Governance use cases


Potentially, produce jobs that can be put into production on every involved node or, better, on a version control system, in order to deploy the flow execution on the middleware and its components (e.g. the Scheduler) and have it up and running.

This use case requires development of a custom plug-in for the deploy phase.

Figure 3.27

The roles identified to perform the advanced modeling are those of the Middleware Specialist and the Scheduler Specialist.

In this scenario, each and every flow running in the infrastructure has a definition in the modeler, every Flow Definition change is tracked, and every deploy operation on the file transfer infrastructure can be originated by the modeler.

In order to use advanced modeling the actions to perform are:

Model and manage every flow, and all its dependent resources:

Companies with which I am integrated through data moving

Physical nodes, Stages (test, production), Items (files, tapes, prints, packages), Applications, Users, Jobs, Logical Area

Deploy to the infrastructure through plug-ins:

Platform-dependent jobs and commands are generated and deployed to the right nodes involved in a dataflow, in order to program Schedulers or perform middleware configuration operations.


3.5 Adoption Process

An adoption process exists for adopting and taking advantage of the Orchestration Suite product features, depending on the customer's requirements, goals, priorities, maturity model and the value it is interested in adding to its MFT infrastructure, moving incrementally between the four usage scenarios depicted in the previous sections:

Basic and quick monitoring of the data moving infrastructure

Discovery of running flows

Advanced monitoring, with cut-off definition and event notification

Modeling and full governance of all the data related to the data moving

All these scenarios may be implemented in a Security-disabled or Security-enabled context.

Figure 3.28


3.6 How-To

The following sections present several diagrams, each showing execution details for a usage scenario introduced in the previous sections.

The diagrams, in the form of UML Activity Diagrams, depict the Orchestration Suite components involved and the actions they execute, or that have to be executed by users, in order to accomplish each scenario's goals.

3.6.1 Monitoring Flow Middleware Status

Figure 3.29


3.6.2 Discovering Running Flows

Figure 3.30


3.6.3 Creating Flows using a bottom-up approach

Figure 3.31


3.6.4 Creating Flows using a top-down approach

Figure 3.32


3.6.5 Monitoring Flow Business Status

Figure 3.33


3.6.6 Flow Governance

Figure 3.34


3.6.7 Working in a security-enabled context

Figure 3.35


Glossary

Activity Record

Description: information about a runtime operation executed in one of the monitored nodes of the MFT infrastructure; each activity represents a monitored step. File transfer activities are correlated end-to-end in Flow Instances. Some activity types are not correlated at all (e.g. Spazio cleaner, queue create, queue delete).

Activity Records are collected in Activity Files by the Orchestration Suite Agent or Spazio Agent on every monitored node; Activity Files are sent to the Manager node using a data transfer protocol: FTP or WMQ or Spazio native protocol can be used by the OSAgent, Spazio native protocol by the Spazio Agent.

The OSEngines.ActivityLoader loads each Activity Record contained in an Activity File in the Orchestration Suite database.

The OSEngines.ActivityCorrelationEngine correlates Activities, using MFT/FT-specific keys, in order to build an end-to-end view of a flow, and evaluates the Flow Instance Middleware and Business end-to-end completeness status.

Constraints:

Duplicated Activities: it may happen that activities are sent twice to the OSManager, and duplicates arise while inserting these activities in the Manager DB, due to a problem in the log capturing infrastructure. Both the OSManager Spazio listener and the WMQ listener are able to manage duplicates and move them to a DuplicateQ in order to enable troubleshooting.

Relationships:

Monitor.Flow Instance Middleware and Business Monitoring

Modeler.Steps are derived from Activity Records during the Discovery phase, in which Flow Definitions are created bottom-up

Use cases:

OSEngines.ActivityLoader, ActivityEngine

WebUI.Monitor.Activity Monitoring

Cleaner.Delete/Archive/Extract Activities


Application

Description: representation for an application module that can produce or consume files. Applications can be defined as Sender or Receiver for flows; they have a user defined as being responsible for their availability, and can be deployed on location/environment.

Constraints: none

Relationships:

Users, Flows

Use cases:

Modeler.Create/Manage Application

Modeler.Flow.Create/Edit: Select Sender/Receiver Application

Calendar

Description: enables the correct evaluation of flow cut-off over completion according to a calendar definition.

Two calendar types are supported: a proprietary type (Orchestration Suite), and the IBM Tivoli Workload Scheduler type, which can be exported and then immediately imported into the OSManager.

Calendars can be imported into the OSManager through the WebUI.Utility.Calendar use cases.

Several calendar definitions of the same type may exist in the OSManager, with different definitions, names and expiration dates.

Each calendar has a validity period, and supports enable/disable rules for each weekday, plus some exception days that can be explicitly defined: see the samples provided in samples/calendar.

Calendars may be overwritten in their definition.

Calendar validity may be extended just before its expiration. There is no need to Validate/Extract the associated Flow Definitions in this case.

Warning:

Calendar definitions MUST be kept aligned between the scheduler copy and its OSManager version; each change to the scheduler version of a calendar that has an impact on modeled flows must cause an update to the OSManager calendar version; should the two versions become misaligned, serious inconsistencies may arise in the evaluation of the flow cut-off, such as:


cutoffs evaluated when they should not be, producing false not-events and causing wrong cutoff evaluation for that flow from that moment on

cutoffs not evaluated when they should be, producing wrong cutoffs and causing wrong cutoff evaluation for that flow from that moment on

Once a calendar expires, cutoffs are no longer evaluated for that flow: a new definition for that calendar must be provided, and the flow updated to use this new definition.

Constraints:

A calendar in use by at least one flow cannot be deleted.

Once expired, a calendar is not usable anymore, its validity cannot be extended, and all Flow Definitions using it have to be upgraded to use a new Calendar.

Relationships:

A Flow Definition must have a calendar associated to it, in order to support cutoff rules.

Use cases:

Services.Calendar: Import/Export/Manage.

Modeler.Flow.Create/Manage: 8. Define Rules, in order to select the calendar to be used.

OSEngines.RuleLoader: uses calendar definition to create time-based rules that will be evaluated by the RuleEngine.

Limitations:

Calendar editing and update is not supported.

When calendars change, they may be imported again, overwriting the previous definition. In this case there is no need to update each and every active Flow Definition in order to use the new definition.

Calendar versioning is not supported in this version. Should a calendar definition be overwritten, inconsistencies could arise in the Monitor UI relative to already evaluated Flow Instances, whose state was evaluated using the previous calendar definition.


Company

Description: a representation of an actual company, Customer, Supplier, Partner or division involved in the file transfer infrastructure. Company is one of the pieces of business information that can be specified while modeling or discovering a Flow. Company information can later be used to perform business monitoring on flows in the WebUI.Monitor, or in reporting, when analyzing how many errors/data/flows have been sent to or received from a specific company.

Constraints: none.

Relationships: none.

Use cases:

Modeler.Company: Create/Manage Company

Modeler.Location: Add Company

Composite Flow Instance

Description: some MFT operations give rise to more than one file transfer at once; as an example, consider coarse-grained IBM WMQ FTE steps such as sending a directory's content, recursively for all child directories, or the option to send more than one file at once using regular expressions. Another example applies to Spazio File Extender for IBM WebSphere Message Broker, in which, for instance, a file entering IBM WMB can be elaborated by a mediation that finally gives rise to more than one transformed file, each one with its own final destination.

Each file transfer activity instantiated by such a coarse-grained operation is monitored end-to-end as a Flow Instance, including its completeness and error state.

For end users interested in monitoring the whole Data Flow process, composed of more than one Flow Instance, the Composite Flow summary is available.

Composite Flows correlate the operation summary log and the Flow Instance summary logs in order to give rise to a single comprehensive view of the executed Data Flow.

Composite Flows have a runtime status (complete, running), evaluated using the runtime status of each one of its parts.

Composite Flows have an error status (ok, error), evaluated using the error status of each one of its parts.


Composite Flows are recognized as instances of a Flow Definition through the matching algorithm executed on their Flow Instances, which causes the Flow Definition Name to be propagated at the Composite Flow level.

Composite Flows have a cutoff status, evaluated on the whole Composite, using its start/end timestamp and completion time respectively for duration and completion cutoff evaluation (see the sections Cut-off over completion and Cut-off over duration).

Composite Flows are secured, in the sense that when Security is active, users see in the WebUI.Monitor.CompositeFlow, for a specific Composite Flow, only the Flow Instances to which they have access under the usual profiled access rules.

Composite Flow is a runtime monitoring entity; there is no distinction in the Modeler between Flows and Composites. In the Modeler a Flow is created: according to its steps, and to the granularity of those steps, Flow Instances are composed into Composite Flows at runtime.

Constraints: none.

Relationships: none.

Use cases:

Monitor.WebUI.CompositeFlow sheet

Correlation - Activities

Description: process that binds together all the different basic activities coming from different nodes and systems, at different times, as activities related to the same flow runtime instance.

This process is performed by the OSEngines.ActivityCorrelationEngine.

Two correlation algorithms are defined: the first one is key-based, and uses unique information that MFT/FT systems generate and guarantee to propagate uniquely and consistently throughout the whole file transfer; for instance, for Spazio products, the following correlation key exists:

Original queue manager name

Original queue name

Original address type

Original user class

Original internal number


This key, generated by a Spazio MFT/S node when a file is first seen in a mailbox, is guaranteed to be propagated through the whole file lifecycle inside a Spazio MFT/S infrastructure, and is then used as a unique correlation key (see the sketch below).

A second correlation algorithm is heuristic-based, and applies to those MFT/FT products that do not propagate a unique key (FTP, for instance); it uses:

filename involved in the file transfer

Repository/Endpoint involved in the file transfer

Timestamps.

For instance, the correlation between an FTP put activity and an FTP get activity executed on the same file (destination file for the put, source file for the get) and on the same FTP server/directory (destination FTP server/directory for the put, source FTP server/directory for the get) uses this heuristic.
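As a minimal sketch of the key-based algorithm (the Activity record and its field names are hypothetical placeholders, not the product's schema), activities sharing the same composite key are grouped into the same end-to-end Flow Instance:

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical, simplified activity record carrying the five
// Spazio correlation fields listed in this entry.
record Activity(String origQueueManager, String origQueue,
                String origAddressType, String origUserClass,
                String origInternalNumber, String node, String operation) {

    // Composite correlation key, propagated unchanged across all
    // the activities belonging to the same file transfer.
    String correlationKey() {
        return String.join("|", origQueueManager, origQueue,
                origAddressType, origUserClass, origInternalNumber);
    }
}

public class CorrelationSketch {
    public static void main(String[] args) {
        List<Activity> activities = List.of(
                new Activity("QM1", "MBOX.IN", "L", "A", "0001", "nodeA", "put"),
                new Activity("QM1", "MBOX.IN", "L", "A", "0001", "nodeB", "receive"));

        // Each group corresponds to one end-to-end Flow Instance.
        Map<String, List<Activity>> flows = activities.stream()
                .collect(Collectors.groupingBy(Activity::correlationKey));
        flows.forEach((key, acts) ->
                System.out.println(key + " -> " + acts.size() + " activities"));
    }
}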

Constraints: none

Relationships:

Monitor.Flows

Use cases:

Monitor WebUI.Activity Browsing

Monitor WebUI.Flow Instance Browsing

Cleaner.Delete/Archive/Extract Activities

Correlation - Flow Instance/Flow Definition recognition

Description: this is the process that identifies the Flow Definition for a Flow Instance, where such a Flow Definition has been modeled/created through a top-down Modeling phase or a bottom-up Discovery phase.

This process uses recognition criteria, specified for Flow Definitions, during the identification phase.

If this process succeeds, the Flow Instance is considered recognized; otherwise it is considered unrecognized; filters exist on the WebUI.Monitor in order to search for these two kinds of flows.

The OSEngines.ActivityCorrelationEngine performs this recognition algorithm.

When a Flow Instance is first created in the database, recognition criteria are immediately evaluated in order to identify the potentially matching (through recognition criteria) Flow Definition.

If it is found, it will be possible to use Flow Definition business properties in order to evaluate Flow Instance cut-off rules, notification filters, and flow completion status.

Business information specified for that flow, like item name, logical area, job name, sender and receiver as users or applications, is indeed used to enrich the flow middleware information captured from the IT infrastructure, enabling business monitoring on the WebUI.Monitor, and the evaluation of "business" oriented reports and dashboards.

For recognized Flow Instances, in the Monitor WebUI the Name column of the Flow sheet contains the Name and Version of the identified Flow Definition; the Flow Instance is considered in the WebUI.Monitor.Logical Area sheet once the Flow Definition has an associated Logical Area.

Report: if a Flow Instance is identified, this running instance and its status are considered in the Logical Area report and Flow report.

This recognition process applies both to Flow Instances and to Composite Flow Instances, but is always triggered by recognition criteria matched against Flow Instance attributes. Once recognized, a Flow Instance propagates the matching Flow Definition to the Composite Flow it is part of, through the Data Flow Correlation Engine.

Constraints: none

Relationships:

Monitor.WebUI

Use cases:

Cleaner.Delete/Archive/Extract Flow Instances

Report: Logical Area, Flow Definition

Cut-off over Completion

Description: a time-based rule associated to a Flow. Warning and Cut-off time can be specified for cutoff over completion; a calendar definition is mandatory too in order to evaluate cutoff according to the actual flow execution plans (see the Calendar glossary section).

Cut-offs are evaluated both on fine-grained, single Flow Instances, and on coarse grained Composite Flow Instances:

for Flow Instances that do not belong to any Composite Flow

the cut-off over completion algorithm considers the flow end timestamp to evaluate warning and cutoff state

not-events are evaluated for all those "planned-but-not-yet-run" flows that have been defined to complete before a target time, according to a calendar, but have never run

for Flow Instances that are part of coarse grained Composite Flows, like IBM WMQ FTE flows

the whole Composite Flow end timestamp is used by the cut-off over completion algorithm to evaluate warning and cutoff state, that is, the highest timestamp among all the file transfer end timestamps and the send operation end timestamp

warning and cut-off status is evaluated at the Composite Flow level, resulting in a Composite and all its instances having the same cut-off state, evaluated according to the Composite Flow end timestamp.
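
In both cases the core check is the same; a minimal sketch of the evaluation logic follows, assuming warning and cut-off target times already derived from the calendar for the day (illustrative code, not the product algorithm; the status names follow the Flow Instance time-based statuses listed in this glossary):

    import java.time.Instant;

    enum TimeBasedStatus { NOT_YET_EVALUATED, ON_SCHEDULE, WARNING, CUTOFF }

    final class CompletionCutoffSketch {
        static TimeBasedStatus evaluate(Instant flowEnd,
                                        Instant warningTime, Instant cutoffTime) {
            if (flowEnd == null)
                return TimeBasedStatus.NOT_YET_EVALUATED; // still running, or a ghost instance
            if (flowEnd.isAfter(cutoffTime))  return TimeBasedStatus.CUTOFF;
            if (flowEnd.isAfter(warningTime)) return TimeBasedStatus.WARNING;
            return TimeBasedStatus.ON_SCHEDULE;
        }
    }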

Constraints:

Applies only to Recognized Flow Instances.

In this release, only flows that are planned to run once a day (according to a calendar) support this kind of cut-off. Should a flow recur two or more times in the same day, this feature is not supported and problems arise in evaluating cut-off states (see section Synchronization and Out-of-sync Management).

Relationships: none.

Use cases:

Modeler.Create/Edit Flow: Define a cut-off

OSEngines.RuleLoader: in order to produce the daily instances of time-based rules to be evaluated

OSEngines.RuleEngine: in order to evaluate time-based status for each Flow Instance

Monitor.WebUI: in order to browse Flow Instance time-based status on the online DB

OSEngines.NotificationEngine: in order to notify time-based status on a supported notification provider

Reporting: in order to evaluate grouped time-based status statistics.

Cut-off over Duration

Description: a time-based rule associated to a Flow. Relative cut-off, on Flow Instance duration, is supported in this version. Only Cut-off Duration time can be specified for this cut-off type; warning time is not supported on duration.

This cut-off type does not use a calendar definition.

This cut-off type can be used even in unplanned situations, in which a variable and unpredictable number of Flow Instances run daily: it is evaluated not on an absolute time like the completion cut-off, but on a relative time, and is hence evaluated as soon as a new Flow Instance is monitored in the Manager.
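
A minimal sketch of the relative evaluation follows (assumed logic, illustrative names, not product code):

    import java.time.Duration;
    import java.time.Instant;

    final class DurationCutoffSketch {
        static boolean isCutOff(Instant start, Instant end, Duration cutoffDuration) {
            // Running instances are evaluated against the current time, so a
            // violation can be raised before the instance completes.
            Instant reference = (end != null) ? end : Instant.now();
            return Duration.between(start, reference).compareTo(cutoffDuration) > 0;
        }
    }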

Constraints:

Applies only to Recognized Flow Instances

Relationships: none.

Use cases:

Modeler.Create/Edit Flow: Define a cut-off

OSEngines.RuleEngine: in order to evaluate duration-based status for each Flow Instance

Monitor.WebUI: in order to browse Flow Instance duration-based status on the online DB

OSEngines.NotificationEngine: in order to notify duration-based status on a supported notification provider

Reporting: in order to evaluate grouped duration-based status statistics.

Discovery

Description: a human-driven phase that analyzes all the evaluated Flow Instances in order to identify flow runtime occurrences.

In real MFT infrastructures it usually happens that a file containing the same kind of data (e.g. blacklisted credit cards) is sent on a regular basis (daily, daily on working days, weekly, monthly); that some form of naming convention exists, based for instance on the file name (the first 4 characters are the flow name), on the directory name (all files with blacklisted credit card numbers are in directory C:\prodstage\ongoing\blkcc), or on the correlationId for more sophisticated MFT products; and that each day an informational suffix (a date, for example) differentiates each actual file of that flow.

Several attributes can then be used during the discovery phase in order to identify all occurrences of the same flow, and to discover each used pattern, such as source file name, destination file name, source or destination, and others.

A filter mechanism can be exploited during the occurrence identification phase, in order to restrict the analyzed rows; a grouping mechanism is available on those rows and, acting on columns and rule evaluation, helps in finding the effective key that will be used later on to consider a group of Flow Instances as occurrences of a flow.

This logical key, named recognition criteria, can be used for:

Bottom-up Flow Definition creation: all the occurrences of a flow can be used to give rise to their Flow Definition. This phase is performed through the Manage Criteria + Analyze use case. Flow Instance occurrence attributes will be used when creating the Flow Definition.

Consolidation: the criteria will be applied to every new Flow Instance entering the system after the human discovery phase has been performed, in order to automatically identify new occurrences for this and every defined criterion; this phase is performed through the Apply Criteria use case.

Runtime FlowInstance/Flow Definition matching phase (see Correlation - FlowInstance/Flow Definition recognition).

This Discovery phase is available for fine-grained Flow Instances, and for coarse-grained Composite Flow instances too. For Composite Flows, the Discovery phase starts again from Flow Instances, and a use case is available in order to identify all instances belonging to the same Composite Flow Instance before an appropriate criterion is created.

Constraints:

Discovery for Composite Flows including SPFE steps is not supported in this release.

Relationships:

config/middlewareLoader.properties: in order to define default entities like company, location and environment that are used while discovering flows and creating Flow Definitions. These default values can be changed in the WebUI during the discovery phase.

Use cases:

Services.Flow Discovery.Create/Apply/Analyze Criteria

Discovery - Consolidation Phase

Description: in a discovery phase, at one point in time, monitored Flow Instances are analyzed in order to identify recurrences of the same flow. All identified recurrences give rise to recognition criteria, which will be analyzed in order to create a Flow Definition, and used at runtime in order to identify any other Flow Instance that runs later on.

This process can be performed inside or outside of the tool. The Orchestration Suite Flow Discovery use case helps in achieving this goal.

The Consolidation process helps in understanding if the whole Discovery phase is completed, that is, if every flow running in the file transfer infrastructure has been identified.

It tries to apply static recognition criteria to the whole database, in order to match every Flow Instance against its static criteria. At the end of the process, every Flow Instance is either bound to its static recognition criteria, or it is a new recurrence, and new recognition criteria must be created for it.

This process helps to guarantee that every file transfer running in the infrastructure has been identified.

Constraints:

The Consolidation process is mandatory in an Advanced Modeling adoption scenario, where every flow running in the infrastructure must be identified and defined, and governance must be introduced for its lifecycle.

Relationships: none.

Use cases:

Services.Flow Discovery.Apply Criteria

Environment

Description: an entity used to model runtime environments. In enterprise MFT infrastructures, stages/environments exist for testing purposes (e.g. Production, Integration Test, System Test) or for business purposes (e.g. Compliance Env, Backup, Storage, Brokering Env), and files move between environments. All customer environments can be represented inside the Modeler; later on, this business information can be used for Flow Business Monitoring and for Reporting purposes.

Constraints: none

Relationships:

Each Company has one or more environments

Each Location is involved in one or more Environments

Each Item and Application is usually deployed on a Company/Location/Environment

Flow Steps are deployed on Company/Location/Environments

SPAZIO QMgr/Queues are deployed on a Company/Location/Environment

Use cases:

Modeler.Topology: Create/Manage Environment

Modeler.Location: Bind Environment

Modeler.Item and Modeler.Application: Deploy

Modeler.Flow: Deploy Step

Flow Definition and Flow Governance

Description: a sequence of Steps executed inside the file transfer infrastructure on Items, modeling data-moving scenarios of the target infrastructural domain.

Flow definitions can be created either in:

Advanced Monitoring scenario: flows for which a cut-off rule or a notification filter has to be defined.

Advanced Modeling scenario: each and every Flow Instance running in the file transfer infrastructure has a corresponding Flow Definition in the Modeler, in order to have a complete governance process.

Flow definitions can be created either through:

Modeling: in a top-down approach, from scratch

Discovery: in a bottom-up approach, analyzing running instances.

A Flow Definition has a Governance Status and a version associated to it; this information changes according to the modeling operations performed on it.

Basic governance operations on Flow Definitions:

Creation, edit: the system keeps track of the user who performed the update on the flow.

Flow versioning: each flow definition version is maintained in the Modeler, in order to track its lifecycle.

Delete: a flow in New status is physically deleted; a flow in Active status is logically deleted: it is maintained in the Modeler, and can be used to create a new one through the Duplicate use case.

View all the information associated to a flow.

Validation: triggered by an explicit action, performs basic consistency checks for flows and their related information.

Duplication: an existing flow can be duplicated, defining a new name for the copy and resetting the version number; this action is mainly used to define a generic flow template and then create copies of it, completing missing information as necessary. A more sophisticated duplicate action is also available, called advanced duplicate, implementing a "paste special" pattern that involves attribute inheritance and specialization from the main Flow Definition to the advanced duplicates.

Deploy of a flow: allows the configuration of operations that act on the infrastructure and that represent the execution of a flow; by default, this is an empty phase.

Definition of Senders and Recipients. These can be Users or Applications.

Definition of time based rules: warning and cut-off.

Definition of a priority.

At Flow Definition level no distinction is made between Flows and Composite Flows. Some Flow Definitions are intended to be monitored at runtime as Composite Flows due to the granularity of their steps. As an example, the IBM WMQ FTE product supports coarse-grained steps, like sending a directory or a FileGroup, and for this reason it is monitored also as a Composite Flow.

Constraints: none.

Relationships: none.

Use cases:

Modeler: Create Flow, Manage Flow, for top-down definition or flow update

Services.Flow Discovery: Analyze Criteria, for bottom-up definition.

Flow Definition: validation and extraction

Description: a process that verifies a flow against a customizable algorithm (through a validation plug-in) and, after successful completion, moves the flow into the Validated status.

Flow extraction is performed by selecting a flow or a group of flows in the Modeler.Manage Flow use case and starting the extract action. It triggers the optional extraction plug-in and changes the status of the flow to Active.

In default installations, Validation and Extraction phases do not perform any action.

Constraints:

Validation and Extraction, even when empty, are needed in order to activate Flow Definitions and enable their matching with Flow Instances.

Relationships: none

Use cases:

Modeler.Manage Flow: select one or more Flows and validate them; using the same use case you can also extract them after validation.

Flow Definition - Step

Description: flows are composed of steps, representations of execution activities performed in the MFT/FT infrastructure.

Steps are grouped into:

Data Moving steps, like "FTP put", "e-mail send", "MQFTE send".

Mediation steps, that is, steps performed by mediation brokers, like IBM WMQ or RDJ.

Utilities, that is, steps specific to the deployment domain that can be used while modeling Data Flows.

Steps can be defined by the end user in the Modeling phase.

Steps are evaluated by the OSManager during the Discovery phase of a Flow.

For each predefined Step, it is possible to specify the attributes of the corresponding operation; for instance:

Source QMgr, Source Queue, Sender, CorrelationId for Spazio Steps.

Source and Destination Agent and directory, bin or data for IBM WMQ FTE steps.

FTP target Server, bin or data, user and password node for FTP steps.

Constraints:

Some types of steps managed by the OSManager are not available during the Modeling phase, like Thema-related Steps. Nevertheless, these steps are available during the Monitoring and Discovery phases, enabling Protocol update phases.

Relationships:

A flow is implemented through steps

Steps can be associated with one another to define a graph based on parent/child relationships.

Steps are associated to one or more Location/Environments.

Steps act on Items, sending or receiving them.

Use cases:

Modeler.Create/Edit Flow: 3. Define Step.

Flow Instance

Description: a Flow Instance is the uniquely identifiable representation of the runtime execution of a flow.

Constraints:

A Flow Instance has a unique id that makes it unique inside the OSManager, and lets the end user distinguish one instance from any other. This unique id is generated incrementally by the OSManager.

Relationships:

A Flow Instance has a Source and a Destination, in terms of QueueManager, Queue, FileName, Date/Time.

A Flow Instance has zero (ghost Flow Instance), one or more Activities associated to it.

A Flow Instance may have different statuses:

Completion Status: running, complete

Middleware Status: error, ok

Time-based Status: not-yet-evaluated, on-schedule, warning, cutoff

A Flow Instance may have a recognized Flow Definition associated to it, through a Dynamic Recognition Criterion.

A Flow Instance has Governance notes, status, owner, and their history, associated to it.

Use cases:

Monitor.WebUI.Flow Sheet, in order to monitor Flow Instance status

OSEngines.ActivityLoader, in order to create Flow Instances and correlate activities to them

OSEngines.ActivityEngine, in order to evaluate Flow Instance status

OSEngines.NotificationEngine, in order to publish Flow Instance status change

Report, in order to have some basic statistical information about Flow Instances that have run in the infrastructure, and their status

Monitor.WebUI.LogicalArea and QMgr/Queue Sheet, in order to have statistical information about Flow Instances that run in the infrastructure, and their status

Cleaner, in order to delete/archive/export Flow Instances and their activities

OSEngines.FlowProfiler: in order to evaluate which users can monitor this Flow Instance, depending on their security role, and depending on their business role (sender, receiver, bound to the Logical Area to which this Flow Instance belongs).

Flow Instance - Ghost

Description: fictitious Flow Instance created when a time rule violation occurs for a modeled flow and no activities for that flow have ever entered the processing queue. It is a Flow Instance in warning or cut-off status, with no activities correlated to it.

Constraints: none.

Relationships: none.

Use cases:

Monitor.WebUI.Flow sheet

OSEngines.RuleEngine: produces the ghost Flow Instance when a warning time has expired and no activities have ever been correlated/summarized for that Flow Definition.

Generic properties

Description: name/value pairs, fully customizable, used for adapting Orchestration Suite to the target infrastructural domain. It is possible to define generic properties using the command line. These values can be bound to several data types in the Modeler WebUI, or used in custom plug-ins developed to customize several governance phases.

Constraints: none

Relationships:

Users can have additional properties

Locations can have additional properties.

Use cases:

sposcsh.helpDetailKeyManagement()

Heartbeat

Description: a message sent to the Orchestration Suite server node as Activity Data in order to signal to OrchSuite that the agent is up and running, and to communicate the state of the agent (enabled/disabled).

Constraints: none.

Relationships:

OSAgent, Spazio Agent

Use cases:

WebUI->Troubleshooting->ActivityData

Item

Description: the representation of data assets managed in the MFT infrastructure. An item is any data content, like print, files, CD, VD, messages, packages, that is part of the processes executed in the infrastructure, and for which governance and lifecycle management services are needed.

An Item, considered as a logical entity, can be physically represented by:

A File, that is, the most common data asset on which MFT products work

A FileGroup, denoting a group of files sent recurrently using a regular expression

A Directory, denoting a directory which is sent at once, recursively, and jointly with its content.

As part of the governance process, an Item is usually deployed on more than one location/environment, and it has specific physical details according to the location type on which it is deployed.

Constraints:

Once defined, the type for an Item cannot be changed; a FileGroup cannot become a Directory

An Item of type FileGroup always has the same physical properties, regardless of the platform on which it is deployed

An Item of type Directory cannot be deployed on a location of type MAINFRAME

Items of type FileGroup and Directory can only be specified for IBM WMQ FTE steps.

Relationships:

A flow is based on items: flow steps act on items.

An item can be deployed on more than one location/environment; more than one item can be deployed on the same location/environment.

Items of type FileGroup and Directory are created, during the Discovery phase, for IBM WMQ FTE Composite Flows, according to the recognition criteria used for that Composite.

A criterion like starts with, ends with, or contains, specified on the source file name, is assumed to be related to regular expression usage while sending files, and is therefore associated with a FileGroup Item type during the Discovery of that Composite Flow.

A criterion specified on a Repository/Endpoint pair is assumed to be related to directory usage while sending files, and is therefore associated with a Directory Item type during the Discovery of that Composite Flow.

Use cases:

Modeler.Item.Create/Manage

Modeler.Flow.Create/Manage: 5. Select Item for steps

Services.Flow Discovery.Analyze Criteria: Items are derived, using Flow Instance file-based information, while creating a flow using a bottom-up method.

Job

Description: steps of a Flow can be executed inside a job, which is usually scheduled, that is, invoked by a scheduler tool. In the Modeler it is possible to specify the job in which each flow step is executed. A Job can be a JCL script in mainframe environments, a shell script in Unix environments, or a batch (.bat) script in Windows environments; it is executed in a Location/Environment, and its name and extension are used during the Flow Extract phase, should a deploy plug-in be used to create jobs to be actually deployed in the MFT infrastructure.

Some MFT subsystems produce job information in their logs; these job values are shown and can be filtered in the Middleware Section of the Monitor, while jobs defined in the Modeler are shown and filtered in the Business Section of the Monitor.

Job information is not captured during the Discovery phase; it is hence one of the pieces of business information that can be defined for a Flow, during the Modeling or Discovery phase, and later used in business monitoring and for reporting purposes.

Constraints:

All the steps associated to the same job must be deployed on the same location/environment.

Relationships:

A job can contain more than one step.

Use cases:

Modeler.ManageFlow.Search Flow by job

Monitor.MiddlewareFilter.SearchFlow by middleware job

Monitor.BusinessFilter.SearchFlow by business job.

Location

Description: a place where flows are executed and data moving operations are carried out. It may be a server or, in broader terms, a branch office.

Environments are defined on each location, and repositories, like FTP servers and clients, IBM WMQ FTE Agents, and Spazio Queue Managers, are defined over the location/environment binding.

Location information is one of the pieces of business information that can be defined for a Flow, during the Modeling or Discovery phase, and later used in business monitoring and for reporting purposes.

Constraints: none.

Relationships:

More than one environment may exist on the same location.

Use cases:

Modeler.Topology.Create/Manage Location

Modeler.Flow.Create/Edit Flow.4. Deploy on Topology

Modeler.Item.Create/Edit.2. Deploy Item

Modeler.Application.Create/Edit.3. Deploy Application.

Logical Area

Description: a classifier and grouping mechanism used to partition the entire flow space according to organizational needs, classification, and responsibilities. It is based on a hierarchical mechanism, customizable according to the enterprise deployment environment.

It can be used for Business Monitoring, LA Monitoring, Security, or Reporting purposes.

In WebUI.Monitor.LogicalArea, a summary view is evaluated on Flow Instances, counting total, error, running, and cut-off Flows in a single, comprehensive view.

When Security is active, access to monitoring Flows can be profiled according to the users associated to Logical Areas: only users associated to a specific LA with the "restricted reader" role can monitor Flow Instances recognized against Flow Definitions that are part of that LA.

Logical Area information is one of the pieces of business information that can be defined for a Flow, during the Modeling or Discovery phase, and later used in business monitoring and for reporting purposes.

Constraints: none.

Relationships:

Users are bound to one or more LogicalAreas

Flows can have a LogicalArea associated to them.

Use cases:

Modeler.Users.

Modeler.Flow: 1.Insert Flow Data, a Logical Area can be optionally specified.

Monitor.WebUI.Flow sheet: when security is on, users can have a partial view on monitoring data according to their role.

Monitor.WebUI.Logical Area sheet on the online DB: summary of flow status grouped by Logical Area.

Report.Logical Area: the same summary available from the command line.

OSEngines.FlowProfiler: evaluates Flow Instance/user bindings, for security and data access profiling reasons.

Logical Area Type

Description: Basic Grouping mechanism for Logical Areas.

Constraints: none.

Relationships: none.

Use cases: none.

Payload

Description: represents the information related to the event being notified, in XML format or in key-value format if a formatter has been applied.

Use cases: WebUI->Configuration->

Persistent DB

Description: the data managed, according to various means and purposes, by Orchestration Suite are stored inside a single DB schema, which can be logically partitioned into modeler, time-based rules, flow discovery, and monitor data. Data coming from the business environment, consisting mostly of activity logs, are continuously accumulated inside the repository, making it necessary to manage data obsolescence, mainly for performance reasons.

A correct data management policy follows the data lifecycle, taking care to maintain in the live tables of the repository (that is, the most frequently accessed tables, containing the most recent data) only the items actually needed: data no longer necessary for monitoring purposes should be moved to the history repository by using the provided command line utilities (DBBrowser) and accessed using the Monitor:History use case through the Web User Interface (current data, on the other hand, are managed using Monitor:On-line). After a reasonable period of time, which may vary according to customer business needs, archived data can be exported from the historical repository and saved in a secure manner using external methods.

Refer to Data Management, Chapter 1 for additional details.

Use cases:

Monitor:OnLine Browsing, History Browsing

Cleaner.

Recognition Criteria

Description: Recognition Criteria are used in order to identify Flow Instance recurrences. Recognition Criteria, both static and dynamic, have the same format:

Set of fields: CorrelationId; SourceFileName; DestinationFileName; Source QueueManager; Source Queue; Destination QueueManager; Destination Queue; Description.

A set of basic rules is supported (a matching sketch follows this list):

exact matching

match on first n characters

match on last n characters

contains a specific set of characters in the position between x and y.
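
The following sketch shows one plausible reading of these four rule types applied to a single field, such as SourceFileName (0-based positions are an assumption here; this is illustrative code, not the product implementation):

    final class RecognitionRuleSketch {
        enum Kind { EXACT, FIRST_N, LAST_N, BETWEEN }

        // value: the Flow Instance field; pattern: the characters to match;
        // x, y: positions for the BETWEEN rule (0-based, end-exclusive).
        static boolean matches(Kind kind, String value, String pattern, int x, int y) {
            switch (kind) {
                case EXACT:   return value.equals(pattern);
                case FIRST_N: return value.startsWith(pattern); // first n characters
                case LAST_N:  return value.endsWith(pattern);   // last n characters
                case BETWEEN: return x >= 0 && x <= y && y <= value.length()
                                     && value.substring(x, y).contains(pattern);
                default:      return false;
            }
        }
    }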

Static Recognition criteria are used during the Discovery phase.

Dynamic Recognition criteria are used during the Runtime Correlation phase.

Static and Dynamic Criteria are usually equal. They may differ when, for file transfer infrastructure refactoring reasons (normalization, standardization, cleaning), Flow Instances are discovered using one criterion (the static criterion) and are re-deployed and later on recognized using a different criterion (the dynamic criterion).

Constraints:

Recognition criteria depend heavily on the properties and naming conventions existing in the analyzed file transfer environment. Recognition Criteria, if created without due care, may then result in wrong/ambiguous recognitions.

There is no automatic way to check whether two defined criteria conflict with each other: they could be defined on two different columns, and the conflict arises only at runtime, when actual instances with all their values are summarized.

Relationships:

A Flow Definition has one static and one dynamic recognition criterion

Recognized Flow Instances are bound to the corresponding Flow Definition through a static and a dynamic recognition criterion

A Flow Definition is bound, through recognition criteria, to a set of Flow Instances, runtime recurrences for that definition

When static and dynamic recognition criteria differ, a Flow Definition may be bound to two different subsets of Flow Instance recurrences.

Use cases:

Modeler.Flow.Create/Manage: 9. Recognition Criteria: in order to create/update flow criteria.

Services.Flow Discovery.Create/Manage/Apply Criteria: for criteria discovery, editing, and analysis in order to create the corresponding Flow Definition, and for applying criteria for consolidation purposes.

OSEngines.ActivityCorrelationEngine: uses existing dynamic criteria during correlation in order to identify a Flow Instance and bind it to the corresponding Flow Definition.

Ambiguous Recognition Criteria: set extended recognition through the flowRecognition.checkAmbiguousCriteria property in the config/services.properties file, in order to have ambiguous recognitions displayed during monitoring in WebUI.Monitor.
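
For example, a sketch of the relevant config/services.properties entry (the value shown is assumed for illustration; verify the exact accepted values against the shipped template):

    # Enable extended recognition so that ambiguous recognitions are
    # displayed in WebUI.Monitor (value assumed, check the product template).
    flowRecognition.checkAmbiguousCriteria=extended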

Recognition Criteria - Dynamic

Description: dynamic recognition criteria help the Flow Instance Correlation phase, in order to identify a Flow Definition for a Flow Instance.

These are implicitly created during the Flow Bottom-up creation phase, or explicitly created during the Flow Top-Down creation phase.

Flow Instances can match more than one Recognition Criterion; this can happen when the Criteria do not define a partition on the Flow Definition set. As an example, two criteria are defined:

one criterion on the source repository, meaning that all Flow Instances that start from Rep1 are recurrences of Flow Definition Checks

one criterion on source filename, meaning that all Flow Instances that have Orders in the source filename are recurrences of Flow Definition Orders.

You must make sure that Flow Instances with Orders in the filename do not start from Rep1; otherwise those Flow Instances will match both of the defined criteria.

Ambiguous Recognition Criteria can be caught.

Constraints:

They exist only when a Flow Definition exists. They follow the lifecycle of a Flow Definition and are then valid only when a flow is in active status

In order to have Flow Definition/Flow Instance matching, dynamic recognition criteria must be defined for a Flow Definition when it is moved to the Active status.

Relationships: none.

Use cases: none.

Recognition Criteria - Static

Description: static recognition criteria help in the Discovery phase, in order to check whether every Flow Instance recurrence has been identified, and to check that Flow Instances not yet identified do not exist.

Can be created either:

during the Discovery phase, through the analysis of Flow Instance recurrences; in this case, the corresponding dynamic criterion does not exist until this static criterion is analyzed and a Flow Definition is created in the Modeler

during the Flow Top-Down creation phase: in this case, static and dynamic criteria are equal.

Constraints:

They are created before Flow Definitions, in a discovery phase

They can exist even if the corresponding dynamic criterion does not, in a discovery phase

When bound to a Flow Definition, they cannot be deleted

If the Flow Definition is in New status, delete the flow before deleting the static criterion

If the Flow Definition is in Active status, deleting it will cause it to be moved to the Deleted status. The bound static criterion is kept, for traceability reasons, and a new static criterion can now be created, even if identical to the previous ones

When the corresponding Flow Definition version changes, the static criterion and all the original bindings are kept, for traceability reasons.

Relationships: none.

Use cases: none.

Summary – Flow Business Status Evaluation

Description: the process that evaluates the runtime completion status for a Flow Instance that has been recognized as an instance of a specific Flow Definition. In this case the completion algorithm takes into account the step composition specified for that Flow Definition: a Flow Instance that has been recognized against a Flow Definition is considered complete as soon as the last step specified in the Flow Definition is evaluated as completed.
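
A minimal sketch of this last-step rule follows (assumed logic, illustrative names, not product code):

    import java.util.List;

    final class BusinessCompletionSketch {
        // stepCompleted follows the step order of the Flow Definition:
        // the instance is complete once the last step has completed.
        static boolean isComplete(List<Boolean> stepCompleted) {
            return !stepCompleted.isEmpty()
                && stepCompleted.get(stepCompleted.size() - 1);
        }
    }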

Constraints:

This algorithm applies only to Flow Instances that have been Recognized.

Relationships:

This algorithm is performed by the OSEngines.ActivityCorrelationEngine.

Use cases:

Modeler.Create/Edit Flow: 3. Define Step

Services.FlowDiscovery: Analyze Criteria

Monitor.WebUI: Flow Instance status is browsable

Report: Flow Instance status is included in the basic report feature

OSEngines.NotificationEngine: Flow Instance status can be notified

Modeler.Create/Edit flow: flow notification filters can be defined.

Summary – Flow Instance Middleware Status Evaluation

Description: process that evaluates the runtime completeness status (complete, still running) and the middleware status (in error, ok) for a Flow Instance from a middleware perspective.

Flow instance middleware status is evaluated using the information that comes from the activity records belonging to that instance.

Each flow execution status is evaluated according to the properties of the monitored MFT/FT product; some examples follow:

An FTP flow caused by an FTP put step is evaluated complete as soon as the file arrives on the target FTP server.

A Spazio multihop file transfer is evaluated complete as soon as the file arrives in the target local queue at the end of its transmission path.

A WMQFTE single file transfer is considered complete as soon as it arrives on the target FTE Agent; should this file transfer be part of a coarse-grained operation (like sending a directory or using regular expressions), a coarse-grained completion algorithm is needed.

Constraints:

This algorithm applies to unrecognized Flow Instances; for modeled/recognized Flow Instances see Summary - Business Flow Status.

Relationships:

This algorithm is performed by the OSEngines.ActivityCorrelationEngine.

Use cases:

Monitor.WebUI: Flow Instance status is browsable

Report: Flow Instance status is included in the basic report feature

OSEngines.NotificationEngine: Flow Instance status can be notified.

User

Description: entity representing an actual user of the file transfer infrastructure.

User information is not captured during the Discovery phase; it is hence one of the pieces of business information that can be defined for a Flow, during the Modeling or Discovery phase, and later used in business monitoring, for Security, and for reporting purposes.

Constraints: none.

Relationships:

Users have roles, userId and password, used when Security is enabled

Users can be associated as senders or receivers to flows

Users can be defined as responsible for applications

Users are bound to Logical Areas.

Use cases:

Modeler.User Create/Edit/Delete

Modeler.Flow Create/Edit – Define User as a Sender/Receiver

Whole WebUI – when security is on, use case profiling is active, based on logged-in users and their role

Monitor WebUI – when security is on, Flow Instance data access profiling is active, based on logged-in users, their role, and their relationship with the running flow (are they senders? receivers? bound to the Logical Area defined for the matching Flow Definition?).

Appendix A Orchestration Suite Software Prerequisites

Appendix B Regular expressions for configuring automation

The following describes the scheduling expressions that can be specified.

Special characters

* ("all values") - used to select all values within a field. For example, "*" in the minute field means "every minute".

? ("no specific value") - useful when you need to specify something in one of the two fields in which the character is allowed, but not the other. For example, if I want my trigger to fire on a particular day of the month (say, the 10th), but don't care what day of the week that happens to be, I would put "10" in the day-of-month field, and "?" in the day-of-week field. See the examples below for clarification.

- ("range") - used to specify ranges. For example, "10-12" in the hour field means "the hours 10, 11 and 12".

, ("additional values") - used to specify additional values. For example, "MON,WED,FRI" in the day-of-week field means "the days Monday, Wednesday, and Friday".

/ ("increments") - used to specify increments. For example, "0/15" in the seconds field means "the seconds 0, 15, 30, and 45". And "5/15" in the seconds field means "the seconds 5, 20, 35, and 50". You can also specify '/' after the '*' character - in this case '*' is equivalent to having '0' before the '/'. "1/3" in the day-of-month field means "fire every 3 days starting on the first day of the month".

L ("last") has different meaning in each of the two fields in which it is allowed. For example, the value "L" in the day-of-month field means "the last day of the month" - day 31 for January, day 28 for February on non-leap years. If used in the day-of-week field by itself, it simply means "7" or "SAT". But if used in the day-of-week field after another value, it means "the last xxx day of the month" - for example "6L" means "the last Friday of the month". When using the "L" option, it is important not to specify lists, or ranges of values, as you'll get confusing results.

W ("weekday") used to specify the weekday (Monday-Friday) nearest the given day. As an example, if you were to specify "15W" as the value for the day-of-month field, the meaning is: "the nearest weekday to the 15th of the month". So if the 15th is a Saturday, the trigger will fire on Friday the 14th. If the 15th is a Sunday, the trigger will fire on Monday the 16th. If the 15th is a Tuesday, then it will fire on Tuesday the 15th. However if you specify "1W" as the value for day-of-month, and the 1st is a Saturday, the trigger will fire on Monday the 3rd, as it will not jump over the boundary of a month's days. The "W" character can only be specified when the day-of-month is a single day, not a range or list of days.

Appendix C DB2 Monitoring in Orchsuite

The objective of DB Monitoring is to:

Collect information on the configuration of the system and the database, on the status of the system and the database, and on performance, to be sent if required to Primeur Support. The above information should have historical depth: in other words, even if sent only when necessary (for example, to investigate a performance problem), the information sent must be able to describe not only the situation at the moment of the problem, but also during previous periods, allowing the behavior of the system to be correlated, as far as possible, with its status and its configuration.

The area of monitoring includes:

the configuration of the DB2 Instance

the configuration of the Orchestration Database

the status of the objects (tables, indices, buffer pools, etc.) used by Orchestration

statistics on the execution of the utilities (mainly Runstats and Reorg)

runtime statistics at the Database and SQL query level.

The solution records the history of the collected data in relational tables.

The solution implemented includes:

1 A scheduler that activates the required functions at a specified time.

2 A script that retrieves and stores the relevant monitoring information in a set of predefined tables.

3 A script that deletes the old data whenever new data are collected.

The historical data accumulated can be subsequently exported using specific tools and sent to Primeur Support.

In order for the information collected to be adequately detailed, a number of Switches must be set at the DB2 instance level. Since their setting concerns all the databases managed by that instance, it was decided to opt for a more limited setting when the instance is shared by more than one database, and to activate some additional Switches only when the instance is dedicated to Orchestration Suite.

It is up to the user to decide whether to install the Orchestration DB in a dedicated or shared environment. During installation, the following variable must be configured in the file <product_home>/bin/installationParams(.bat/.sh):

SP_OS_RDBMS_DB2_SHARED_INSTANCE=YES/NO
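
For example, a sketch of the relevant line in installationParams.sh (shell variable syntax assumed; the .bat variant would use the equivalent set command):

    # Declare whether the DB2 instance also hosts other databases; NO marks
    # it as dedicated, enabling the additional monitoring Switches above.
    SP_OS_RDBMS_DB2_SHARED_INSTANCE=NO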

The critical data on the current status are obtained from the DB2 monitor tables and are summarized in tables of the Orchestration DB.

Appendix D Notification Topic List

The following is a list of the defined topics.

Modeler:

Topic_ModelerFlowActivate

Topic_ModelerFlowDelete

Topic_ModelerFlowUpdate

Topic_ModelerUserCreate

Topic_ModelerUserDelete

Topic_ModelerUserUpdate

Monitor:

Topic_MonitorActivityInsertedFt.Legacy

Topic_MonitorFlowComplete

Topic_MonitorFlowCutoffCompletion

Topic_MonitorFlowCutoffDuration

Topic_MonitorFlowError

Topic_MonitorFlowOnTimeAfterExpiredCompletion

Topic_MonitorFlowOnTimeAfterExpiredDuration

Topic_MonitorFlowOnTimeCompletion

Topic_MonitorFlowOnTimeDuration

Topic_MonitorFlowRecognitionAmbiguous

Topic_MonitorFlowRunning

Topic_MonitorFlowWarning

Topic_MonitorOperationComplete

Topic_MonitorOperationError

Topic_MonitorOperationRunning

Report:

Topic_ReportGeneration

Appendix E Step types

The following step types have been introduced and are used in each product component:

Spazio Dispatch: the usual Spazio MFT/S Dispatch operation

Spazio Acquirer: the usual Spazio MFT/S Acquire operation

Spazio Tx Pr4: used to model flows entering myCompany, or exiting from myCompany, in order to have them correctly evaluated

Spazio Client Send: used for send operations executed through Spazio Clients

Spazio Client Receive: used for receive operations executed through Spazio Clients

FTP Put: used to represent FTP Put operations to FTP Servers or to a Spazio Mailbox; used for Spazio FTP transports moving files to FTP Servers

FTP Get: used to represent FTP Get operations from FTP Servers or from a Spazio Mailbox; used for Spazio FTP transports pulling files from FTP Servers into a Spazio Mailbox

FTPS Put: like FTP Put, but for the FTP/S protocol

FTPS Get: like FTP Get, but for the FTP/S protocol

SFTP Put: like FTP Put, but for the SFTP protocol

SFTP Get: like FTP Get, but for the SFTP protocol

HTTP upload: used for send operations executed to a Spazio Mailbox using the HTTP protocol

HTTP download: used for receive operations executed from a Spazio Mailbox using the HTTP protocol

HTTP/S upload: like HTTP upload, but using the HTTP/S protocol

HTTP/S download: like HTTP download, but using the HTTP/S protocol

E-mail Send: used when sending files with attachments to a Spazio MFT/S, or when Spazio MFT/S sends a file as an attachment using SMTP to a target e-mail address

MQFTE Send: used to represent IBM WMQ FTE send operations

C:D copy to: used for C:D copy to operations

C:D copy from: used to represent C:D copy to operations that implement a receive pattern, when Primary and Secondary nodes are inverted

Tx C:D: used while discovering C:D flows, when the actual operation executed is unknown

XFB/CFT send: used for XFB/CFT send operations

XFB/CFT receive: used for XFB/CFT receive operations

NetView FTP Send: used for NetView FTP send operations

NetView FTP Receive: used for NetView FTP receive operations

Tx NetView FTP: used while discovering NetView FTP flows, when the actual operation executed is unknown

XCOM send: used for XCOM send operations

XCOM receive: used for XCOM receive operations

Tx XCOM: used while discovering XCOM flows, when the actual operation executed is unknown.

Appendix F Report Table

The report table can be freely used by customers in order to create their own reports using the standard reporting tool adopted.

The table schema included below, named REPORT_FLOW_INSTANCE, can be freely used.

Its definition can be found in the file 1.Monitor_table.sql, under the db\install directory of the product installation path.

Table 1 – Flow Notification Record Format

Table 2 – Cutoff Status Values
