Taking the Secure Migration Path to IT Virtualization


Page 1

WWW.ZJOURNAL.COM | AUGUST/SEPTEMBER 2008

THE RESOURCE FOR USERS OF IBM MAINFRAME SYSTEMS

INSIDE

FROM THE PUBLISHER OF

Taking the Secure Migration Path to IT Virtualization

Making Business Sense of Your Network Traffic | How to Leverage Data Imaging for DB2 Tables

Extending z/OS With Linux: A Multi-Protocol File Exchange Gateway | Workload Manager: Common Myths

Page 2

Copyright © 2008 illustro Systems International, LLC. All Rights Reserved. All trademarks referenced herein are trademarks of their original companies.

toll-free U.S. and Canada: 866.4.illustro (866.445.5878) • phone: +1.214.800.8900 illustro.com

Never mind what your mother told you. In your world of multiple platforms and databases, it’s not only okay to talk to strangers, it’s essential. That’s where z/XML-Host™ comes in. z/XML-Host makes it Easy for your mainframe 3270 applications to talk with any other platform using XML and SOAP-based Web Services. This includes .NET and Java applications–even Microsoft Excel can access your mainframe data Easily.

And since z/XML-Host runs directly on your mainframe, you can meet strangers on the same day you complete the Easy installation.

Visit illustro.com/strangers and learn more. Even download a fully-functional version of the software to try for Free.

Internet-enabling your mainframe with illustro’s z/XML-Host? Now you’re talking!

Copyright © 2008 illustro Systems International, LLC. All Rights Reserved. All trademarks referenced herein are trademarks of their original companies.

toll-free U.S. and Canada: 866.4.illustro (866.445.5878) • phone: +1.214.800.8900 illustro.com

Just like Mr. Shultz here, if you continue to use the ol’ 3270 interface, you might need to find cardboard and a permanent marker. Your 3270 screens make your users “think” you are using outdated technology. They expect an up-to-date interface or they just might take their business elsewhere. So if you don’t transform your 3270 screens quickly, you’re going to need to buy comfortable shoes.

With z/Web-Host™ you can transform any 3270 application into a rich, Web-browser interface that anyone can access and understand without training. And as the first solution to deliver AJAX support for mainframe applications, z/Web-Host can even transform your interfaces so they function like a Windows application–all from a Web-browser.

Avoid standing in a busy intersection. Simply visit illustro.com/cardboard today and experience a live demo of the software. We’ll even build a prototype of one of your applications absolutely FREE… within one week!


Page 4


ARTICLES

6   Web Application Firewalls and the PCI Standard | By Peter Spera & Richard Layer
12  Taking the Secure Migration Path to IT Virtualization | By Ivan Wallis
18  Peering Into the IBM CICS Transaction Gateway Black Box | By Simon Knights, Jim Harrison & Graham Hannington
27  z/Product Profile: Verastream Host Integrator 6.6 | By Denny Yost
30  A New Paradigm in Capacity Management | By Neil Blagrave
36  Making Business Sense of Your Network Traffic | By Warren Jones
43  z/Product Profile: FileMarvel by CSI International | By Denny Yost
44  Maximize Web Services Performance Using MTOM/XOP Support in CICS Transaction Server V3.2 | By Darren Beard, Ph.D.
48  Going Back in Time: How to Leverage Data Imaging for DB2 Tables | By Susan Lawson & Dan Luksetich
59  Aggregation of CICS Transactions With the Service Flow Feature | By Fred Stefan, Benjamin Storz & Paul Herrmann, Ph.D.
65  Want to Know What DB2 Is Doing? Take a Closer Look at DB2’s Trace | By Willie Favero
70  Extending z/OS With Linux: A Multi-Protocol File Exchange Gateway | By Kirk Wolf & Steve Goetze
73  Workload Manager: Common Myths | By Gerhard Adam
75  Enabling Greater Business Efficiency With Linux on System z | By Charles Jones

COLUMNS

4   Publisher’s Page | By Bob Thomas
16  Compliance Options: Compliance Conversations | By Gwen Thomas
28  Linux on System z: Poetic License | By David Boyes, Ph.D.
40  Pete Clark on z/VSE: Limiting Your Stored Mainframe Data Risks | By Pete Clark
42  z/Vendor Watch: The Familiar Face in the Mirror | By Mark Lillycrop
56  Storage & Data Management: If It’s Not Important, Don’t Back It Up! | By Bruce Fisher
58  z/Data Perspectives: DB2 9 Data Format Stuff | By Craig S. Mullins
69  Aligning IT & Business: Automation Is the Key to DB2 Success | By Rick Weaver
80  IT Sense: Don’t Be the Dupe | By Jon William Toigo

CONTENTS | August/September 2008 • Volume 6 / Number 4 • www.zjournal.com

Page 5

Under The Bright Light Of Examination, The Proven Solution Shines Through

Optimize Performance and DB2 Memory Utilization With Buffer Pool Tool® For DB2

RESPONSIVE SYSTEMS
281 Hwy 79, Morganville, NJ 07751
Tel: 732.972.1261  Fax: 732.972.9416
Web: www.responsivesystems.com


Page 6

Publisher’s Page

Bob Thomas


Publisher: Bob Thomas ([email protected])
Associate Publisher: Denny Yost ([email protected])
Editorial Director: Amy B. Novotny ([email protected])
Columnists: David Boyes, Ph.D.; Pete Clark; Bruce Fisher; Mark Lillycrop; Craig S. Mullins; Gwen Thomas; Jon William Toigo; Rick Weaver
Online Services Manager: Blair ([email protected])
Copy Editors: Dean Lampman, Pat Warner
Art Director: Martin W. ([email protected])
Production Manager: Kyle ([email protected])
Advertising Sales Manager: Denise T. ([email protected])

The editorial material in this magazine is accurate to the best of our knowledge. No formal testing has been performed by z/Journal or Thomas Communications, Inc. The opinions of the authors do not necessarily represent those of z/Journal, its publisher, editors, or staff.
Subscription rates: Free subscriptions are available to qualified applicants worldwide.
Inquiries: All inquiries concerning subscriptions, remittances, requests, and changes of address should be sent to: z/Journal, 9330 LBJ Freeway, Suite 800, Dallas, Texas 75243; voice: 214.340.2147; email: [email protected].
For article reprints, contact Wright’s Reprints at 877.652.5295.
Publications Agreement No. 40048088, Station A, PO Box 54, Windsor ON N9A 6J5, Canada.
All products and visual representations are the trademarks/registered trademarks of their respective owners.
Thomas Communications, Inc. © 2008. All rights reserved. Reproductions in whole or in part are prohibited except with permission in writing. (z/Journal ISSN 1551-8191)

z/Journal Editorial Review Board: David Boyes, Pete Clark, Phyllis Donofrio, Willie Favero, Steve Guendert, Mark S. Hahn, Chris Miksanek, Jim Moore, Craig S. Mullins, Mark Nelson, Mark Post, Eddie Rabinovitch, Greg Schulz, Al Sherkow, Phil Smith III, Rich Smrcina, Adam Thornton

z/Journal Article Submission: z/Journal accepts submission of articles on subjects related to IBM mainframe systems. z/Journal Writer’s Guidelines are available by visiting www.zjournal.com. Articles and article abstracts may be sent via email to Amy Novotny at [email protected].

Karl Freund Brings Enthusiasm and Excitement to the System z

Last week I had the real pleasure of visiting with Karl Freund, IBM’s dynamic vice president of System z Marketing. Timing is everything, and he couldn’t have scripted his ascent to head of System z Marketing in January 2008 at a better time. The IBM mainframe had been experiencing a resurgence since the introduction of the z9, but nothing compared to what has happened since the announcement of the much-anticipated z10 this past February. As proof, new z10 systems are being shipped out to customers around the world as fast as they can be built. In an exclusive interview that appears in the current issue of z/Journal’s sister publication, Mainframe Executive, Karl Freund jumped at the opportunity to articulate the primary advantages of IBM’s System z:

• Outstanding performance: The z10 provides 50 percent more performance than the previous generation z9.

• Virtualization leadership: System z is the “gold standard” by which other virtualization hardware and software vendors measure their progress. IBM invented virtualization on the mainframe almost 40 years ago and is way out in front of the market in terms of depth, manageability, and sophistication.

• Consolidation facilities: Thousands of Linux images can simultaneously run on one System z10.

• Manageability: Because the mainframe has been around for more than four decades, its level of manageability exceeds that of any other platform on the market. It can take fewer than half as many systems managers to manage a System z as it does to manage a comparably configured competing system, saving substantial IT budget expenses.

• Security: There’s no other commercial system in the world that has attained an EAL5 security certification for logical partitioning. This means data can’t leak between operating system instances on the System z—which isn’t the case for several other vendors’ virtualization products.

• Energy efficiency: This is a huge advantage, especially over distributed systems architectures, because System z uses highly efficient power supplies, water cooling that has a 3:1 heat dissipation advantage over air cooling, and a high-speed internal network bus that eliminates the need to install a myriad of Network Interface Cards (NICs) and the need to power energy-hungry external hubs and switches.

• Processing power: The System z packs a tremendous amount of processing power into a small footprint. For markets where real estate costs are high and getting higher, and where data centers have maxed out in terms of available space, System z provides an excellent alternative to scaled-out distributed systems.

To read the complete interview with Karl Freund in Mainframe Executive’s online digital edition, go to www.mainframe-exec.com/articles/?p=47. I hope you enjoy this issue of z/Journal! Z

Page 7

[Full-page Control-M advertisement; only print production marks appear in the extracted text.]

Page 8

Web Application Firewalls and the PCI Standard

By Peter Spera & Richard Layer

There was a time when firewalls didn’t exist and workstations or servers chugged along, safe and sound. Times change, and it would be unusual for an enterprise not to have some form of network firewall or firewalls at every network entry point and security zone transition. In addition, personal firewalls are common on almost every desktop and, in some cases, are used as an added precaution or in conjunction with an Intrusion Detection System (IDS) on a server. You might ask, “What more could we need?” As technologies improve, new attack vectors are discovered, explored, and exploited.

Staying one step ahead of the malicious user continues to be a challenge for IT security professionals. Once networks were secured, malicious hackers moved on to their next target: applications. Web servers, database servers, file servers, print servers, etc., all providing and performing their faithful service, opened the door to previously uncharted attack vectors. The servers themselves came under attack from seemingly normal Internet traffic that slipped through the watchful eye of the network firewall. This spurred the creation of a new generation of technologies known as application firewalls. The first generation of these included Web and database application firewalls. Web Application Firewalls (WAFs) are unique in that they monitor HTTP datastreams and are designed to protect >

Page 9

Can your critical application D/R software do this?

Your business continuity success depends on how quickly and completely you recover your critical applications and data.

OpenTech Systems is your single-vendor D/R solution provider for: Application data • System data • Virtual data • DB2 data • Encrypted D/R data

For more information call 1-800-460-3011 or visit www.opentechsystems.com

© 2007 OpenTech Systems, Inc. All rights reserved.

Page 10

Web-facing business applications queried through a Web server, the HTTP Daemon, designed to process any request it receives. By productively doing what it was designed to do, it becomes a tool for the attacker, by responding in some fashion to the attacker’s request. Over time, with many attempts, depending on the skill of the hacker, the Daemon may respond with new and better information that will allow an attacker to breach the Website. WAF security protection permits only valid HTTP code to enter the Web application for processing. This protects business logic against attacks and secures vital credit card, Social Security, and other personal information.

Many early WAF technologies use a negative model, which collects signatures to be used to cancel or block future attacks. One drawback with this approach is that only known attacks or signatures can be detected. Other WAF technologies are designed with positive protection models to stop attacks. This design is often preferred because it is proactive, signature files aren’t necessary, and the intended use of the Website becomes the negative response reference for repelling attacks.

WAF security occupies the space behind the network perimeter defenses facing the Internet and in front of the Web environment, which includes Web servers with such Web-facing applications as the operating system (Linux, OS X, Solaris, Windows, etc.), the HTTP Daemon or server (Apache, IIS, IHS, etc.), the development or deployment framework (WebSphere, WebLogic, etc.), and the database management system (DB2, Oracle, SQL Server, MySQL, etc.). This simple flow and the required components are identified in Figure 1; this consolidated deployment resides in the physically secure System z environment.
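As an illustration only (not from the article), the sketch below shows the essence of a positive protection model: the site's intended use, expressed as an allowlist of URLs and parameter patterns, is the reference for accepting traffic, and anything outside it is rejected without the need for attack signatures. The URLs, parameter names, and patterns are hypothetical.

import java.util.Map;
import java.util.regex.Pattern;

/**
 * Minimal sketch of a positive (allowlist) security model.
 * A request is accepted only if its URL is known and every parameter
 * matches the pattern derived from the site's intended use.
 * All URLs, parameter names, and patterns here are hypothetical.
 */
public class PositiveModelCheck {

    // Intended-use rules: URL -> (parameter name -> allowed pattern)
    private static final Map<String, Map<String, Pattern>> RULES = Map.of(
        "/account/view", Map.of("acctId", Pattern.compile("\\d{1,10}")),   // numeric IDs only
        "/search",       Map.of("q", Pattern.compile("[\\w \\-]{1,64}"))); // no quotes, tags, etc.

    /** Returns true only when the URL is known and every parameter is valid. */
    public static boolean accept(String url, Map<String, String> params) {
        Map<String, Pattern> allowed = RULES.get(url);
        if (allowed == null) {
            return false;                                  // unknown URL: reject by default
        }
        for (Map.Entry<String, String> p : params.entrySet()) {
            Pattern pattern = allowed.get(p.getKey());
            if (pattern == null || !pattern.matcher(p.getValue()).matches()) {
                return false;                              // unexpected or malformed parameter
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(accept("/account/view", Map.of("acctId", "12345")));    // true
        System.out.println(accept("/account/view", Map.of("acctId", "1 OR 1=1"))); // false
    }
}

Because the model describes what is allowed rather than what is forbidden, previously unseen attacks fall outside the rules and are blocked by default, which is why no signature file is needed.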

Sophisticated WAF technology analyzes Open System Interconnection (OSI) Layer 7 traffic, specializing in the protection of Web application business logic. The security provided by WAF is different and functionally differentiated from network firewalls, encryption, and Intrusion Prevention/Detection Systems (IPS/IDSs), which have no effect on deceptive Internet Web traffic designed to penetrate weaknesses in Web application code. Web application security involves both proactive security, where traffic is checked against a set of rules (e.g., Website policy), and reactive security, where traffic is checked against a list of collected signatures of known security threats.

WAF technology typically sits behind ports 80 (HTTP) and 443 (HTTPS) of the network firewall and monitors for deceptive or invalid Internet traffic that might enter a Website along with valid user traffic. The existence of a WAF is unknown to the Internet user. The deceptive traffic would attempt to identify and exploit Web application code vulnerabilities to penetrate databases and bank accounts and effect financial theft, identity theft (via credit card information and Social Security numbers), or corrupt Website procedures and information through various known and unknown techniques. These techniques take advantage of the stateless nature of the Web protocol (HTTP). The targeted business information is stored on various servers (database servers, transaction servers, Web servers, etc.) behind the Website in the computer network. These fraudulent and criminal techniques are known as Web application attacks and can include the known methods shown in Figure 2.

Over the last five years, many noteworthy and significant Web application breaches have made headlines. Professional hackers have penetrated financial institutions, retail organizations, institutes of higher education, and state government transaction Websites,


Figure 1: System z DMZ WAF Configuration

Attack Method | Result
URL Parameter Tampering/Hidden Field Manipulation | Gain access via impersonation
Forceful Browsing | Bypass intended application flow
Cookie Poisoning | Gain access and manipulate pricing, quantities, stocking units, etc.
SQL Injection | Steal information from databases
Cross-site Scripting/Cross-site Request Forgery | Steal customers via rerouting
Buffer Overruns/Stealth Commanding | Take control of a Web server and other network computers

Figure 2: Known Web application attack vectors
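As a side note (not part of the article), the SQL Injection entry in Figure 2 is easy to see in code. The hypothetical JDBC sketch below contrasts a query built by string concatenation, which lets input such as ' OR '1'='1 rewrite the statement, with a parameterized query that keeps the input as data; the CUSTOMER table and its columns are invented for illustration.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

/**
 * Contrast of the SQL injection vector from Figure 2 with a safer alternative.
 * The CUSTOMER table and its columns are hypothetical.
 */
public class InjectionExample {

    // VULNERABLE: the request parameter is concatenated into the SQL text, so
    // input such as  ' OR '1'='1  changes the statement and dumps every row.
    static ResultSet findCustomerUnsafe(Connection conn, String custId) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
            "SELECT NAME, CARD_NO FROM CUSTOMER WHERE CUST_ID = '" + custId + "'");
    }

    // SAFER: a parameter marker keeps the input as data, never as SQL syntax.
    static ResultSet findCustomerSafe(Connection conn, String custId) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(
            "SELECT NAME, CARD_NO FROM CUSTOMER WHERE CUST_ID = ?");
        stmt.setString(1, custId);
        return stmt.executeQuery();
    }
}

A WAF adds an independent layer of protection in front of application code like this, which matters when the code itself can't be reviewed or fixed quickly.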

Page 11


It’s time to evolve our data center. But who has the right vision?

THOUGHTS ON THE EVOLUTION OF THE DATA CENTER

© 2008 Brocade Communications Systems, Inc. All rights reserved. Brocade is a registered trademark, and the B-wing symbol and DCX are trademarks of Brocade Communications Systems, Inc.

INTRODUCING THE BROCADE DCX, A PERFECT COMPLEMENT TO IBM SYSTEM Z10. Leveraging 25 years of data center experience, the new Brocade® DCX™ Backbone helps maximize the value of your IBM System z investments. This powerful combination provides a strategic foundation for innovative data center services today—and for years to come. And now you can use Brocade Accelerator for FICON to meet your long-distance business continuity and global data mobility objectives. To learn more, get your DCX System z Whitepaper at www.brocade.com/systemz


Page 12

to name just a few groups. These targeted groups take advantage of the convenience of credit card payment or use Social Security numbers for citizen and student identification or collect other Personally Identifiable Information (PII). As a result, the Payment Card Industry (PCI) in September 2006 formed the Security Standards Council (see www.pcisecuritystandards.org), which was tasked with updating its Data Security Standard (DSS). The updated DSS comprises 12 general requirements designed to address the following six principles:

• Build and maintain a secure network
• Protect cardholder data
• Maintain a vulnerability management program
• Implement strong access control measures
• Regularly monitor and test networks
• Maintain an information security policy.

In addition to other enhancements, Version 1.1 of this standard now includes Web application attacks.


PROTECTION AFFORDED:
☐ WAF designed around Open Web Application Security Project (OWASP) guidelines: Top Web Application Attack Methods (PCI DSS Section 6.5)
☐ Positive security model; no signature files needed
☐ Support for target deployment platform, such as IBM System z certification
☐ Rapid deployment; hours vs. days or weeks
☐ Advanced JavaScript interpretation with capability to support Document Object Model (DOM) Level 1
☐ Protects SOA custom code and legacy code vulnerabilities (no patches).

COMPLIANCE DELIVERED:
☐ Full compliance with PCI Data Security Standard (V1.1)
☐ Regulatory compliance that exceeds government guidelines for GLBA, HIPAA, and SOX.

MAINTENANCE/COST OF OPERATION:
☐ Fully automatic mapping of business logic using site Intended Use Guidelines, including installation and configuration; no special training required
☐ Automatic policy updates; no reconfiguration required as site content changes
☐ Passive mode operation; permits customer Web assets to be used for attack identification; quickly switches to active mode
☐ Advanced reporting
☐ Small footprint; minimal effect on throughput
☐ Low cost of ownership.

Figure 3: WAF Checklist

Staying one step ahead of the malicious user continues to be a challenge for IT security professionals.

Page 13

PCI DSS Version 1.1 specifically identifies a concern for the secure coding of Web applications in section 6.5 (as part of Requirement 6: Develop and maintain secure systems and applications), which addresses the principle of maintaining a vulnerability management program. The safe handling of sensitive information is paramount. Ensuring operating systems and applications are at the latest patch levels is important, but validating that Web applications have been correctly coded and are error-free can be a daunting task. Section 6.5 specifically points out potential attack vectors, some of which are identified in Figure 2. Given the number of known attacks and the vast potential for new attacks and application coding vulnerabilities, protecting these Web applications is critical. In addition to code reviews for custom applications, section 6.6 specifically identifies the use of an application layer firewall, installed in front of Web-facing applications, as a means to protect sensitive information and systems. Currently, deploying a WAF is considered a best practice according to the PCI standard. It will soon become one of the possible requirements for protecting Web-facing applications. This standard can be used as a framework for a robust security process that can be implemented in conjunction with a well-developed security policy to prevent, detect, and react to security incidents, including Web application incidents. (View the complete PCI DSS Version 1.1 standard at www.pcisecuritystandards.org/pdfs/pci_dss_v1-1.pdf.)

The PCI DSS has brought to light the necessity for Internet retailers and commercial suppliers, whether private or public sector, to apply due diligence to their custom Web application code or install an application layer firewall (Layer 7) by Aug. 1, 2008, to protect any exposure of customer credit card information. The latter choice is possibly the most expedient, most effective, and lowest-cost solution for complying with this portion of the PCI DSS.

The PCI DSS applies to all Web hosting environments, from Intel to IBM System z hardware. As enterprises realize the cost and environmental savings associated with their consolidation efforts, they’ll also discover the security benefits gained by deploying a traditional multi-tier distributed architecture in the System z hardware. Network flows that were previously vulnerable are now contained in an easily configurable, physically secure, auditable environment. The required tooling for both network firewalls and WAF is available in the System z hardware to protect Web applications running in this environment. This is especially important, given the consolidation of distributed architectures and growth of e-commerce on the System z. The proliferation of Linux guests in virtual partitions with z/VM or Logical Partitions (LPARs) with Processor Resource/Systems Manager (PR/SM) will continue to expand the hosting of commercial and retail Internet business Websites, given the security, efficiency, and competitive cost of hosting Websites on the System z. Network firewalls are included as part of standard Linux distributions. Both network firewalls and WAF are available from several vendors.

When considering the importance of WAF in an existing, end-to-end solution or while architecting a new Web application solution, it’s important to investigate all aspects of the solution, such as ease of auditing, regulatory compliance, and any standards that might apply to the deployment. The checklist of WAF attributes in Figure 3

can be used as a guide to help identify and evaluate potential products currently available. Internet retailers and Internet commercial suppliers who seek to deploy Web-facing applications (transactional or static Websites) as a significant part of their sales and distribution chain should consider leading network firewall and WAF products to fully comply with the PCI DSS. WAF products that meet most or all the attributes shown in Figure 3 offer excellent value and a high level of protection. Z

About the Authors

Peter Spera is a senior software engineer with IBM Corp. He is focused on security for Linux on the System z, but also is involved with other areas such as system integrity and vulnerability reporting for System z. Email: [email protected]

Richard Layer is a vice president of Marketing with webScurity Inc. (see www.webscurity.com for information on webApp.secure). He has spent more than 24 years with 3M Company, including its Data Recording Products Division before it was spun off as Imation, Inc. He has owned and developed a direct response business that used 100 percent credit card payment and pioneered the use of Internet commerce beginning in 1996. Email: [email protected]


Page 14

Taking the Secure Migration Path to IT Virtualization

By Ivan Wallis

Virtually every new technology that has significantly enhanced enterprise IT has been adopted in phases. Early IT adopters are quick to capitalize, with some reaping the advantages of a first mover position. Further into this phased approach is the mainstream, and those in this category take their time and wait for bugs to be worked out, prices to drop, and risks to be all but eliminated. IT managers at the back-end of this curve are inevitably playing catch-up. Somewhere in the middle of the adoption cycle is a tipping point—the critical point in an evolving situation that leads to a new, irreversible environmental change. The time it takes to reach that point will vary—sometimes >

Page 16

months; other times years. But when it happens, it’s significant and noticeable. Such appears to be the case with IT virtualization, which Wikipedia refers to as the abstraction of computer resources. Virtualization hides the physical characteristics of computing resources from their users. This includes making a single physical resource, such as a server, operating system, application, or storage device, appear to function as multiple virtual resources. It also can include making multiple physical resources, such as storage devices or servers, appear as a single virtual resource.

This isn’t a new concept. IT practitioners have been mapping out plans and strategies that could be described as virtualization for years. Today, virtualization is at, or fast approaching, its tipping point. Just about every major system and application vendor’s Website includes discussion of their virtualization solutions. One can hardly pick up an IT magazine or read a blog without seeing some discussion of the benefits and challenges of desktop, server, and data center virtualization.

In assessing virtualization, as with any new technology, prudent IT managers won’t jump head first into projects without doing some careful planning. This is particularly important when it comes to IT security. Too often, IT security is treated as an afterthought to deployment of virtualization technology. That can have serious consequences.

Data security should be at the forefront of any new enterprise IT virtualization initiative. IT managers must ensure they’ve explored every avenue and that their current security measures are strong and flexible enough to adjust to dramatic changes in the way users interact with critical data. They must implement a data security solution and associated policies that will protect their corporate assets in a virtualized computing environment.

The Virtual Era

As enterprises focus on reducing IT expenses without compromising IT capabilities, they’re realizing the benefits of transitioning to virtual computing environments. Here are a few strong arguments for making this shift:

• Lower overall IT costs: Cost efficiency has always been an IT priority, but doing more with less is becoming standard operating procedure as more companies face difficult global economic realities. So IT managers are exploring virtualization to help lower IT costs. Desktop virtualization lets organizations get more out of existing hardware, no matter where it’s located, and helps businesses easily manage multiple laptops or PCs.

• Reduced energy use: Energy conservation is a well-publicized issue these days, and it affects businesses as much as individuals. By incorporating virtualization technologies and replacing power-hungry PCs and servers with energy-efficient thin clients and virtual servers, enterprises can increase output while simultaneously decreasing energy consumption.

• Improved flexibility and remote access: As the mobile workforce grows, more employees are remotely accessing information from the data center. By adopting virtual environments, IT managers can monitor and maintain only a few central data locations as opposed to dozens, helping them efficiently manage multi-user access and ensure better security.

• Simplified computing model: Instead of increasing the infrastructure’s capacity by adding workstations, servers, or memory capacity, IT managers can configure a flexible, centralized environment through virtualization. Consolidating servers ultimately enables an organization to significantly simplify administrative tasks and costs and align its business goals with its IT processes.

Aligning Security and Virtualization Requirements

While virtualization can be broadly beneficial, IT security must remain a priority. Since the mainframe has traditionally been the keeper of the most sensitive corporate data assets, and because it continues to play a pivotal role in enterprise computing today, it’s essential for enterprises transitioning to a virtual environment to protect and sustain mainframe-resident financial, operational, and customer data. It’s challenging to align business requirements, IT virtualization, and IT security. Using industry-approved security protocols will alleviate some technical pains and help IT managers quickly accomplish business goals. To ensure a seamless transition to virtual environments, IT managers must remember these security commandments:

• End-to-end communications security: It’s just as important to encrypt data in transit in a virtual environment as it is in a “traditional” environment. Securing files and data transmissions from the server to all workstations, and from the workstations back to the server, provides significantly better security for all enterprise data.

• User authentication: With increased remote access to enterprise information held in the data center, it’s critical to ensure this data remains where it belongs and that only appropriate users can easily access it. When organizations implement desktop virtualization, they must take the proper steps to authenticate the host and client machines, in addition to authenticating the user through ID, password, or other means. This will prevent access from non-secure locations and make it more difficult for unauthorized users to take advantage of stolen IDs and passwords. It also enables easier tracking if an unauthorized entry occurs. (A sketch of mutually authenticated, encrypted transport appears after this list.)

• Logging capabilities: Most mainframe systems and applications have extensive logging features. However, if


Page 17

an existing mainframe system lacks logging capabilities, it’s imperative for the IT manager to obtain this before transitioning into a virtual computing system. It’s essential to meticulously record information regarding who accessed data and when. By acquiring adequate logging procedures or modifying existing applications, IT managers can ensure data is correctly maintained and organized on the mainframe if an audit should occur.

• Central management: Since many administrative tasks, such as provisioning, auditing, and maintenance, can be mundane and time-consuming, setting up automated capabilities will relieve IT managers of the overwhelming burden that comes with trying to manually handle these tasks. Incorporating technologies that let IT managers establish and maintain an enterprisewide security solution from one central location will simplify their tasks and help them identify security violations faster. In addition, centralized management provides scalability for large networks, reduces ongoing operating costs, and facilitates regulatory compliance.

• Continued compliance: Organizations making the transition to a virtual environment must still comply with government regulations pertaining to data security. Existing and emerging privacy, security, auditing, and risk management regulations and standards, such as the Sarbanes-Oxley Act (SOX), the Payment Card Industry Data Security Standard (PCI DSS), and the Federal Information Security Management Act (FISMA), are designed to help enterprises protect their data from more frequent, highly developed security threats or attacks, no matter what type of computing environment or platform they use.
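The following sketch (not from the article) illustrates the first two commandments in one place: traffic between a client workstation and a central server is encrypted in transit, and both the host and the client machine are authenticated with certificates (mutual TLS) before any user credentials flow. The file names, passwords, and URL are hypothetical assumptions, and a real deployment would still authenticate the user by ID, password, or other means.

import java.io.FileInputStream;
import java.net.URL;
import java.security.KeyStore;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

/**
 * Sketch of mutually authenticated, encrypted transport between a client
 * workstation and a central server. The client proves its identity with the
 * certificate in client.p12; it trusts only servers signed by the CA in
 * trusted-ca.jks. File names, passwords, and the URL are hypothetical.
 */
public class MutualTlsClient {

    public static void main(String[] args) throws Exception {
        // Client machine credential, presented to the server during the TLS handshake
        KeyStore clientKeys = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("client.p12")) {
            clientKeys.load(in, "changeit".toCharArray());
        }
        KeyManagerFactory kmf =
            KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(clientKeys, "changeit".toCharArray());

        // Trust anchors used to authenticate the host being connected to
        KeyStore trusted = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("trusted-ca.jks")) {
            trusted.load(in, "changeit".toCharArray());
        }
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trusted);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

        // All application traffic on this connection is encrypted in transit
        HttpsURLConnection conn = (HttpsURLConnection)
            new URL("https://vdi-gateway.example.com/session").openConnection();
        conn.setSSLSocketFactory(ctx.getSocketFactory());
        System.out.println("Server responded: " + conn.getResponseCode());
    }
}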

Look Before Leaping

For efficiency-minded organizations, IT virtualization is an increasingly viable solution. It can deliver dramatic improvements, including a simplified computing model, reduced energy consumption, increased flexibility, and lower IT costs. However, migrating to a virtualized data center can be a complicated, time-consuming process, particularly for heterogeneous enterprise IT environments with mainframe and client/server systems running scores of complex applications. The virtualized environment, like other system architectures, faces a host of new security threats. Before pursuing virtualization, every enterprise must take time to outline a parallel data security migration path.

The last thing any IT manager can afford is to be caught off-guard. During the transition to a virtualized computing environment, IT managers should carefully weigh the benefits and impending security threats before committing to IT virtualization. Without ensuring that company, client, and customer data will be secured at all times in their new virtualized IT model, enterprises are setting themselves up for a potentially catastrophic breach. Z

About the Author

Ivan Wallis is a senior engineer for SSH Communications Security who provides customer training on SSH’s Tectia solution. He has extensive knowledge of X.509 certificates and Public Key Infrastructure (PKI). He holds a Bachelor of Computer Science degree from Carleton University in Ottawa, Canada. Before joining SSH, he worked on software and security toolkit integration for Entrust in Canada.
Email: [email protected]
Website: www.ssh.com



Page 18


Compliance Options
Gwen Thomas

Compliance Conversations

Let’s start with a set of assumptions: You know more about your job and your environment than most other people do. You know what you do, and why, and why you’ve chosen not to do it differently. You understand some impacts of your choices that others would probably miss.

If these assumptions are correct, what about when conditions change—such as with the introduction of new compliance requirements? Surely you’re in a better position than most to understand the impacts of the compliance options open to you. And so, this month’s column is about understanding those impacts. It’s about treating the introduction of a compliance requirement as a conversation, in which you first hear a requirement and then ask questions that help illuminate background and details. Done right, such a conversation can help you choose the compliance options that are right for you—and then explain the impacts of your choices. Following are three questions you might pose during such a conversation:

1. Are we supposed to: a) Stop doing something b) Start doing something new c) Do something differently?

You’ll learn a lot from the answer to this question. If you get a simple, quick answer, then chances are the requirements-setters understand your work (or think they do). If not, there’s a chance they don’t understand your efforts, or they don’t really understand the requirements themselves, or it’s a complicated requirement. You’ll probably want to find out which condition is true and react accordingly.

2. Why are we being required to do this?

You may not have to ask this question if your specific requirement is presented as being part of the X project, which is designed to ensure compliance with the Y section of the Z law, regulation, or directive. But if your specific compliance requirement isn’t presented this way, you need to uncover the rationale behind what’s being requested of you. You need this traceability, so you can place your efforts within the proper context. You also need it in case you want to explore your compliance options with peers across the industry.

3. Is our requirement designed to: a) Increase the availability of information, b) Restrict access to information, c) Monitor the flow of information, or d) Assess the accuracy of information?

This is a very simple question, and the answer also should be simple. Maybe the answer will be “none of the above.” But if it’s any of the four reasons given here, chances are this one requirement could have far-reaching impacts within your area. You need to understand the implications of the requirement so you can proactively manage its impact.

First, you need to know which “side of the scale” to place the requirement on. As we design our systems, our processes, and our controls, we’re constantly adjusting details so we can maintain the proper balance between flexibility and stability/control. And so, for example, if the essential purpose of the requirement is to increase business flexibility by providing access to more information to more people, then chances are you’ll have to balance this requirement by adjusting controls to detect improper usage. On the other hand, if this requirement is going to restrict access to information, then you’re going to have some unhappy business users. How will their needs be met?

If the purpose of the requirement is to support a control that monitors information flow or assesses its accuracy, you now can conclude two things: a) the requirement will be part of a larger set of efforts, and b) these controls effectively ask questions that, once answered, trigger chains of events. You’ll want to know whether your teams, systems, or processes will be affected by any of these triggers and responses, and you’ll want to determine whether you have existing controls that could achieve the goals of these new ones.

Find out the reasons behind what’s being required of you. At the very least, you’ll be better prepared to explore your own compliance options. And who knows? Maybe you’ll discover the need has already been met. Z

About the Author

Gwen Thomas is president of The Data Governance Institute and publisher of its Website at www.DataGovernance.com, of the DGI Vendor Showcase at www.DataGovernanceSoftware.com, and of SOX-online at www.sox-online.com, the Vendor-Neutral Sarbanes-Oxley Site. She has designed and implemented data governance and compliance programs for publicly traded and private companies across the U.S. and is a frequent presenter at DAMA, Institutional Investor forums, and other industry events. She is the author of the book Alpha Males and Data Disasters: The Case for Data Governance.
Email: [email protected]
Website: www.datagovernance.com

Page 19

Secure Mainframe File Transfers ... Now!

Securing Data-in-Transit

Protect or eliminate IBM z/OS FTP file transfers and data-in-transit at lower cost in just hours or days, not weeks! All without any modifications to your existing applications, scripts, or infrastructure!

Whether driven by risk remediation plans or regulatory compliance deadlines, SSH Tectia is the ideal solution for securing file transfers and data-in-transit between IBM Mainframe z/OS, AIX, other UNIX, Linux, and Windows systems.

Why trust your mission-critical data to unsupported utilities or costly solutions? SSH Tectia is a cost-effective secure file transfer solution that protects file transfers, data in transit, and TN3270 connections directly to and from native MVS datasets, all with expert professional support.

So, secure your z/OS file transfers and data-in-transit now. For more information, go to www.ssh.com.

Find out why SSH Tectia is trusted and deployed in many Fortune 500 companies, including 7 of the world’s 10 largest financial institutions and 5 of the world’s largest retailers.

Visit www.ssh.com to register for our SSH Tectia Webinar Series and come visit us at SHARE West in booth #105. See you there!


Page 20

By Simon Knights, Jim Harrison & Graham Hannington


Peering Into the IBM CICS Transaction Gateway Black Box

Page 21

You need to put more work through your IBM CICS Transaction Gateway (CICS TG) for z/OS systems, but you also need to monitor and audit them to ensure they continue to deliver the Quality of Service (QoS) your growing business requires. How can you do this? Help may be at hand with performance monitoring features in new releases of CICS TG and IBM CICS Performance Analyzer (CICS PA) for z/OS. This article describes the new statistics feature of CICS TG, discusses enhancements to the CICS PA product that provide support for CICS TG statistical data, and analyzes a real-life scenario to show how to use the new statistics to diagnose and resolve a performance bottleneck.

CICS TG and the New Statistics Infrastructure

CICS TG is a mature mainframe product that’s been available, under various guises, for more than 10 years. It’s widely used to provide remote access to mission-critical transaction processing systems from Java and Java 2 Enterprise Edition (J2EE) applications. Before CICS TG V7.0, it was difficult to access real-time information online about CICS TG performance. To address this issue, CICS TG V7.0 introduced 36 statistics and a system monitoring infrastructure that lets you retrieve online statistics by issuing a z/OS system MODIFY command against the Gateway daemon address space, or programmatically via the statistics Application Programming Interface (API). CICS TG V7.1 extended the number of statistics to more than 100. These statistics provide real-time metrics on key operations and the current status of Gateway daemon instances. This opened up the “black box” and made it possible to gain insight into the operations occurring in the CICS TG. CICS TG V7.1 also added support for writing statistics to System Management Facility (SMF) as type 111 records.

CICS PA

CICS PA is a powerful offline reporting tool that helps programmers and administrators analyze the performance of their CICS systems. CICS PA produces reports and extracts from SMF records in sequential data sets created by the SMF dump program, IFASMFDP. CICS PA interprets SMF records written by:

• IBM CICS Transaction Server (CICS TS) for z/OS (type 110 records)
• System Logger (88)
• IBM DB2 and IBM WebSphere MQ accounting (101 and 116, respectively)
• IBM Tivoli OMEGAMON XE for CICS (112).

A recent update to CICS PA V2.1, described by APAR PK53163, introduces support for CICS TG statistics (SMF type 111 records). This enables >



Page 22

CICS PA users to report on the performance of both CICS TS and CICS TG using the same SMF data source.

CICS TG and Statistic Resource Group Components

CICS TG consists of several interconnected components, with the main one being the Gateway daemon, which listens for network requests from Java and J2EE client applications and routes them onto CICS server regions. Client applications connect to the Gateway daemon using TCP/IP or Secure Sockets Layer (SSL) protocol handlers. These connections are managed by a connection manager component using a pool of threads. Connections from the Gateway daemon to the CICS server regions are allocated from a Worker Thread (WT) resource pool. For more information on the Gateway daemon and recommended architectures and topologies, see the books and whitepapers referenced at the end of this article.

These new statistics provide information about the CICS TG components and protocol handlers. They also provide data on the host system environment and connected CICS systems. These statistics are grouped into “resource groups” corresponding to the component for which they provide information. Figure 1 shows the CICS TG

components and their resource group IDs. Later, we’ll discuss several statistics in more detail in the real-life problem scenario. Details of all the available statistics can be found in the CICS TG information center.

Types of Statistical Data

When CICS TG introduced statistics, it opened the “black box” by providing a window into the operation and status of the Gateway daemon. This insight gives you the capability to perform several activities such as problem determination, capacity planning, monitoring of resource usage, and system tuning. The statistical data provided to help perform these activities consists of current status information about activities in the Gateway daemon, running totals and averages accumulated during the lifetime of the CICS TG, and the start-up configuration values. Accordingly, statistics are categorized into types as described in Figure 2.

Real-Time CICS TG Statistics Function

When an incident occurs, you can now react by analyzing statistical data obtained during the problem period. You can use a z/OS MODIFY command to immediately display CICS TG statistics using the Spool Display and Search Facility (SDSF). The command syntax lets you inquire on individual statistics and on all statistics in specific resource groups. The CICS TG information center contains details on the command syntax. CICS TG V7.1 also lets you filter the information output for statistics of a specific type. Here are some MODIFY commands:

Display all statistics in the Connection Manager (CM) and WT groups:

/F jobname,APPL=STATS,GS=CM:WT

where jobname is the name of the Gateway daemon address space.

Display all interval statistics in the CM and WT groups (Figure 3 shows the output from this command):

/F jobname,APPL=STATS,GS=WT:CM,ST=I

CICS TG Offline Statistics Recording Function

The offline statistics recording complements real-time statistics by recording all statistics values to the SMF at configurable intervals. Interval statistics are reset to their default values at the end of each interval. This makes it possible for you to measure the workload and status of the Gateway daemon at a specific time in the past.


Figure 1: CICS TG Components and Protocols in Our Real-Life Scenario and Their Associated Statistics Resource Group IDs. In this scenario, we used the IPIC protocol; we could have used EXCI.

Page 24

As we’ll see in the real-life scenario, when combined with specialized analysis tools such as CICS PA, SMF recording is an extremely powerful tool for retrospectively diagnosing problems and for tuning system performance based on real workloads.

CICS TG writes statistics to the new SMF record type 111. The statistics values are stored in a collection of fixed-length binary data structures. For details on the record format, see the CICS TG information center. You can run a sample application, which ships with the CICS TG, to convert the records into a “human-readable” text format. Alternatively, you can use CICS PA to analyze the records in detail.

You configure the CICS TG to write SMF records by specifying the parameter, STATSRECORDING, in the CICS TG configuration file and by permitting READ access to the BPX.SMF FACILITY class in RACF. You also can configure the frequency that records are cut by using parameters similar to the CICS TS system initialization parameters STATINT and STATEOD. Being able to align CICS TG and CICS TS statistics intervals using these parameters can be useful in configuring similar monitoring policies for comparison purposes. The following example parameters configure the Gateway daemon to write a record every hour and at 21:00, the end of the business day:

statsrecording=on

statint=010000

stateod=210000
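As a sketch of how these parameters might look in context in the Gateway daemon configuration file — note that the SECTION GATEWAY/ENDSECTION wrapper and the '#' comment style are assumptions about the file format rather than details given in this article:

SECTION GATEWAY
    # Cut an SMF statistics record every hour, plus an end-of-day record at 21:00
    statsrecording=on
    statint=010000
    stateod=210000
ENDSECTION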

CICS PA Support for CICS TG Statistics

CICS PA provides the following support for analyzing CICS TG statistics (see Figure 4):

• Data reduction: For long-term reporting, you can collect CICS TG statistics (together with CICS TS statistics) in a CICS PA Historical Database (HDB), so you don't need to keep large SMF data sets.

• Formatted display in ISPF: The CICS PA ISPF dialog presents a formatted display of CICS TG statistics, either from an SMF data set or a CICS PA HDB, with online help describing each statistics field. You can save the formatted displays to a text file.

• Export to spreadsheets or DB2: You can export data from CICS PA HDBs to either Comma-Separated Value (CSV) files, for use with applications such as PC-based spreadsheet software, or DB2 tables, so you can use SQL queries to create custom reports.

Working With CICS TG Statistics in CICS PA

To view a formatted representation of CICS TG statistics in a dumped SMF data set or CICS PA HDB, you specify the data set or HDB name, select a statistics interval (from a list you can filter by a combination of date, APPLID, and interval collection type), and then the CICS TG statistics resource group that you want to view. CICS PA displays the statistics (see Figure 5), with detailed descriptions available for each field.


Figure 4: CICS PA Support for CICS TG Statistics (SMF 111) Records

Statistics Type   Description
Current (C)       The present status of the Gateway daemon
Lifetime (L)      Totals accumulated since the Gateway daemon started
Interval (I) *    Equivalents of lifetime statistics, but reset at a defined time interval
Startup (S)       Configuration settings of the Gateway daemon

* Interval statistics are new in CICS TG V7.1.

Figure 2: Statistics Types

RESPONSE=lpar
 BPXM023I (userid) CTG8239I Response received from CICS Transaction Gateway
 CM - Connection manager
   CM_ITIMEOUTS=0 (Number of times connect time out reached)
   CM_IALLOCHI=150 (Peak number of allocated connection manager threads)
   CM_ICREATED=0 (Number of connection manager threads created)
   CM_IALLOC=0 (Number of times a connection manager thread was allocated)
 WT - Worker thread
   WT_ITIMEOUTS=15 (Number of times worker time out reached)
   WT_IALLOCHI=10 (Peak number of allocated worker threads)

Figure 3: CICS TG Statistics Displayed by the z/OS MODIFY Command


To collect CICS TG statistics in a CICS PA HDB, you first use the CICS PA ISPF dialog to define an HDB and select the statistics resource groups you want to collect in the HDB. Then you submit a batch job—containing Job Control Language (JCL) that CICS PA generates for you—to load data from a dumped SMF data set into the HDB. You can either use the CICS PA ISPF dialog each time you want to submit a job to load an HDB, or you can use the JCL generated by CICS PA as a starting point for your own jobs.

To export CICS TG statistics to CSV files (for use in PC-based spreadsheet software) or to DB2 tables, you need to specify the intervals that you want to export from an HDB, and the CICS TG statistics resource groups required. Then you would submit a job to perform the export. For export to DB2, you can use CICS PA to generate JCL for you that, in a single job:

• Loads SMF 111 records from an SMF data set to an HDB
• Calls the DB2 load utility, DSNUTILB, to load those records from the HDB into DB2.

CICS PA also generates JCL for you containing Data Definition Language (DDL) to create the DB2 tables.

Charting CICS TG Statistics in Excel

You can transfer the CSV files generated by CICS PA from z/OS to your PC, and then use them with spreadsheet software such as Microsoft Office Excel to create charts and perform further analysis. CICS PA SupportPac CP12, "Charting historical CICS performance data" (available from the Web at no charge), contains an Excel add-in that creates interactive charts from CSV files generated by CICS PA (see Figure 6). If your z/OS system allows job submission via File Transfer Protocol (FTP), then, with a single button click, you can use the add-in to:

• Submit a CICS PA batch job to generate a CSV file
• Transfer the CSV file to your PC
• Display a chart of the data in the CSV file.

Analysis of a Real-Life Scenario

Imagine the following scenario: Your Gateway daemons are tuned to achieve an optimum balance between expected Transactions Per Second (TPS) and storage requirements. Usually, performance meets expectations. However, users report slow response times and even time-out errors during periods of peak workload. To make things worse, the peak workloads occur at a time when no one is available to diagnose the problem online using the real-time statistics. Could the offline statistics recording and analysis functionality help you diagnose and resolve the problem?


Figure 6: Displaying CICS TG Statistics in the CICS PA ISPF Dialog

Figure 5: Charting CICS TG Statistics Using the Excel Add-In Support Supplied With the CICS PA SupportPac


Figure 7: Timeline Charts of CICS TG Statistics From a Real-Life Scenario. TPS was calculated by dividing the number of transactions (GD_IALLREQ) by the interval duration (5 × 60 seconds).


We set up our test systems to emulate a similar scenario based on the configuration shown in Figure 1. We set statistics recording on with a recording interval of five minutes. The system was tuned to handle a standard workload with periodic peaks in TPS. The CICS transaction we chose to invoke issued a delay call and took 100 ms to complete. We ran an application simulator to drive a standard workload. During our test, we varied the TPS that the simulator drove to rates both below and above the optimal TPS, causing peaks and troughs in the workload. At one point, user response times increased above the expected value and several requests timed out. At the end of the test, we offloaded the SMF records and used CICS PA to investigate the unexpected behavior.

The initial phase of our problem determination approach was to look at an overview of the recorded results to isolate:

• The start and duration of the problem
• The product responsible for the slowdown (the CICS TG, CICS TS, or z/OS).
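Before turning to the charts, it is worth noting how the TPS values plotted in Figure 7 are derived: requests handled during an interval divided by the interval length. The short sketch below shows the arithmetic; GD_IALLREQ is the interval request counter named in the figure caption, while the sample counts and the Python wrapper are purely illustrative.

INTERVAL_SECONDS = 5 * 60  # our statistics recording interval was five minutes

def tps(gd_iallreq: int, interval_seconds: int = INTERVAL_SECONDS) -> float:
    # TPS = requests handled during the interval / interval length in seconds
    return gd_iallreq / interval_seconds

# A plateau of roughly 29,700 requests per five-minute interval works out to about
# 99 TPS, well short of the roughly 250 TPS the simulator was attempting to drive.
for requests in (15_000, 29_700):
    print(f"{requests} requests in 5 minutes = {tps(requests):.1f} TPS")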

We used the Timeline Chart Excel add-in provided by CICS PA SupportPac CP12 to chart statistical data such as average response times, time-out counts, and peak thread usage indicators against time. This is an excellent method for gaining an initial overview of the problem. Figure 7 shows the juxtaposition of several charts created by this add-in of key CICS TG metrics. The chart at the top of the figure shows the TPS that the simulator attempted to drive. The chart of the CICS server average response time (CS_IAVRESP) remained constant at 0.101 seconds for the duration of the test. So the CICS server performed consistently and was not responsible for the uplift in user response time. However, the chart of the Gateway daemon average response time (GD_IAVRESP) varied with the change in TPS. Accordingly, our focus switched to the CICS TG.

Initially, the response time of the Gateway daemon matched the quick response of the CICS server. At 14:09, the Gateway daemon average response rate increased (0.125 seconds) but the increase was within tolerance, as user response times remained acceptable. At 14:29, the Gateway daemon average response rate rose sharply to nearly 0.9 seconds and remained at this level for approximately 20 minutes. This period coincided with:

• The Gateway daemon timing out transactions waiting for a free WT (WT_ITIMEOUTS)
• A peak in the number of connection managers waiting for WTs (CM_WAITING).

After 20 minutes, at 14:49, the time-outs stopped occurring and shortly afterward, the GD response time returned to an acceptable level. Interestingly, the chart of the TPS measured shows that it peaks at a little under 100 TPS, but we know the application simulator was attempting to drive around 250 TPS. This illustrates that the system isn't matching the demand from the client application. By analyzing the charts in this way, we isolated the time when a problem occurred.

Further analysis concentrated on statistics produced by CICS TG components during this timeframe. We looked at the WT group first because Figure 7 showed an increase in the number of CM threads waiting for WTs at the same time as time-outs and high response times were occurring. Figure 6 shows a CICS PA screen detailing the WT statistics at 14:34. It shows that during this interval:

• The high-water mark number of WTs allocated (WT_IALLOCHI) is 10.
• The Gateway timed out 25 times trying to allocate WTs (WT_ITIMEOUTS).
• The Gateway is configured to allow a maximum of 10 workers (WT_SMAX).

These findings point to an insufficient number of WTs to process peak workloads. To verify our diagnosis, we increased the number of WTs, restarted the Gateway daemon, and reran our tests. When the workload was increased to 250 TPS, the Gateway daemon average response time (GD_IAVRESP) stayed constant at 101 ms. The problem was resolved. At peak workload, we looked at the WT usage via this MODIFY command:

/F jobname,APPL=STATS,GS=WT

We found that 49 out of our new maximum of 100 WTs were allocated, so we knew we had spare capacity should the workload further increase.
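The diagnostic rule we applied can be stated compactly: if the peak number of allocated worker threads has reached the configured maximum and requests are timing out waiting for a worker, the thread pool is the likely bottleneck. Here is a minimal sketch of that check; the statistics field names come from the article, but the Python function and the way the values are fed to it are illustrative only.

def worker_pool_exhausted(wt_iallochi: int, wt_smax: int, wt_itimeouts: int) -> bool:
    # True when the worker-thread pool is the likely bottleneck
    return wt_iallochi >= wt_smax and wt_itimeouts > 0

# Values from the 14:34 interval in Figure 6: peak allocation 10, maximum 10, 25 time-outs.
print(worker_pool_exhausted(10, 10, 25))    # True -> increase the maximum number of WTs
# After retuning: peak allocation 49, maximum 100, no time-outs.
print(worker_pool_exhausted(49, 100, 0))    # False -> spare capacity remains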

Conclusion

This sample scenario demonstrates that the statistics capability in the latest releases of the CICS TG can provide you with the necessary information to diagnose and resolve performance problems. It also can give you a factual basis to recommend and justify which configuration changes are necessary to resolve the problem, giving a degree of confidence that the changes will work. We also demonstrated that, using recorded data, you can retrospectively resolve a problem that occurred when nobody was available to monitor the Gateway daemon operation.

Long-running systems can produce a large amount of data. We saw that by combining traditional CICS PA functions with the charting SupportPac, you could gain an overview of the available information, quickly isolate which system was responsible for the slowdown, and identify the period of time when it occurred. This speeds up the initial stages of problem diagnosis and helps you focus your attention on the portion of the data containing the relevant information. Z

References
• The CICS Transaction Gateway V7.1 Information Center: http://publib.boulder.ibm.com/infocenter/cicstg/v7r1m0/index.jsp
• "Exploring Systems Monitoring for CICS Transaction Gateway V7.1 for z/OS": www.redbooks.ibm.com/redpieces/abstracts/sg247562.html
• "Integrating WebSphere Application Server and CICS Using the CICS Transaction Gateway": www.ibm.com/software/htp/cics/ctg/library/#wpapers
• CICS PA SupportPac CP12: Charting Historical CICS Performance Data: www.ibm.com/support/docview.wss?uid=swg24011321.

Acknowledgements
The authors wish to acknowledge Adham Sidhom, Andy Wright, Chris Baker, Colin Westlake, Phil Wakelin, Rob Jones, Richard Davis, and Steve Burghard for their help reviewing the article and providing advice on its technical content.

About the Authors
Simon Knights is an IBM software developer. He has worked extensively on the development of the CICS Transaction Gateway on z/OS. Email: [email protected]
Harrison is an IBM software support specialist with more than 27 years supporting and maintaining CICS products. Email: [email protected]
Hannington works for Fundi Software in Perth, Western Australia. He's the developer of the CICS PA SupportPac CP12. Email: [email protected]


By Denny Yost

Verastream Host Integrator 6.6: New Release Further Aids Legacy Participation in SOAs

Today's IT organizations need maximum agility to meet the dynamic needs of the business. To achieve maximum agility in a cost-effective manner, IT organizations are implementing Service-Oriented Architecture (SOA). By implementing SOA, organizations can break up large legacy applications into more flexible and reusable application components. However, most legacy applications are mission-critical to the business and contain many complex business processes. Integrating these applications into an SOA environment has typically required significant specialized skills and knowledge. Verastream Host Integrator 6.6 from Attachmate removes many of these challenges and introduces several new enhancements to simplify the process.

The Verastream Solution

Verastream Host Integrator quickly delivers Web- or service-enablement of mainframe information using a graphical tool with point-and-click simplicity. Verastream transforms legacy applications into SOA assets by exposing business processes as Web services, XML, Java, or .NET components that can be mixed, matched, and reused to build composite applications. By enabling mainframe-based legacy applications to participate in today's SOAs, Verastream Host Integrator helps IT organizations extend legacy functionality to the Web, portals, Customer Relationship Management (CRM), mobile, contact centers, or Web self-service solutions. Since Verastream Host Integrator permits the reuse of existing development skills, familiar IT tools, and proven mainframe investments, the product delivers rapid results. Whether your environment is IBM System z (S/390), IBM iSeries (AS/400), UNIX, OpenVMS, or HP e3000, Verastream can give your users a new look and feel, without disturbing mainframe application code or associated business processes.

Verastream Host Integrator 6.6

The newest release of Verastream Host Integrator—release 6.6—includes several new additions, further enhancing the product's ability to simplify and speed the participation of legacy applications in SOAs. These include:

• Enhanced Model Import to permit multiple developers to work on a single project or within a single model and the ability to import needed portions of one project into other projects
• Windows 2008 support
• Native .NET Client support making the use of Verastream Host Integrator transparent with respect to 32- and 64-bit operating systems and future client applications that also are backward-compatible with previous code
• FIPS-validated crypto libraries that are recorded on the National Institute of Standards and Technology Website
• WS-I-compliant Web services to accommodate the ever-newer standards and capabilities of the Web services specification.

Bottom Line

Verastream Host Integrator 6.6 eliminates most of the technical complexity and invasive code surgery associated with typical integration paradigms. Within minutes, organizations can extend the scope of their integration capabilities into the most difficult legacy environments. With its support for standards and a broad array of legacy systems, Verastream Host Integrator 6.6 can play the perfect host for SOA projects. Z

Verastream Host Integrator 6.6 is available from Attachmate Corp., 1500 Dexter Ave. North, Seattle, WA 98109. Voice: 800-872-2829; Website: www.attachmate.com. For a Verastream Host Integrator demo, go to www.attachmate.com/info/verastream-demo/.

Verastream's design tools capture business processes and workflow that are quickly published as reusable services or applications.


David Boyes, Ph.D.

Linux on System z

Poetic License

It's seldom wise to write anything important when you have a cold, but the last few days in bed have given me a lot of time to think about several things.

The discussion in the last two columns on the nature of software deployment in virtual machines appears to have generated a fair amount of discussion with a number of ISVs. Contacts from individuals at IBM and other vendors have produced a lot of interesting dialogue about why they don't currently pursue this approach. Boiled down to an essence, the big issues revolve around licensing issues and how to do license management. Most of the major ISVs are revenue-bound into existing—and often complex—licensing arrangements that reflect individual negotiation and compromise between the ISV and the customer. These agreements often include special provisions and configuration options that address some particular need, and naturally, some benefit the ISV to the tune of a fair number of shekels in the pot that feeds development and enhancement of the product. An appliance-based model makes this type of one-off agreement difficult to construct, as the pieces are subsumed into a more monolithic whole that can be made generic for multiple customers.

On the surface, this seems like a winning solution for both ISVs and customers—customers get a fully tested, ready to go product, and ISVs always know exactly what the configuration of the product is. The complexity appears when you try to manage licensing for per-seat or per-user licenses. This gets into a lot of issues about how to manage the license data between upgrades and also—coincidentally—generally leads to a decrease in revenue to the ISV as customers gain better control and visibility over the use and distribution of licenses across the enterprise. To do this, you need to engineer the product to keep the license data separate from the license enforcement data, and provide a well-defined way to see how licenses are being used. Transparency at this level lets the consumer interpret the data in a more convenient way, and lets the consumer make more efficient decisions about usage.

Now, the hook for Linux here is with the increasing number of commercial products using a Linux base; the number and type of license managers also is proliferating. IBM has one, CA has a different approach, and other vendors such as Sun and HP have yet different approaches that (IMHO) don't make things any easier than we had before. This directly impacts the manageability of the system by dramatically increasing the number of places to look for product authorization and entitlement data, and also makes the auditing process for compliance more difficult by having no consistent place to examine compliance. Linux may be about open source, but the real world needs to have a place to exist and grow the commercial base of tools that can't fit into that model due to the amount of auxiliary knowledge necessary to operate and maintain a complex enterprise configuration.

So, what I'm putting forward to the community is that just as we have a centralized database for package information (maintained by RPM or APT, if you're lucky enough to use Debian), we need a common license management service that can register package licenses and provide one source for registering the licenses that packages use. We need a common way to express different kinds of metering and entitlement. DEC had this for VMS (the LMF tool), and it made life a lot easier. We need to think along the lines of engineered services for this kind of stuff; the next frontier is manageability and measurability, and we're fast approaching the need for maturity in this area.

On a different note, I can't let Bill Gates' passing the torch at Microsoft pass without a thanks. Bill, you've been the one thing any counterculture movement needs—a face to demonize. On the positive side, you've also been a powerful advocate for personal computing and we wouldn't be having this discussion without you. So, thanks, Bill, we'll see if Ballmer can take it. They're pretty big shoes to fill.

Last-minute addition: The SLES 10 SP2 starter system appliance is available from Novell. Check it out! Z

About the Author
Dr. David Boyes is CTO and president of Sine Nomine Associates. He has participated in operating systems and networking research for more than 20 years, working on design and deployment of systems and voice/data networks worldwide. He has designed scenarios and economic models for systems deployment on many platforms, and is currently involved in design and worldwide deployment of scalable system infrastructure for an extensive set of global customers. Email: [email protected]


A New Paradigm in Capacity Management
By Neil Blagrave

Capacity management capability is about to accelerate, giving you more than you might be expecting—and sooner.

Technologies such as federation, Service-Oriented Architecture (SOA), Web services, and Web portals are maturing. Best practices are steadily advancing and more companies are paying attention to them. These forces are about to converge to give capacity management a big boost.

A Long Evolution

Capacity management has been evolving since the beginning of the mainframe, with capacity management databases or performance management databases in one form or another. From the glass house to today's highly complex environments, the mainframe ran the majority of business transactions and hosted most business data. It was and is a critical component of the IT infrastructure, albeit expensive: One estimate of cost per MIPS (including hardware, software, and administrative costs) is approximately $9,457, according to a Gartner report.




So, data center staffs have worked hard to avoid the extremes of excess provisioning, which can waste precious capital and operating funds, and insufficient provisioning, which increases the risk of violating Service Level Agreements (SLAs). The more sophisticated staffs proactively manage service levels for critical business services. The specific methods for doing this have evolved. During the last several years, many companies have wanted to align IT more closely with the business. They've paid increasing attention to best practices such as Business Service Management (BSM), a widely recognized process for gaining the greatest competitive advantage and business value from all IT assets. They've increasingly sought guidance from the IT Infrastructure Library (ITIL), on which BSM is largely based.

The Pace Accelerates

Now the pace is accelerating. ITIL has made a major advance. The Office of Government Commerce of the U.K. Treasury completed a major update in 2007, when it published five texts and a glossary. The refresh, generally referred to in the IT industry as IT Infrastructure Library v3, or ITIL v3, is an important one for capacity management.

ITIL v3 includes a service lifecycle approach to IT service management. It also emphasizes creating business value and focuses more tightly on Return on Investment (ROI), as opposed to a mere alignment of IT and the business, which v2 emphasized. ITIL says you should integrate your Configuration Management Database (CMDB) with your capacity management database. ITIL now officially calls the capacity management database a Capacity Management Information System (CMIS). The goal of this integration is to maintain the consistency and integrity of the data contained in these data stores.

The role of the capacity management database in IT service management processes continues to mature. For example, Gartner provides this prediction in a recent report: "Performance Management Databases (PMDBs) will emerge in the next five years as a second point of data-level integration. This isn't an alternative to a Configuration Management Database (CMDB), which focuses on the configuration of an IT service for change impact. Rather, it is an additional management database that focuses on managing data needed for the performance management workflow cycle, which will require federation and reconciliation." Moreover, "A PMDB must be able to provide data for the entire performance management workflow cycle—real-time analytics, historical data analysis, long-term capacity planning and performance tuning."

These aren't pipe dreams. The technology exists to build these kinds of tools now. A new paradigm is about to make capacity management a strategic resource and Gartner's prediction of five years may turn out to have been too conservative.

What's Soon Possible

In the IT industry, technology currently exists or will soon be available to build all the following ITIL-recommended capacity management capabilities:

An ITIL-compliant data store: It will relate key business metrics to IT resource performance data and enable the use of this data in capacity and performance reports. This will let you process business metrics as you would any other performance metric. The data store will contain business, service, technical, financial, and resource utilization data. It will enable you to manually or automatically input a wide variety of key business metrics.

Support of multiple sources: SOA and Web services technology will allow automatic importation and processing of performance data from various data sources.

Data integrity: Currently available federation technology can enable bi-directional linking of your CMDB with your capacity management database. This kind of linking will let you maintain the consistency and integrity of the data contained in these data stores. Federation lets you avoid the need to simulate disparate data stores. You keep the data where it belongs and access it as needed. The federation enhances BSM applications. For example, when a hardware change is planned for a component in the IT infrastructure, the change management process will have immediate access to historical performance data for that component, and the potential impact on business services of a proposed change to the IT infrastructure can then be assessed using more complete information.

Trending capabilities: Data center staffs will be able to quickly conduct multiple and iterative what-if scenarios. The system will automatically pre-populate screens with selected historical data and metrics for you to work with.



End-to-end performance monitoring and reporting: You will be able to incorporate both mainframe and distributed metrics of a business service on the same pane of glass in the same report.

Dynamic thresholds: In many data centers, the capacity management database contains a wealth of information, accumulated over many years, about good thresholds for certain metrics on mainframe-based applications. It will be possible to tie this information to monitoring; that is, specify intelligent dynamic thresholds for alarming and alerting. You will have access to any kind of report you may need. Web services technology also will enable an unlimited variety of reporting; you choose the business metrics, report format, and scope.

SOA Is the Right Technology

SOA, along with Web services, will provide the most flexibility for ITIL-compliant capacity management. For example, SOA will allow great flexibility for delivering service enhancements, which are likely to be frequent and granular. New services can be created more quickly and more easily than by embedding or updating legacy code. SOA means scalability. Keeping up with expanding needs and growth will be easier with SOA. New services that address changing business requirements can be created and added to an existing list or catalog of services and made available to consumers in a timely fashion. It will be easier to create home-grown services layered on top of core services (such as a data trending service) to suit your own requirements.

SOA also will enhance openness. Vendors can grant easy access to data and core services, while maintaining data integrity. SOA also will facilitate the use of options such as plug-ins and unique services for widespread or specific needs. Web services and portals take the openness a step further, giving companies a well-known, accepted, easy-to-use, secure technology to access whatever they need—a technology easily extended with specialty and purpose-built tools for capacity management-specific needs.

Business Benefits

Most data center managers have always understood that a capacity management process, including performance data, is essential to helping them manage their mainframe infrastructure costs, as well as proactively manage service levels for critical business services. The new paradigm of capacity management, coming soon, will let them do both more effectively. They will be able to proactively determine how much and when additional IT resources are needed. In a large data center, this intelligent rightsizing can save millions of dollars annually, making an identifiable ROI improvement.

Data center staffs can eliminate wasted resources caused by excess provisioning, saving significant capital outlays. Frequently, they will be able to defer planned upgrades, with the financial benefit of buying less expensive technology later, and saving money during the deferral period. Many will be able to avoid capital expenditures through tuning and balancing workloads across existing resources. Even with the capabilities of new z10 Enterprise Class hardware to provide some capacity expansion, organizations will still need to do effective capacity management to ensure they're driving the lowest Total Cost of Ownership (TCO) for the mainframe.

There are other kinds of savings, too, such as the inherent savings in an improved change management process as described earlier. Any BSM-enabled application, such as asset management or help desk, is enhanced by direct access to accurate performance data directly related to the components to be managed—a key tenet of ITIL v3. From an asset management perspective, you could create asset reports that include the history of an asset's utilization over time or show what business services the asset supports based on which workloads or applications in the capacity management database use that asset. This additional information provides a more complete profile of the assets you're managing. From a help desk perspective, service desk trouble ticket reports could be enhanced to include performance data to help you assess, categorize, and prioritize incidents faster.

Major savings are possible when you take a broader perspective on capacity management. You can now perform capacity planning for your mainframe with an integrated, enterprisewide view because you have access to data and relationships with that data that are accessible through the federated CMDB. This can lead to improved service levels. With the ability to perform capacity management from a holistic perspective, you can make decisions that will help produce business value. For example, what's the value of delivering service levels business owners expect? How much better positioned might a company be in its market if IT can eliminate some service delivery costs? What is the value of being the first company to take a new service to market? Or, having a new market initiative succeed because IT had the correct resources in place to support that new initiative on day one?

ITIL-compliant performance management also helps overcome the silo focus of many IT shops, whereby the CICS analyst focuses on CICS and the DB2 analyst on DB2, and so on. This problem is harder to quantify, but surely is expensive. It can be overcome by correlating transaction workloads across mainframe subsystems and enabling an end-to-end or business service perspective to performance reporting and capacity management. We've all been involved in situations where technical specialists and managers are brought together to resolve a business service performance issue. The finger-pointing that typically ensues is time-consuming, expensive, and does little to solve the problem. Correlation of transaction components enables technical specialists to more easily isolate, diagnose, and fix the technical issue that's affecting a particular business service. So, you can bring everything together across the data center. You can bring IT and the business closer together and provide the ability to assess IT resources from a business-activity perspective. You can have a robust business end-to-end view.

A Golden Opportunity

Soon, when you have in place the kinds of capabilities discussed here, you will have a golden opportunity to approach your colleagues on the business side: the business owners. You will be able to offer them some sophisticated, proactive types of analysis. At long last, you will be able to show them how you can help produce business value. You will be able to open a new conversation with your colleagues. Find out what's important to them. They have the business metrics; you know how to include them in the database and capacity management processes. Your data center will move up the maturity curve in a big jump. Capacity management will become a strategic tool at last. Z

About the Author
Neil Blagrave is a capacity management product strategist in the Mainframe Service Management business unit of BMC Software. Email: [email protected]



Making Business Sense of Your Network Traffic
By Warren Jones

It should come as no surprise that IT is in place solely to support the business. There was a time in our past when you may have been forgiven for thinking it was the other way around. To the cynical, aligning IT to the business may sound cliché, but that doesn't make it any less important. When you look at the network, understanding how it relates to the business services it supports is paramount to providing a premium network service to your business colleagues.

Pick a stack, any stack, in your system and chances are it's processing an enormous number of IP packets per second. Many network professionals take great delight in quoting high throughput numbers and espousing the efficiency of their network implementation. But take a closer look. Is each and every packet the same? Perhaps this sounds like a dumb question. Of course, they aren't the same, but usually you're treating them with equal importance and relevance from a network management perspective.

Traffic Characteristics

Repeat after me: "All network traffic is not created equal." There are many characteristics of network traffic that can be used to obtain an indication of how important each packet is to the business. Consider these characteristics:

• Application job name: Knowing the mainframe application (address space) associated with the traffic often yields an indication of the business purpose behind the traffic. For example, all traffic for a certain production CICS region may be known to be associated with front-office bank processing.

• Local port: Knowing the local port (i.e., the mainframe stack port) associated with the traffic, especially with well-known ports, also will provide a clear indication of the business purpose behind the traffic. For example, all traffic going to ports 20 and 21 is traffic associated with File Transfer Protocol (FTP) transfers.

• Remote network address: When you know where traffic is coming from, you often have a good indication of the processing purpose. For example, knowing the remote network address and having a sound knowledge of your network setup may help you relate the traffic to a particular geographic location, department location, business partner, or customer.

• Remote port: Knowing the remote port also can clarify the traffic's purpose.

• Protocol: Knowing the IP protocol associated with the traffic can provide relevant information. For example, is it Transmission Control Protocol (TCP), User Datagram Protocol (UDP), or Internet Control Message Protocol (ICMP)? For most strategic processing, it will likely be TCP, but UDP may be specific to certain business applications, such as those exploiting mainframe Enterprise Extender capabilities.




Benefits and Examples

Knowing the characteristics of the network traffic can yield a good indication of its business purpose. Now, extend that thinking to what you might be able to determine based on knowing two or more of these characteristics. Let's do this by considering some simple examples.

In Figure 1, you know the application names and remote network addresses. The CICS regions are production regions used for insurance policy processing. Remote addresses are all associated with a remote sub-network in New York City. You can now identify this traffic as a business application called CICS NYC.

In Figure 2, you know the application name, local ports, and remote network address. The application is FTP, and based on the ports used, it can be identified as non-secured FTP. The remote address is associated with a remote sub-network used by an important business partner. You now can identify this traffic as a business application called FTP Partner X.
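A sketch of the classification idea behind these two examples follows: map combinations of traffic characteristics to a business application name. The job-name prefixes, subnets, and ports in the rules are invented stand-ins rather than values taken from the figures.

from ipaddress import ip_address, ip_network

# Illustrative rules only: each one pairs traffic characteristics with a business application.
RULES = [
    {"name": "CICS NYC", "job_prefix": "CICSP", "remote_net": ip_network("10.1.0.0/16")},
    {"name": "FTP Partner X", "job_prefix": "FTP", "remote_net": ip_network("192.0.2.0/24"),
     "local_ports": {20, 21}},
]

def classify(jobname: str, local_port: int, remote_addr: str) -> str:
    # Return the business application this traffic belongs to, or "Unclassified".
    for rule in RULES:
        if (jobname.startswith(rule["job_prefix"])
                and ip_address(remote_addr) in rule["remote_net"]
                and ("local_ports" not in rule or local_port in rule["local_ports"])):
            return rule["name"]
    return "Unclassified"

print(classify("CICSPA01", 4001, "10.1.42.7"))   # CICS NYC
print(classify("FTP1", 21, "192.0.2.15"))        # FTP Partner X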

Now, let's assume you can measure network performance in this way. So, where's the meat?


Figure 1: Business Application CICS New York City



It's nice to know what business applications or services are generating IP network traffic, but how can you use this information to better manage your network and IP infrastructure? Consider what you can do given a better understanding of network utilization:

Real-time performance monitoring based on business application: At any point, you would be able to understand the activity level the business application is generating. In particular, you could examine throughput (bytes in and out, number of active connections, number of total connections over a set sample period). If you had a particularly busy application, you would be able to see at a glance whether it was generating an acceptable amount of activity. For example, you get a call from your colleagues in NYC (referring back to Figure 1) saying they're experiencing problems with their connections to CICS regions. Under normal circumstances, you would check to see if there are active connections to CICS and that there's some level of IP activity to the stack and maybe even CICS, but have no real way of easily knowing whether this specific business service is experiencing acceptable performance. Having the granularity of data to see network traffic at a business application level lets you quickly determine whether a significant issue exists.

Let's get a bit more proactive in your management. You know this business application typically generates considerable traffic, so why don't you alert based on a significant deviation from normal? No activity, or minimal activity, for what's normally a busy business service should be immediately alerted on and investigated. Conversely, an overly high reading for something such as the number of active connections may suggest a problem, such as connections not being successfully closed. If you can determine baseline performance data, this can provide extremely meaningful values to use for your alert thresholds (a minimal sketch of this kind of check appears after this section).

Historical performance monitoring based on business application: On a weekly, monthly or ad hoc basis, you can examine a given business application's activity level (see Figure 3). You also can easily compare this to traffic from other business applications and track traffic as a percentage of overall stack activity. This can help identify trends and facilitate planning for network growth. An application may initially generate only a small amount of activity and be placed on a Logical Partition (LPAR) that's already quite active in terms of IP activity. Over time, monitoring may reveal steadily increasing activity, which may lead you to examine options for shifting workload to assure continued provision of acceptable service.


Figure 2: Business Application FTP From Partner X

Figure 3: Example of Historical Reporting Based on Business Application


Consider your FTP Partner X application. As your relationship with your business partner expands, you might find that the number and size of FTP activity being generated dramatically increases. This might not affect delivery of data from your partner, but it may be detrimental to other network services sharing the same resources. Redirecting the network traffic from the partner to another LPAR may be all that's required to improve network performance for all.

Historical event monitoring based on business application: For a given application, there'll be numerous events associated with it. Most will be connection-related, such as connection initiation, termination, and failure. Being able to relate these events to a business service provides a simple way to filter the events to just those you're interested in. Let's say you receive a call from Partner X complaining that their FTP transfers aren't getting through. If you can filter all FTP events with the additional information of business application, this lets you just see transfers associated with this partner and quickly determine the problem's cause.
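The alerting idea described earlier, flagging a business application whose current activity deviates sharply from its normal baseline, can be expressed very simply. In this sketch the baseline figures and the 50 percent tolerance are arbitrary illustrations, not recommended values.

# Illustrative baseline-deviation check for a per-business-application metric.
BASELINES = {"CICS NYC": 1200, "FTP Partner X": 80}   # e.g., typical active-connection counts

def deviation_alert(app: str, observed: int, tolerance: float = 0.50) -> bool:
    # Alert when observed activity is more than `tolerance` away from the baseline.
    baseline = BASELINES[app]
    return abs(observed - baseline) > tolerance * baseline

print(deviation_alert("CICS NYC", 20))        # True: suspiciously quiet for a busy application
print(deviation_alert("CICS NYC", 1150))      # False: within the normal range
print(deviation_alert("FTP Partner X", 400))  # True: unusually high, perhaps connections are not closing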

Conclusion

You can see the benefits of being able to view network performance in a more business-oriented way. The next step is to look for network management solutions that lend themselves to this paradigm. Today, most mainframe IP management products let you look at traffic associated with a local port and assist you in better understanding the business relevance of your network traffic. It's a great first step, but only a first step. Network management products can take you further by supporting a closer alignment of the network to the business. Some vendors see this need and are addressing the challenge.

To effectively manage IP network connectivity to the mainframe, you need to know how it relates to the business services it supports. Looking at network traffic as a whole will provide only limited assurance of the well-being of the network. By ensuring your network management tools can provide granular data, such that you understand the relevance of network traffic in terms of business applications, you're best positioned to align IT to the business and reward your business colleagues for their faith in IT. Z

About the Author
Warren Jones is a product manager at CA, responsible for CA's z/OS network management solutions. He has more than 20 years of IT industry experience in a variety of roles and organizations with a strong focus on mainframe management. Email: [email protected]


Pete Clark on z/VSE
By Pete Clark

Limiting Your Stored Mainframe Data Risks

Today, many z/VSE installations are using some form of internal RAID 5 SCSI architecture disk to emulate 3390 and 3380 drives for data storage. Disk emulation of 3390 and 3380 drives is a well-accepted technology in the mainframe community, as it provides a fast, reliable, inexpensive way to store mainframe data. Please note that z/OS and z/VM installations using this technology also may be affected by what is discussed here.

A simple single drive failure generally doesn't cause great concern. The single drive is replaced, the internal RAID 5 recovery capability recovers the data, and processing continues. This sounds like a great technology and a great solution, so what's the problem? What happens if two drives in the same internal RAID 5 array fail at the same time? Or, what if after the first failure another drive fails while recovery is in process? Often, users and suppliers dismiss the possibility of two drives in the same array failing at the same time as being so unlikely that it doesn't warrant any concern. Well, new factors have entered the picture that must be considered. In the past 10 years, I've personally seen simultaneous multiple drive failures in one RAID 5 array at three different sites:

Site 1: This site experienced a single drive failure and then another drive failed before recovery was completed. This caused a major data loss because the installation relied on RAID 5 and never anticipated a multiple drive failure. Needless to say, that policy immediately changed.

Site 2: This site experienced multiple drive failures within minutes of each other. Backups were used to recover data and processing continued after a several-hour delay.

Site 3: This failure occurred earlier this year; it is somewhat different from the other two and illustrates that time changes exposure and affects failures.

In this case, a large installation running multiple z/VSE LPARs had planned a maintenance shutdown for the weekend to have a CPU board and battery replaced. The expectation was a simple power down, battery replacement, diagnostic test, power up, and resume processing. No special plans or precautions were indicated or taken. Note that this CPU disk environment had been running for three to five years without interruption, with only occasional Initial Program Loads (IPLs) and power on resets. To the best knowledge of the team, at no time had electrical power to the machine been turned off. The installation had experienced one or two single disk drive failures, and the drives were replaced and recovered without incident.

The board/battery replacement and diagnostics took about an hour and a half, which was longer than anticipated. Upon start-up, one disk in one array failed to power up; it was replaced. On the second attempt, another disk in the same array failed to power up. Before it was over, three disks in the same array failed to power up and were replaced. The user lost one-third of its z/VSE volumes, 36 emulated 3390s, and the data. However, they were able to initialize, restore, reload, and re-create the 36 volumes using stand-alone initializes and restores, online restores, copies, and creative thinking. They were extremely fortunate that most of the 36 drives were utilized by their test system, not production. The test system was unavailable for approximately one week while the recovery was implemented. Loss of data was minimal and didn't impact production in any way.

Would your backups and disaster plans enable your installation to recover from this type of major failure? If not, it's time to ensure that you are protected. While RAID 5 multiple drive failures may appear to be a rare occurrence, they are not as rare as most would believe. Also, a new potential failure has been identified. For example, it appears SCSI drives that have been powered up and running for years may have problems if shut down and allowed to cool. Upon restart, the drive actuator may stick via friction, suction, or some unknown factor. We currently don't know how long they can be shut down or how cool they need to be. We suspect that users with a tested, workable disaster recovery plan would be able to recover from this type of failure with only a minor loss of data. Users that just do backups with no recovery plans must immediately put in place a plan to recover from this type of failure. This failure appears to become more prevalent as the hardware ages. Based on Site 3, the best course of action would be to do a complete backup before any power off (extended or not) is done and have in place a procedure to restore/recover from any type of disk failure.

Thanks for reading the column; see you all in the next issue. Z

About the Author
Pete Clark works for CPR Systems and has spent more than 40 years working with VSE/ESA in education, operations, programming, and technical support positions. It is his privilege to be associated with the best operating system, best users, and best support people in the computer industry. Email: [email protected]



recent months addressed some important principles, but the net result was that one of the most exciting offerings available in the entry-level market was obstructed by legal wrangling. Hopefully, the takeover announcement means IBM will be heavily investing in the PSI technology and building it into the lower end of the System z product range—all good news for both companies and for customer choice. Not such good news for the lawyers, though, as both players are now dropping their respective claims!

Around the Vendors
Mainframe software stalwart Software AG has had a busy month with various initiatives. Following a joint research project into "green" Service-Oriented Architectures with the German Hasso Plattner Institute, the two partners have developed a number of policies for the CentraSite SOA lifecycle management product, which will allow administrators to predict and manage the amount of compute power and energy needed for each service within an SOA environment. Meanwhile, SAG has announced an SQL Gateway for its Natural programming language to allow easier access to SQL databases on distributed platforms, including Oracle, DB2, Sybase, and SQL Server. For its popular legacy database system Adabas, SAG has introduced an Event Replicator to provide real-time replication of Adabas data across databases running on open systems.

CA announced Version 11.6 of OPS/MVS Event Management and Automation. This release includes a Switch Operations Facility, which allows systems administrators to visualize, monitor, and manage highly complex ESCON and FICON environments.

BMC Software released a new version of Performance Assurance for Mainframes. The product is aimed at helping users more effectively manage their software usage and costs (a subject that provides almost limitless scope for improvement in many companies), and now includes virtualization features to allow planners to model Linux workloads on the mainframe. Z

About the Author: Mark Lillycrop is CEO of Arcati Research, a U.K.-based analyst company focusing on enterprise data center management and security issues. He was formerly director of research for mainframe information provider, Xephon, and editor of the Insight IS journal for 15 years. Email: [email protected]; Website: www.arcati.com/ml.html

Recently, I noticed a young colleague engrossed in one of my z/Journal columns. Naturally a little flattered by her interest in my writing, I asked her if she had any
questions. “Only one,” she answered with a cheeky smile, “How many years ago was that photograph taken?” I confess I was a little taken aback. I stared at the small picture at the top of the column. “Well, it wasn’t that long ago,” I responded. I studied the background in an attempt to date the image. “It would be, well, er, now let me see …” Gradually, it dawned on me that the photo was at least 10 years old, a fact that was no doubt as clear as day to the questioner. She was now peering closely at me, observing the tell-tale wrinkles, the gray hair around the temples, and other signs that youth was finally giving way to distinction. I sighed and assured her I would find a more recent photo for the magazine (next issue, I promise!). Funny thing, though. When you look at the same face in the mirror day after day, you just don’t notice those subtle changes. It’s only when you stand back and compare the old image with the new that you realize how much has changed in the intervening decade. I guess it’s much the same with the mainframe. The z10 EC is quite different from its predecessors of 10 or 15 years ago. As a platform for new, Java-based applications, a focus for consolidation for distributed Linux resources, and—arguably the most important benefit of all now—the champion of “green” IT owing to its highly efficient use of power resources, the System z is a real 21st century platform. But because each successive generation of hardware has offered such a smooth upgrade path from the previous one, maybe we just haven’t noticed that impressive metamorphosis taking place. OK, there are still a few wrinkles—let’s not get too excited about legacy integration or software pricing just yet—but the z10 really is a modern platform, and is finally gaining recognition among influential players. Companies such as SAP, Oracle, and Sun Microsystems recognize that the mainframe’s unique architecture is open to a growing range of companies seeking flexible scalability beyond their current systems.

If You Can't Beat Them, Join Them!
I was pleased to see the recent announcement that IBM is acquiring Platform Solutions Inc., the Sunnyvale, CA, firm that has developed highly innovative, multi-operating system solutions for smaller mainframe users. The lawsuits and countersuits flying between the two companies in

The Familiar Face in the Mirror
By Mark Lillycrop

z/Vendor Watch


FileMarvel by CSI International
File and Data Management Solutions Help Increase Testing Speed and Quickly Resolve Production Problems
By Denny Yost

Today's Fortune 1000 organizations rely on mission-critical, mainframe-based applications to support their
core business operations every day and around the clock. Achieving a high level of application dependability requires significant and expensive man-hours spent performing quality testing and maintaining all file type data. FileMarvel by CSI International was created by programmers for programmers to provide a tool that helps speed the testing of applications and more easily manage files and the data stored within them. Based on a similar look and feel of ISPF, FileMarvel provides applications and data management professionals with an easy-to-learn, menu-driven interface to its many online functions, while also providing an extensive batch facility. A comprehensive online help facility, quick reference command displays, and online demonstrations assist in a fast learning curve for immediate use of the product.

Speeding Applications Testing and Quality
When applications are tested to determine if input and output are being processed correctly, IT staffs need real data to use. While it is possible to use tools such as "IDCAMS" to select and copy data, it is cumbersome to use and takes time to learn its unique commands. FileMarvel, on the other hand, makes selecting and copying the required data files very easy to accomplish through its intuitive menu-driven displays. IT staff can quickly test applications and identify production problems through the use of FileMarvel's extensive functionality.

FileMarvel provides IT staff with interactive, menu-driven access for online editing and viewing of VSAM, PDS, PDSE, sequential disk, and tape data sets. It is this powerful functionality combined with FileMarvel's easy-to-use, ISPF-like menus that makes maintaining any data set quick and effortless. COBOL and PL/I copybook support makes finding or following an error a straightforward task. Displays and reports can be provided in copybook format that include only the fields needed. The copybooks can reside in a PDS, CA Librarian, or CA Panvalet. Six different display modes—standard, hexadecimal, hexadecimal dump, formatted, compact, and record reformat—are provided to ease editing. In addition, displays can be laid out to match a COBOL or
PL/I copy member. FileMarvel also provides the ability to flip back and forth between the different displays.

Easing File and Data Management
Keeping application availability at its highest level requires constant file and data management. Many FileMarvel functions that help programmers test applications can be used to ease the task of conducting file and data management. For example, any type of data set—VSAM, SAM, BDAM, JCL, etc.—can be browsed, edited, copied, sorted, merged, printed, allocated, and have its layouts changed, which is useful during testing and managing data. FileMarvel can be used to allocate VSAM data sets based on model definitions, eliminating the need to remember or use IDCAMS commands. Searching an entire PDS to locate all occurrences of a specific set of
search criteria or perform a global search-and-replace operation is quick and easy. Frequently used commands can be set up and saved for recall, saving time and preventing accidental errors. Furthermore, FileMarvel can edit files larger than can fit in core.

Lowering Costs
Cost-effective access to tools that help IT staff easily, effectively, and efficiently conduct testing and manage data greatly aids in faster testing and production problem resolution. FileMarvel by CSI International is an excellent low-cost alternative to other z/OS file management products such as File-AID from Compuware and File Manager from IBM. Z

For more information, contact CSI International, 8120 State Route 138, Williamsport, OH 43164-9767. Voice: 740-420-5400; Email: [email protected]; Website: www.CSI-International.com.


Maximize Web Services Performance Using MTOM/XOP Support in CICS Transaction Server V3.2
By Darren Beard, Ph.D.

The latest version of CICS Transaction Server, Version 3.2, enhanced the support provided
for Web services in several ways. One improvement added support for Message Transmission Optimization Mechanism (MTOM) and XML Optimized Packaging (XOP). This article explains what this support means and the benefits it can have in CICS regions dealing with certain types of SOAP messages.

What's the Problem?
To appreciate the benefits of using MTOM and XOP, it's necessary to understand what problem is being addressed. SOAP messages are a particular syntax of XML messages and are the de facto standard for Web services. The contents of a SOAP message must contain only printable characters. Usually, this restriction isn't a problem, but what if it's required to send an image file or some other piece of binary data as part of the message? A binary image or other binary data will contain bytes that aren't in the printable character range of byte values. So they can't be included directly into a SOAP message. The standard way of circumventing this problem is to define in the Web
Services Description Language (WSDL) description of the message that part of the message that needs to contain the binary data as being of data type base64Binary. The binary data will need to be converted to base64Binary format for inclusion into the SOAP body part of the message and for transmission across the communications transport. The system in receipt of the message needs to convert the object from base64Binary back into the raw binary format.

Because each byte of a base64Binary encoded item will be a printable character, base64Binary solves the problem. This is possible because each 6 bits of the original binary object become a whole byte, 8 bits, when converted. By using only 6 bits in a byte, the printable nature of the resulting byte is guaranteed. It can then safely be inserted into a SOAP body without violating the SOAP specification. Figure 1 shows schematically a binary object, a picture, included in a SOAP message.

However, although this will work, there are obvious problems with this solution. One problem with the use of base64Binary is that conversion to and from this format is required. The sender of the message needs to convert from the initial binary format and
include the result into the SOAP message. The receiver then needs to perform the reverse processing to recover the original binary. This needs CPU at both ends of the transport.

Another problem is that the binary object will grow when it's converted to base64Binary. Each 6 bits of the original binary object become 8 in the converted format. This means the object to be included in the SOAP message and transmitted has grown in size by 33 percent. It's likely that binary objects are quite large anyway, being a picture or substantial chunk of binary data from a database. Making the large object 33 percent larger will clearly impact network transmission time.

Another performance problem introduced by including binary objects in the SOAP body of the message is the time required for the system receiving the message to parse it. An XML parser will scan the incoming message to locate the various headers that may be present and determine the contents of the body. If the SOAP body contains a large binary object (e.g., 1MB), the parser won't know that. It must parse through 1MB of data where it will find no XML tags. This will clearly have a large impact on the time and CPU required for parsing.


Since merely encoding the data isn't a good solution for carrying binary objects in SOAP messages, an alternative was required. That alternative is defined by the MTOM and XOP standards.

MTOM and XOP
MTOM defines a method of separating binary objects from SOAP messages and sending them separately with the whole message using MIME packets. XOP defines an extension to the SOAP syntax to designate where binary object(s) are to appear in a particular SOAP message. It's possible to use MTOM and XOP separately, but for Web services, both would generally be used. When both are used, it's usually notated as MTOM/XOP. MTOM/XOP has evolved from what used to be known as SOAP with attachments.

There's no architected way for a Web service requester to determine whether a given Web service provider supports MTOM/XOP messages. If a system doesn't understand them, an error will be returned, probably in the form of a SOAP fault. CICS as a Web service provider can be configured to respond to
Web service requesters in several ways. It can always use MTOM/XOP, never use it, or use it only if the incoming message used it. The configuration is at a CICS pipeline level of granularity and is achieved by using the pipeline configuration file. New elements can now be coded in a pipeline configuration file to specify whether to use MTOM or XOP. For more information, see the CICS Transaction Server V3.2 infocenter.

Whether XOP processing actually occurs for a given message depends on the specific message and the pipeline configuration. If the message contains fewer than 1,500 bytes of binary data, then MTOM/XOP isn't used. This is because there's a performance cost to building the MIME packets and if the message is smaller than 1,500 bytes, this cost is higher than the cost of base64Binary encoding the bytes instead. If the message is larger than 1,500 bytes, there's a benefit to using MTOM/XOP. This performance crossover point was determined during the development of CICS TS V3.2 and is implemented in the system code independently of the pipeline configuration file specification.
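To make the mechanism concrete before looking at Figure 2, here is a hedged sketch (not taken from CICS documentation) of how a SOAP body refers to a binary part that MTOM has packaged separately; the xop:Include element and its namespace come from the XOP specification, while the operation, element, and content-ID names are invented for illustration:

<soap:Body>
  <placeOrder>
    <signatureImage>
      <xop:Include xmlns:xop="http://www.w3.org/2004/08/xop/include"
                   href="cid:image1@example.org"/>
    </signatureImage>
  </placeOrder>
</soap:Body>

The binary bytes travel in a separate MIME part identified by that content ID, which is why the receiving parser no longer has to walk through the object itself.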


Figure 1: Binary Object as Part of a SOAP Message (this message will be base64Binary encoded)


Using MTOM/XOP
When MTOM/XOP is used, the SOAP message is changed from the form shown in Figure 1 to that in Figure 2. As shown in Figure 2, MTOM/XOP overcomes the problems outlined concerning base64Binary encoding of binary objects. The binary object remains in the original binary format. Neither the
requester nor provider system needs to perform data conversion on the binary object. Since the object isn't expanded by 33 percent before transmission, the network transmission time for the message is kept smaller. Also, by having the binary object separate from the SOAP body and merely referenced from within it, the message parse time is greatly reduced on the receiving system. Instead of the parser having to scan through a large chunk of the message and never finding an XML tag, it instead finds an XOP include tag, which can be quickly processed.

Let's now consider the performance benefits of using MTOM/XOP, as reflected in CPU time required to process messages containing various amounts of binary data from 1 byte to 1MB. For CICS TS V3.1, the data always had to be base64Binary encoded since MTOM/XOP isn't supported on that release. Figure 3 shows that the performance scales linearly with message size to reach about 225 milliseconds (ms) for a 1MB binary object.

With CICS TS V3.2, measurements were made both with and without MTOM/XOP. Without MTOM/XOP, performance again scales linearly but only up to about 110 ms for a 1MB message. The real benefit is realized when MTOM/XOP is used. In this case, the CPU per message is only about 12 ms for a 1MB message.

Summary
CICS TS V3.2 at the GA level of code supports MTOM/XOP over HTTP. If PTF UK33380 is applied, support also is available over IBM WebSphere MQ. The support is straightforward to configure and provides significant performance benefits if SOAP messages containing binary data are sent and received as part of a Web services interaction. Performance benefits accrue from not having to perform base64Binary conversions, reduced network transmission times, and reduced message parsing times. If Web services implemented on CICS use binary data, then CICS TS V3.2 exploiting the MTOM/XOP support is the ideal platform on which to deploy them. Z

About the Author: Dr. Darren Beard works in CICS development at IBM United Kingdom Ltd. in Hursley, UK. He has more than 18 years of experience and has been a lead developer on a number of items in CICS TS, including the Web services support. He has written several articles on IBM's software and presented at the U.S. and European technical conferences and at Nordic GUIDE. Email: [email protected]


Figure 2: Binary Object Separate From the SOAP Message

Figure 3: Performance Data


Can your critical application D/R software do this?

(Screen callouts: Cross-application dataset required; Filtering only critical datasets for Actuarial applications; Critical – first input to PAC1740M; Cross-application dataset)

OpenTech Systems introduces the DR/Xpert® GUI, the first Java-based browser interface for critical identification and recovery of mainframe application data.

• Drill down through your applications, jobs and datasets using a standard browser.
• Quickly identify critical data and the reason it is critical.
• Validate your recovery objectives using the Recovery Simulator.

Learn why some users of DR/VFI and DR Manager have switched to DR/Xpert.

1-800-460-3011 www.opentechsystems.com [email protected]

© 2007 OpenTech Systems, Inc. All rights reserved. DR/VFI is a registered trademark of 21st Century Software. DR Manager is a trademark of Softek Storage Solutions Corp. Java is a trademark or registered trademark of Sun Microsystems, Inc.


Going Back In Time
How to Leverage Data Imaging for DB2 Tables
By Susan Lawson & Dan Luksetich


You may wish you could go back and experience life at a particular point in time. While there isn't a time machine just yet, there is a way to create one for your data.

You can create a way to view “as of ” data using DB2 tables to image your data before and after changes occurred and then be able to see your data at a point in time, or see how your data has changed over time. You can do this while also being able to back out corrupted data changes without a major outage. It may take a bit of design work on both the database and application, but ultimately, you can provide instant access to “as of ” data.

Auditing Table Changes
To produce the "as of" data images there must be a place in the database where data that has changed will be stored. There are many ways to store changes to data; establishing audit tables is one of the most common. The audit table's major purpose is to record images of rows of data as they existed before or after a change to the row. An audit table has the same design as its corresponding base table except for an additional key field—typically a timestamp field to be able to store multiple versions of a row. Figure 1 shows that the AUDIT_TABLE contains the same data as the MAIN_TABLE,



but the AUDIT_TABLE contains a start timestamp and end timestamp vs. the MAIN_TABLE update timestamp. These timestamps represent times in which the data stored in the MAIN_TABLE was active. Whenever a row is updated or deleted in the MAIN_TABLE, a row is written to the AUDIT_TABLE to record the period of time in which the image of the row was active before the change was made. So, in Figure 2, the MAIN_TABLE reflects the current condition of the data, and the AUDIT_TABLE holds the data that was active for the range, using the before image of the update timestamp as the start timestamp and the after image as the end timestamp.

Deletes are typically similarly recorded; inserts aren’t typically recorded, but can be.

Automated Data Replication
Population of audit tables can be a programmatic or automated process. Typically, an automated process is preferred because there's little or no application programming involved. An automated audit process also lets you turn the process on or off without application programming changes. Synchronous or asynchronous replication are the two primary ways of moving data from the main table to the audit table:

Asynchronous replication: The most common way to capture audit information is via a log analysis product. Several available products will read log records and create either SQL statements to insert data into the audit table or load-ready records in a flat file that can be loaded via a LOAD utility. Use of these tools usually requires that DATA CAPTURE CHANGES be set on for all the tables to be audited. This feature lets DB2 capture full row images for replication in the DB2 log, as opposed to partial images if log records are written for backout and recovery only. Having an asynchronous replication process can be beneficial because the applications that update the data don't incur the overhead involved with writing directly to the audit tables, and there's no additional dependency of the audit tables on the applications that perform updates. The asynchronous process does have a drawback, however, in that replicated data isn't always immediately available.

Synchronous replication: Synchronous replication to the audit table typically involves use of DB2 triggers. A trigger is a piece of application code, written in SQL, which can be attached to a table and activated by an insert, update, or delete against the table. Triggers are a completely synchronous process that adds a code path to the statements issued against the table on which the trigger is defined. So, in replicating changes to an audit table, the trigger code would insert a before image of the data to the audit table. This would, of course, increase the cost of the application making the data changes. Figure 3 shows two triggers, defined on the previous MAIN_TABLE, that replicate before images to the corresponding AUDIT_TABLE. It's easy to set up a synchronous replication process, and doing so means the audit data is immediately available. However, having the triggers means increased application costs and dependency for the main application on the audit tables' availability. This introduces some additional risk to your main applications.
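As a minimal sketch of the set-up step for the log-based (asynchronous) approach described above, the full-row logging attribute is switched on table by table; MAIN_TABLE is the article's example table, and the statement is standard DB2 DDL rather than anything specific to a particular replication product:

-- Let DB2 write complete row images to the log for this table
ALTER TABLE MAIN_TABLE DATA CAPTURE CHANGES;

The log analysis product can then build audit-table inserts (or load-ready records) from full before images instead of the partial images DB2 logs for backout and recovery only.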

Point-in-Time Images
You can use audit tables to produce point-in-time images of your data. Maybe for legal reasons, you need to see how an account looked at a particular time. The following example, using audit tables, can provide this ability. Figure 4 shows a base account table


MAIN_TABLE                  AUDIT_TABLE
CUST_ID   INTEGER           CUST_ID   INTEGER
DATA_COL  CHAR(10)          STRT_TSP  TIMESTAMP
UPD_TSP   TIMESTAMP         DATA_COL  CHAR(10)
                            END_TSP   TIMESTAMP

Figure 1: MAIN_TABLE vs. AUDIT_TABLE
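A minimal DDL sketch matching the layouts in Figure 1 (the column names and data types are those shown in the figure; the NOT NULL clauses and the omission of any physical options are illustrative assumptions, not from the article):

CREATE TABLE MAIN_TABLE
  (CUST_ID   INTEGER    NOT NULL,
   DATA_COL  CHAR(10),
   UPD_TSP   TIMESTAMP  NOT NULL);

CREATE TABLE AUDIT_TABLE
  (CUST_ID   INTEGER    NOT NULL,
   STRT_TSP  TIMESTAMP  NOT NULL,
   DATA_COL  CHAR(10),
   END_TSP   TIMESTAMP  NOT NULL);

A unique key on (CUST_ID, STRT_TSP) in the audit table would let it hold multiple versions of the same row, as the article describes.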

If the MAIN_TABLE contained this data:

CUST_ID  DATA_COL  UPD_TSP
1        ABCD      2008-06-02-12.00.00.000000

and the row for CUST_ID 1 was updated with this statement:

UPDATE MAIN_TABLE
   SET DATA_COL = 'DCBA',
       UPD_TSP  = '2008-06-03-12.00.00.000000'
 WHERE CUST_ID = 1;

then the MAIN_TABLE and AUDIT_TABLE would contain:

MAIN_TABLE:
CUST_ID  DATA_COL  UPD_TSP
1        DCBA      2008-06-03-12.00.00.000000

AUDIT_TABLE:
CUST_ID  DATA_COL  STRT_TSP                    END_TSP
1        ABCD      2008-06-02-12.00.00.000000  2008-06-03-12.00.00.000000

Figure 2: MAIN_TABLE and AUDIT_TABLE Data

CREATE TRIGGER UPDTRG2
  AFTER UPDATE ON MAIN_TABLE
  REFERENCING OLD AS OLDROW NEW AS NEWROW
  FOR EACH ROW MODE DB2SQL
  INSERT INTO AUDIT_TABLE
  VALUES (OLDROW.CUST_ID, OLDROW.UPD_TSP, OLDROW.DATA_COL, NEWROW.UPD_TSP);

CREATE TRIGGER PPOCSSR.DELTRG1
  AFTER DELETE ON MAIN_TABLE
  REFERENCING OLD AS OLDROW
  FOR EACH ROW MODE DB2SQL
  INSERT INTO AUDIT_TABLE
  VALUES (OLDROW.CUST_ID, OLDROW.UPD_TSP, OLDROW.DATA_COL, CURRENT TIMESTAMP);

Figure 3: Two Triggers Defined


and its corresponding audit table. The base table holds current data with an update timestamp reflecting when the last update was performed, and the audit table holds the data as it looked before it was updated. It contains the update timestamp (UPDATE_TSP), which is the time the last update to the data occurred, and the entry timestamp (ENTRY_TSP), which is the time the current update or delete occurred and
the row was placed in the audit table. There’s also a delete flag to indicate that the row was deleted from the base table and now the before image resides in the audit table. For a point-in-time extract, let’s say you want to see your account balance on 12/24/02. You could use the query in Figure 5, which will look for the account information on or before that day. This query would return the following data

from your audit table:

1234 350.00 Market 1111 01/01/02-00:00:00

Range images: There's also the possibility of using these same tables to do range images. Let's say you want to see how the account looked at a particular point in time, plus any changes since then. You could use the query in Figure 6, which would return the result from


Looking for Solutions?
Searchable, focused repository makes finding solutions quick and easy!

Until now there hasn’t been a comprehensive repository of solutions to help companies fully leverage and align their mainframe-based computing environments with the business. When a market opportunity or an information technology problem surfaces at a Fortune 1000-size company, IT and business professionals are faced with scrambling in multiple directions to locate a solution.

The z/Journal Mainframe Buyer’s Guide solves this dilemma by providing one-stop, Web-based access to a wide variety of solutions specifically targeted to your needs. More than 200 solutions are currently listed with more being added each day.

Access the z/Journal Mainframe Buyer’s Guide today by visiting zJournal.com and clicking on “Mainframe Buyer’s Guide” under z/Resources.

New From z/Journal

z/Journal Launches Mainframe Buyer’s Guide.

Figure 4: An Account and Account Audit Table Design


your account audit table (see Figure 7). This capability lets you view data as it changes over time.

Application Data Corruption Recovery
Table auditing and "as of" imaging can be used to remove data that's been corrupted by errant application programs. This is referred to as logical recovery, and it lets readers of the data see an image of data before it had been corrupted. This allows for instantaneous recovery from data corruption without incurring an outage. Sound like magic? Read on.

The logical recovery process starts with an audit table design similar to the designs previously described. The key to this logical recovery process is the ability to control application recovery activation. This typically can be enabled via a DB2 table called the Logical Recovery Table (LRT), which will almost always be empty and will always be read by any application program reading the main tables. If the application ever reads the LRT and it returns data, then the application will switch into logical recovery mode and read the "as of" images. The LRT contains nothing more than a timestamp column, and will have no rows unless a recovery is needed. At that time, one
row will be inserted with a timestamp representing the desired recovery time. Figure 8 shows the application activation process.

Once logical recovery has been activated in the application, it reads a set of views rather than the main tables. Other than reading views, virtually all other application code remains the same. The views then hold the key to the recovery process, and the purpose of the views is to return the "as of" image of the data depending on the value of the timestamp in the LRT. These views look very similar to the previous "as of" examples in this article with the exception that they're also reading the LRT. Figure 9 shows an example of a "logical recovery view" that returns the "as of" image represented by the logical recovery timestamp.

With the application in logical recovery mode, a background process can use the value of the logical recovery timestamp to clean up any changes made to data after that point in time. If your applications are reading WITH UR and aren't updating, then these data corrections can be made while all readers continue reading the "as of" data images. Once the clean-up process completes, the timestamp can be deleted from the LRT, and normal processing can begin.
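The article doesn't show the LRT itself, so the following is a hedged sketch of the activation mechanics, reusing the LOGRCVRY table and RCVRY_TS column names that appear in Figure 9 (the data type and the sample timestamp are assumptions):

-- The logical recovery table; it normally contains no rows
CREATE TABLE LOGRCVRY
  (RCVRY_TS  TIMESTAMP  NOT NULL);

-- Activate logical recovery: applications that find this row
-- switch to reading the logical recovery views
INSERT INTO LOGRCVRY (RCVRY_TS)
  VALUES ('2008-06-02-12.00.00.000000');

-- After the background clean-up completes, resume normal processing
DELETE FROM LOGRCVRY;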

IBM DB2 Futures
You may see an "as of" capability built into an upcoming release of DB2. Consideration is being given to providing a snapshot query capability to see what data looked like at a particular time. While this sounds like an exciting feature, it's a ways out in the future. If you currently have requirements to provide that capability, for now it must come from creative table and code design.

Summary
While time travel is still impossible, it isn't for your data. You have the capability to see your data at a point in time, see ranges of data as it changes over time, and go back in time to see your data as it was before it became corrupted while backing out the corruption. Those are three attractive ways to take a trip to the past using a few DB2 tables and some imaginative code. Z

About the Authors: Susan Lawson and Dan Luksetich are DB2 consultants with YL&A. They have a combined experience of 40 years with DB2 and are involved with several clients worldwide to develop, deploy, and troubleshoot some of the largest, most complex databases and applications. They're also authors of the DB2 9 for z/OS DBA Certification Guide. They can be reached via the www.db2expert.com Website.


SELECT ACCT_NUM, BALANCE, UPDATE_TSP
  FROM ACCOUNT
 WHERE ACCT_NUM = 1234
   AND UPDATE_TSP <= '12/27/03-00:00:00'
UNION ALL
SELECT ACCT_NUM, BALANCE, UPDATE_TSP
  FROM ACCOUNT_AUDIT
 WHERE ACCT_NUM = 1234
   AND ENTRY_TSP >= '01/01/02-00:00:00'
   AND UPDATE_TSP <= '12/27/03-00:00:00'

Figure 6: Query to Check Range Images

SELECT ACCT_NUM, BALANCE, UPDATE_TSP
  FROM ACCOUNT
 WHERE ACCT_NUM = 1234
   AND UPDATE_TSP <= '12/24/02-00:00:00'
UNION ALL
SELECT ACCT_NUM, BALANCE, UPDATE_TSP
  FROM ACCOUNT_AUDIT
 WHERE ACCT_NUM = 1234
   AND ENTRY_TSP >= '12/24/02-00:00:00'
   AND UPDATE_TSP <= '12/24/02-00:00:00'

Figure 5: "As Of" Account Image

1234  400.00  Market  1111  12/25/03-00:00:00
1234  600.00  Market  1122  12/25/02-00:00:00
1234  350.00  Market  1111  01/01/02-00:00:00

Figure 7: A Range of Account Table Changes

CREATE VIEW RCVRY_TBL (CUST_ID, DATA_COL, UPD_TS) AS
SELECT CUST_ID, DATA_COL, UPD_TS
  FROM MAIN_TABLE BASETB
 INNER JOIN LOGRCVRY AS RCVRY1
    ON BASETB.UPD_TS <= RCVRY1.RCVRY_TS
UNION ALL
SELECT CUST_ID, DATA_COL, STRT_TS
  FROM AUDIT_TABLE AUDITB
 INNER JOIN LOGRCVRY AS RCVRY2
    ON AUDITB.STRT_TS <= RCVRY2.RCVRY_TS
   AND AUDITB.END_TS > RCVRY2.RCVRY_TS;

Figure 9: Logical Recovery View

Figure 8: Application Activation Process (flowchart: the database interface program checks for a logical recovery row; if no row is found, it reads the base tables; if a row is found, it reads the logical recovery views)


IDUG has packed a wide variety of opportunities into one day so that you can experience the best in DB2 education without interrupting your demanding schedule.

The exclusive Management Forum educational programme will include sessions on topics such as:

Data Management Business Value
Data Management Trends and Directions
The Data Center Garden
Harnessing the Power of DB2 for LUW
And more!

Register today. Visit www.IDUG.org/managers.

IDUG® 2008 – Europe

Management ForumAn Exclusive IDUG Experience13 October 2008

Hilton Warsaw

Warsaw, Poland

www.IDUG.org/managers

The International DB2 Users Group (IDUG) invites you to attend the newly developed IDUG 2008 – Europe Management Forum, designed exclusively for IT managers and executives. This Forum provides a unique experience where you can build relationships with IBM and industry leading experts, gain insights into trends within the database field, share best practices, and generate ideas with individuals who can relate to your specific line of work.

Experience IDUG


Storage & Data Management
Bruce Fisher

automatically identify important data. In a dynamic data center environment the importance of data sets can change from one day to the next and new data sets that are necessary for a successful recovery can be created any time. To keep up with the changing environment, the tool(s) you select must be capable of automatically identifying and tracking important data sets. Look for products that don't require additional third-party products; however, all the popular utilities such as DSS, FDR, CA-Disk, ABARS, etc. should be supported. Ensure the tool or process you choose is able to address the more subtle aspects of identifying important data sets, including concatenated data sets, VSAM clusters, migrated data sets, and data sets that may never be referenced but must be present for application recovery. Failure to recognize these data set components as being important can and probably will result in a failed recovery.

Look for helpful features that automatically identify important data sets that are mirrored and non-mirrored. Even though the cost to store data on DASD continues to fall, the infrastructure costs to mirror data can be prohibitive. Ensuring that only important data is mirrored works to reduce the overall cost. In addition, look for tools that avoid duplicating data that usually is a result of a backup selection process that makes new copies of data sets day after day without regard to whether the data set has changed since the last backup. Backing up unchanged data sets only once can significantly reduce the resources used by the backup process.

Auditing, or the ability to report exactly which files are required for recovery, why they're required, and why other files aren't required, is a critical component of any backup process. Be sure the tool enables you to monitor the backup and recovery processes. Last but not least, there should be a convenient method to include or exclude data sets based on individual data center policies so the entire process can be tailored to suit your unique policies and practices. Z

About the Author: Bruce Fisher is with OpenTech Systems and has more than 30 years of experience developing and marketing DASD and tape-based backup and recovery solutions for IBM mainframes and enterprise servers to ensure business continuity, data availability, archival, and regulatory compliance. Email: [email protected]

Sounds good but what isn't important? Or, a better question might be, what is important? The simple answer is, only the information necessary to achieve an
efficacious recovery. As cavalier as that answer may seem, it really is that simple. And, backing up only important data can have enormous benefits in terms of reducing costs and resource consumption, improving recoverability, and achieving a greener data center. How can you determine what is important when there are thousands or hundreds of thousands of data sets to consider? An important data set is one that must be available before application recovery can begin. Conversely, a data set that isn’t important for recovery is one that’s created during the recovery process. As a practical matter, identifying important data sets can be an impossible task when approached manually, as data centers often choose to err on the side of caution and back up everything, just to be safe. As a result, they’ve been backing up information that isn’t important to the recovery process. In all fairness, many data centers have had the necessary expertise to do a pretty good job of determining what is and what isn’t important. However, more and more data centers are losing that expertise as the relentless march of the baby boomer generation continues to reduce the mainframe skills pool. The very people who have the expertise to manually manage data classification and backup are moving on to well-deserved retirements and often they aren’t being replaced. To compensate for that loss of expertise, many data centers are choosing to back up their entire system using various remote and local replication technologies. Many of those same organizations also are choosing DASD over tape as their preferred backup media. This is fine until you consider the cost of the resources consumed. More DASD means more power is consumed, which generates more heat, which requires more cooling—all of which requires more floor space. Whereas backing up only the important data sets means you’re backing up less data, which means less equipment is needed, less power is consumed, less cooling is required, and less floor space is required to ensure recoverability and a greener data center. What should you look for when selecting backup tools or developing a process to help you determine what’s important? First and foremost is the ability to accurately and

If It’s not Important, Don’t Back It Up!


Visit the new SHARE Web site!
SHARE proudly announces the successful launch of its newly redesigned Web site, www.share.org. The site features a new look along with upgrades to enhance your user experiences.

• Discover improved navigation
• Utilize the enhanced search functionality
• Take part in the new discussion forums
• And much more

The goal of SHARE's Web site makeover is to provide enterprise technology professionals a resource for continuous industry updates, education and networking opportunities. Stop by the new site today.

Mark your calendar for SHARE in Austin, where you will gain maximum exposure to the industry’s leading technical issues in order to solve business problems, build a network of fellow enterprise IT professionals and enhance your professional development.

SHARE in Austin
March 1-6, 2009
Conference & Technology Exchange Expo
Austin Convention Center, Austin, Texas

Save the Date

Visit austin.share.org for event details as they become available.

Discover the Possibilities with SHARE


z/Data Perspectives
Craig S. Mullins

DB2 9 Data Format Stuff

Let's take a look at some of the data format "stuff" that changes with DB2 9 for z/OS. I know that stuff isn't really a very targeted technical term, but it fits nicely with what
I’ll be addressing here.

Reordered Row Format
If you've worked with DB2 for a while, especially as a DBA, you've probably heard the advice to rearrange the columns of your tables to optimize logging efficiency. Basically, the more data DB2 has to log, the more overhead your programs will incur, and performance will degrade. DB2 logs data from the first byte changed to the last byte changed, unless the row is variable. In this case, DB2 will log from the first byte changed to the end of the row, unless the change doesn't cause the length of the variable row to expand. When this happens, DB2 goes back to logging from the first byte changed to the last byte changed. So, the general advice goes something like this: Put your static columns (those that don't frequently change) at the beginning of the row, your dynamic columns (those that will more frequently change) at the end of the row, and your variable columns at the end of each.

Well, DB2 9 for z/OS takes this advice to heart (sort of). In New Function Mode (NFM), for new tablespaces, DB2 will automatically put the variable columns at the end of the row. This is called Reordered Row Format (RRF); the row format we're all familiar with today is referred to as Basic Row Format (BRF). This impacts only how the data is stored on disk—it doesn't mean your DDL is changed nor does it require changes to anything external or how you access the rows.

To summarize, a row in RRF will store the fixed-length columns first and the variable columns at the end. Pointers within the row will point to the beginning of the variable columns. So far, so good. But over time DB2 also will convert our old tablespaces to RRF. Once we're in DB2 9 NFM, a REORG or a LOAD REPLACE will cause a change from BRF to RRF. So if you run a LOAD REPLACE on a table in NFM, its tablespace will have the row format changed to RRF. REORG a partition and the row format for that partition changes. And yes, you can have a partitioned tablespace with some partitions in BRF and some in RRF.

With RRF we can be sure that DB2 is putting our variable columns at the end of the row—where they belong. But it still isn't helping us with placing static columns before the dynamic ones. You'll still have to guide DB2 to do that.
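As a quick, hypothetical illustration of that advice (this is a made-up table, not something RRF does for you), the rarely changed columns lead, the frequently updated columns follow, and the variable-length column closes the row:

CREATE TABLE CUST_PROFILE
  (CUST_ID     INTEGER    NOT NULL,   -- static: assigned once
   BIRTH_DATE  DATE,                  -- static: rarely changes
   STATUS      CHAR(1),               -- dynamic: updated often
   LAST_LOGIN  TIMESTAMP,             -- dynamic: updated often
   NOTES       VARCHAR(500));         -- variable-length column placed last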

Do You Want to Ignore Clustering?
DB2 9 also offers a new DDL parameter for your tables: APPEND. If you specify APPEND NO, which is the default, DB2 will operate as you're accustomed to it operating; that is, when rows are inserted or loaded, DB2 will attempt to sequence them based on the clustering index key. If you specify APPEND YES, however, DB2 will ignore clustering during inserts and online LOAD processing. Instead, DB2 will just append the rows at the end of the table or partition. You might want to choose this option to speed up the addition of new data. Appending data is faster because DB2 doesn't have to search for the proper place to maintain clustering. And you can always re-cluster the table by running a REORG.

The APPEND option can't be specified on LOB tables, XML tables, or tables in work files. To track the state of the APPEND option, there's a new column, APPEND, in the DB2 Catalog in SYSTABLES. Its value will be either 'Y' or 'N'.
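A hedged sketch of how the option might be used and then verified; the table is invented for illustration, while the APPEND clause and the SYSTABLES column are as described above:

CREATE TABLE ORDER_HISTORY
  (ORDER_ID  BIGINT     NOT NULL,
   ORDER_TS  TIMESTAMP  NOT NULL,
   DETAIL    VARCHAR(200))
  APPEND YES;          -- new rows are simply added at the end

SELECT NAME, APPEND
  FROM SYSIBM.SYSTABLES
 WHERE NAME = 'ORDER_HISTORY';

If clustering ever matters again for that data, the table can be re-clustered with a REORG, as the column notes.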

Other "Stuff"
These aren't the only two features that are format-related. DB2 9 introduces a universal tablespace that combines the attributes of segmented and partitioned tablespaces. With partition-by-growth universal tablespaces, DB2 automatically adds partitions as needed to support your rapidly growing data. And remember that you will no longer be able to create simple tablespaces in DB2 9 NFM.

Let's not forget the new data types. DB2 9 delivers DECFLOAT (decimal floating point), BIGINT (8-byte integer), BINARY and VARBINARY types, in addition to pure XML data types. And you can create HIDDEN columns that won't show up in a SELECT *, which isn't exactly a format issue, or is it?

Don't forget the index format improvements. DB2 9 allows you to compress indexes for the first time; and you can create indexes on expressions, instead of just on columns. Not to even mention clone tables. … You see, DB2 9 brings us a lot of good format "stuff!" Z

About the Author: Craig S. Mullins is a data management strategist with NEON Enterprise Software. He has worked as an application developer, a DBA, and an instructor with multiple database systems, including working with DB2 for z/OS since Version 1. He also is an IBM gold consultant and author of the DB2 Developer's Guide and Database Administration: Practices and Procedures. Website: www.craigsmullins.com



Aggregation of CICS Transactions With the Service Flow Feature
By Fred Stefan, Benjamin Storz, and Paul Herrmann, Ph.D.

Most applications that are fundamental to today's businesses were developed many years ago. Usually, these applications run on large transaction processing systems. Today, changing business conditions require flexibility and agile response. Companies need to extend and transform their strategic applications because their value is their reliability and Quality of Service (QoS). In their current state, they can't be transferred to a Service-Oriented Architecture (SOA) without rewriting.

Most companies don't want to redesign or re-create the business functions already present, albeit not reusable, in their IT infrastructure. Instead, companies want tools that integrate the existing functions in other applications while moving business services that are ready for integration into SOA. This article considers the example of the University of Leipzig, Germany, in accomplishing those objectives. (Note: This article reflects information from Fred Stefan's November 2007 master's thesis titled "Aggregation of CICS Transactions With the Service Flow Feature," University of Leipzig.)

Challenges of CICS
At great expense over the years, thousands of high-quality functions have been developed. These applications:

• Are inflexible and not serviceable
• Are redundantly implemented



• Merge presentation logic, business logic, and database logic and aren't clearly encapsulated
• Have limited Graphical User Interface (GUI) elements
• Can't automatically call several CICS transactions
• Can't simultaneously call a sequence of CICS transactions; instead, they must be called in a row with no automation
• Can't easily aggregate information collected by several CICS transactions.

There are three possible solutions to solve these problems: Modernize the user interface, the application architecture, or the application connectivity.

The first approach involves modernizing classic CICS applications with new technologies such as Java Server Pages (JSP). This modernization often represents the only meaningful alternative for a complete new revision.

To modernize the application architecture, Java and CICS can be used for a stepwise application modernization with reuse of existing application parts. (For further insight, see the article by A. Landenberger titled "Java & CICS—Schrittweise Modernisierung auf dem Mainframe," JavaSPEKTRUM magazine, May 2007.) Existing applications can be partly modernized while replacing components of an application (e.g., a CICS COBOL application) with Java components. To facilitate integration and improve performance, the Java logic executes directly in CICS, so no additional Java 2 Enterprise Edition (J2EE) application server is required. Familiar Java Integrated Development Environments (IDEs) can be used for application development (i.e., editing, compiling and debugging). The output files (e.g., .jars/.ears) can be easily transmitted to the mainframe via File Transfer Protocol (FTP).

In modernizing application connectivity, existing application components can be reused and new solutions can be integrated. This has significant advantages, including reduced cost and risk and faster development. Moreover, the CICS Service Flow Feature (SFF) can be used. It also enables the aggregation of legacy CICS applications to form a business service that's highly optimized for the CICS Transaction Server (TS) environment. The SFF lets you implement business services by composing a sequence of CICS application interactions. It automates the interaction with 3270 terminal-based applications and exposes a business-level service.

CICS Modernization With the SFF
The SFF has been an optional feature of CICS TS since Version 3. It enables composition of CICS applications to create CICS business services from existing CICS application components for integration into an SOA, business process collaborations, or enterprise solutions that exploit a loose coupling approach.

SFF processing encompasses build-time processing and run-time processing. SFF provides components that extend CICS TS to run generated business services and additional tools for WebSphere Developer for System z (WDz) to develop these business services. SFF consists of the Service Flow Modeler (SFM), which is integrated in the WDz studio environment, and the Service Flow Runtime (SFR), which comprises miscellaneous adapters delivered with CICS. SFF delivers:

• An integrated graphical development environment that enables the creation of CICS business services by composing a flow of CICS application interactions. (A flow is a reusable composed business function that exposes a programming interface to a service requester.)
• A generator that transforms the composed flow of CICS application interactions to form a run-time application that retains the inherent QoS provided by the existing CICS application implementation. This run-time application is also highly optimized for the CICS environment.
• A run-time component that extends the CICS TS environment. It provides adapters that use CICS interfaces to invoke the CICS terminal-oriented transactions and COMMAREA programs as required by the service flow.

During the development, the SFM insulates the developer from details of the CICS application implementation. CICS application interfaces, including COMMAREAs and 3270 transaction screens, can be imported as components into the SFM workspace. Later, these components are aggregated to produce a business service adapter that extends the value of already existing CICS application components with a business service interface. Meanwhile, the SFR adapters allow access to existing CICS transaction and application interfaces. Non-invasive


SFF allows composition of CICS applications to create CICS business services for SOA integration, business process collaborations, or enterprise solutions that exploit loose coupling.


techniques are used so the CICS application assets aggregated by SFM don't have to be modified to support the business service flow. This allows fast reuse of existing components while minimizing the risk for new development.

Installation of the SFR
The following software products must be properly installed to use the CICS SFR:

• CICS Transaction Server for z/OS V3.1 or later
• IBM Enterprise COBOL for z/OS and OS/390 V3.1 or later
• z/OS V1R4.0 Language Environment or later

and optionally:

• Web service enablement
• MQSeries for OS/390 V2.1 or WebSphere MQ for z/OS V5.2 or later.

The installation and configuration of the CICS SFR V3.1 was performed on a z/OS host from the University of Leipzig. ADCD z/OS 1.8 was installed on the host, so CICS TS V3.1 is working properly and the COBOL compiler prerequisites are fulfilled. For more information, see the CICS Service Flow Runtime User's Guide, Version 3 Release 1, Seventh edition, IBM (5655-M15), Nov. 22, 2005.

SFF Explained by the CICS Catalog Manager Sample Application
To better understand SFF, you can


Figure 1: Schema of the CICS Catalog Manager Sample Application


install and configure the CICS Catalog Manager sample application. (To learn more, see Arnold, I., Backhouse, C., and Compton, L. et al.: "Application Development for CICS Web Services," IBM Redbook SG24-7126-00, May 2006.) The CICS Catalog Manager is a demo application that's part of every CICS TS installation. It's a working COBOL application designed to illustrate the connection between CICS applications and external clients and servers. The catalog manager application accesses an order catalog that's stored in a VSAM file. This application provides the function to list details of the catalog that's stored in the VSAM file. Later, a selected item can be ordered in a specific quantity. The catalog is then updated and represents the new stock levels.

Figure 1 shows a schema of the sample application. It provides the functions to list all items in the catalog (Inquire Catalog), list details of an item in the catalog (Inquire Catalog), and select a quantity of that item to order (Place Order). The catalog is then updated to reflect the new stock levels. The following describes the steps necessary to model a business service out of already existing business functionality with the SFM and deploy this business service with the SFR.

The Service Flow Modeler
The SFM, which is integrated in WDz, is a multi-functional, Eclipse-based tool framework. RDz 7.1 is the current version of WDz 7.0 and is part of the IBM Rational Software Delivery Platform. With the SFM it's possible to:

• Model newly aggregated business services or flows using existing processes or services and their interfaces
• Acquire and record existing screens or communication area interfaces and afterward generate new SOA-conforming interfaces
• Generate adapters that support the collection of information (occurring in request and response processing) and the data transformation activities between the flow and the interface
• Expose the created business flows as a Web service.

The flows that are modeled with the SFM are used by the SFF generator to produce:

• COBOL source code that represents the modeled business service behavior
• Run-time properties that are required to support the CICS configuration
• All the JCL required for compilation
• An executable WSDL file.

The SFM is part of WDz and provides several tools and perspectives for service flow development. These are, for instance, importers, editors, and the run-time code generator. When opening a resource for editing, the default editor associated with that resource opens in the editor area of the current perspective. The included editors can be used to browse or edit resources that have been created or imported in the development environment. Figure 2 illustrates what the developer does to build the Service Flow with the WDz:

1. Import and edit existing transactions

2. Model the Service Flow and make several changes

3. Generate and deploy adapter services using build-time templates.

After importing the CICS Catalog Manager screens and passing the sequence of CICS screens, the flow automatically created in the flow modeler will look like Figure 3. The flow is now ready to be modified or can be aggregated with other CICS applications' respective flows.


Figure 2: Development Processes in WDz During Build Time

Figure 3: Automatically Created Flow After Passing Several CICS Screens




After finishing, the generators produce a run-time component that uses CICS-provided capabilities such as CICS Business Transaction Services and the Link3270 Bridge. This ensures a highly optimized implementation for the CICS environment that preserves the inherent QoS delivered by the existing CICS application implementation. Each time an adapter is generated with the SFM, special templates of the run-time server are used to meet the system requirements. Once the run-time component has been created, it's automatically sent to the host by the SFM and can be easily deployed by the developer.

The Service Flow Runtime
The CICS TS is extended with the SFR component from the CICS SFF. It provides the components and configurations required for flow sequence orchestration. This run-time component delivers adapters for access to existing CICS transaction and application interfaces, so non-invasive techniques are used. CICS application assets orchestrated by the service flow need not be modified to support the CICS business service flow, which ensures fast reuse of existing assets. The CICS SFR enables any application that's modeled with the SFM and capable of initiating a CICS program to access:

• Existing CICS transactions using a Distributed Program Link (DPL)

• CICS and IMS applications using a 3270 datastream

• WebSphere MQ-enabled applications using WebSphere MQ.

Figure 4 provides an overview of what happens during run-time processing. (For more information, see Backhouse, C., Hollingsworth, J., Hurst, S., and Pocock, M., "Architecting Access to CICS Within an SOA," IBM Redbook SG24-5466-05, October 2006.) After using the generated JCL to install the adapter services, the CICS SFR can be invoked. This automatically occurs when a service requestor passes a message header and the application data to the appropriate stub program in CICS SFR. It's necessary to define resources to CICS each time an adapter is generated with the SFM, because CICS needs to know which resources to use, what their properties are, and how they interact with other resources. A series of steps is required to deploy the adapter service to CICS. The SFM is used to define several elements and characteristics that describe how the adapter service will run in the CICS SFR environment. When an adapter is generated, a special deployment pattern is used. These deployment patterns refer to how an adapter service meets the defined processing pattern in the CICS SFR environment. When an adapter service is generated, it's necessary to manually compile it in the run-time environment as part of the deployment process. During the CICS SFR setup, customized build-time templates were generated that are required to deploy adapter services in the run-time environment. Each time a new adapter is deployed,

it’s necessary to define resources to CICS, as CICS needs to know which resources to use, what their properties are, and how they interact with other resources.

Conclusion
The SFF delivers an interesting modernization method for CICS applications because it:

• Reuses and integrates existing applications in available architectures

• Delivers more possibilities than only modernizing the user interface

• Can aggregate multiple CICS applications

• Provides open interfaces to use the business services

• Decouples presentation logic and application business logic.

A newer version of the SFF, available with CICS TS V3.2 and IBM Rational Developer for System z V7.1 since October 2007, has several advantages:

• CICS SFR 3.2 supports Rational Developer for System z Version 7.1

• Screen recognition enhancements
• Supports Web Services Description Language (WSDL) 2.0
• Reuse of the generation properties files
• Message and mapping drag and drop
• Express variable insert. Z

About the Authors
Fred Stefan examined this topic in the context of his master's thesis in cooperation with IBM. He's pursuing his doctorate at the University of Leipzig, where he examines agile, lightweight integration approaches of application systems. Email: [email protected]
Benjamin Storz is an IBM Accredited IT Specialist. He has seven years of experience in the IT field; four years were focused on System z application development and modernization. He holds a German Diploma in Computer Science and a B.Sc. (Hons) degree in Computer Science. Email: [email protected]
Dr. Paul Herrmann is responsible for the Mainframe Project at the Institute for Computer Science at the University of Leipzig. He received his doctorate in Physics in 1973. Since 2000, he has coordinated the Mainframe Project at the University of Leipzig. His main areas of research and teaching are focused on mainframe technologies and system administration technologies. Email: [email protected]


Figure 4: Overview of the Processing That Occurs During Run Time


Want to Know What DB2 Is Doing? Take a Closer Look at DB2's TRACE

By Willie Favero

If asked what the three biggest challenges are with DB2 for z/OS, the answer you might hear most often is, "Performance, performance, and performance." The most jammed sessions at conferences cover performance topics. The most popular articles are on performance, as are most questions on lists and forums. Is the DB2 for z/OS community obsessed with performance? Are there more performance issues with a mainframe database? Well, no more or less than on any other platform. Performance is just something that's always been a priority. ...

The mainframe, and DB2 for z/OS, both offer what can be a blessing and, at times, a curse: great metrics. There are records describing just about everything. How do you obtain this information? One way is with DB2 traces. Often, when someone has a DB2 for z/OS performance question, one of the best first responses is, "What traces were you running?" After all, how can you determine if something isn't doing what you expect, within the parameters you expect, if you have no idea what it's doing? That can be a real problem. You don't always have the luxury of running an application a second time in an attempt to obtain the same performance results just so you have the opportunity to turn on the appropriate traces.

Should you constantly run all the possible traces you might need? No. At some point, such a "checkup" can cause more damage than the disease. You should be aware of what traces you're running and their cost. Be careful of others who recommend running certain traces. You must always consider how much harm a trace might potentially cause. Tracing your DB2 subsystem is important and some monitoring should almost always be performed, but what's the cost of running one or more of DB2's traces? The average possible cost of the most popular traces is documented in the DB2 manuals; you can refer to:

• Chapter 26, "Improving Response Time and Throughput" in the DB2 V7 and V8 Administration Guides (SC26-9931 and SC18-7413, respectively)

• Chapter 21, "Using DB2 Trace to Monitor Performance" in the DB2 9 Performance Monitoring and Tuning Guide (SC18-9851).

But the story doesn’t end there. If you believe what’s published in the manuals, you had better make sure you pick up the correct manual for the DB2 you’re running. The numbers differ, depending on the version of DB2. Differences between the V7 and V8 manuals are minimal, and are more substantial when moving from V8 to DB2 9. Some of the changes are subtle or affect traces you shouldn’t run anyway. Consider DB2’s global trace. Using it can be extremely expensive and we’ve been incessantly warned against run-ning it. DB2 V6 and V7 state the over-head can be from 20 to 100 percent. The DB2 9 book has completely different values: 10 to 150 percent. These values seem credible. Don’t turn on this trace

unless you do so at Level 2 support’s suggestion when trying to track down a service problem. The more commonly used traces and what you should know about them is discussed in the following sections.
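Before digging into the individual trace types, it helps to be able to answer the "What traces were you running?" question. A minimal sketch, assuming the default '-' subsystem command prefix for your DB2:

-DISPLAY TRACE(*)

The output lists each active trace with its trace number (TNO), type, classes, and destination, so you know exactly what's running before you change anything.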

Statistics Trace
Leave this on. It's "on" by default in your DSNZPARM member and you would have to do something adverse to turn it off. Using the default, you get classes 1, 3, 4, 5, and 6. Class 6 is a fairly recent addition to the default list. This was the class that gave you Instrumentation Facility Component Identifier (IFCID) 225, your DBM1 address space storage map. When problems with DBM1 storage started back in V7, IFCID 225 records were needed to help determine where the storage problem existed. Initially, it was tough to get people to turn on the extra class 6 recording. Even today, some still don't see the importance of having this information available, so eventually the IFCID 225 record was just added to stats class 1. Now everyone continuously gets it. What about the other statistics classes?

• System services (IFCID 001), database statistics (IFCID 002), open page set detail (IFCID 105), buffer pool information (IFCID 202), and what system parameters were in effect at the time the trace was started (IFCID 106) are all recorded as a result of having class 1 started. IFCID 225 also is included with class 1.

• Class 3 has your lock timeout details (IFCID 196), deadlock (IFCID 172),





lock escalation (IFCID 337), group buffer pool stuff (IFCIDs 250, 261, and 262), data set extension information (IFCID 258), indications of long-running URs and readers (IFCID 313), and active log space shortages (IFCID 330).

• Class 4 provides information about exceptional conditions; for example, a -905 from a dynamic SQL statement exceeding its RLF ASUTIME (IFCID 173), and lots of stuff relating to distributed processing exceptions.

•Class 5 is necessary only if you’reusing data sharing and record details on global buffer pool statistics (IFCID 230).

For all practical purposes, you can assume that stats records are written only at the end of each statistical interval. That's why the overhead is negligible when turning on this trace. In fact, no percentages are even quoted in the product manuals. Most of the statistical counters are maintained in memory and are just written out in the SMF data record at the DB2 statistics interval. The overhead is more like noise when it comes to CPU and SMF data volume, with SMF writing out only the 100 and 102 records. This is one DSNZPARM you're better off just leaving alone. Use the defaults to ensure that subsystem-level statistics are always available.

If you're running a z9, z10, or z990, your stats interval should be quite low; it should be set to five minutes or less. It's amazing how much can happen on a z10 in just a wall clock minute. You can always lump together multiple intervals, but your stats interval is the most granularity you'll ever achieve. With a bigger number (15, 30, or more), you have huge intervals with practically no granularity. This can make it quite difficult to identify where a system problem might exist. Even at an interval as low as one minute, you'll record only a maximum of 1,440 stat intervals per day. For these reasons, I strongly recommend setting the interval to one minute, regardless of processor model. The cost of a statistics trace is still little more than noise. A smaller degree of granularity is needed to study your system's growth leading up to a system slowdown. It also will enable better virtual storage planning. If the statistics interval is too large, then high points and critical events become more difficult to identify.

Audit Trace
The audit trace has gained popularity with all the compliance and regulatory issues prevalent today. Audit traces aren't that expensive to run. Up through V8, the overhead was cited as about 5 percent. In the DB2 9 manual, that number has been bumped up to less than 10 percent, which is still a fairly low number if the trace helps you sleep at night. That's mostly for OLTP with all classes active. For utilities, batch, and some queries, the overhead would be negligible. Remember, the amount of overhead is directly dependent on the number of tables you're auditing or the frequency of the event being audited.

Starting an audit trace with no classes specified will get you only class 1, tracing all access attempts denied due to inadequate authorization (IFCID 140). For many, this is adequate and costs the least to run. Indicating you want class 2 will track all explicit GRANTs and REVOKEs (IFCID 141). Additional cost would be proportional to the number of times a GRANT or REVOKE was used. Invoking class 3 will trace all CREATE, ALTER, and DROP operations against audited tables or a table with multi-level security with row-level granularity (IFCID 142). Classes 4 and 5 will record the first attempted change to an audited object in a unit of work (IFCID 143) and the first attempted access to an audited object within a unit of work (IFCID 144), respectively. An audited object is a table that has been created or altered to include the AUDIT keyword. Specifying the audit class 6 trace will write a record for each invocation of the bind process for both static and dynamic SQL statements that involve audited objects (IFCID 145). When activating an audit class 7 trace, you will record:

• An issuance of a SET CURRENT SQLID statement (IFCID 55)

• The ending of an identify request for an IMS, CICS, Call Attach Facility (CAF), Recoverable Resource Manager Services Attachment Facility (RRSAF), utility, or Time Sharing Option (TSO) connection (IFCID 83)

• When a SIGNON by IMS, CICS, or RRSAF occurs, or for a Database Access Thread (DBAT) that might have changed the authorization ID (IFCID 87)

• A distributed authid translation (IFCID 169)

• An audit trail for security processing (IFCID 319).


IFCIDs
An Instrumentation Facility Component Identifier (IFCID) identifies an event that can be traced or monitored. Multiple events or multiple IFCIDs can be grouped together to form a trace class. An IFCID can occur in more than one trace class. One or more trace classes are defined to each of the five trace types. IFCIDs can be invoked by their ID, by specifying a class, or by simply specifying a trace type. —WF


The last class available for an audit is class 8. It records the start (IFCID 23), phase change (IFCID 24), and end (IFCID 25) of a utility.
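As a hedged illustration of picking just the classes you need from the list above, an audit trace limited to denied access attempts (class 1) and GRANT/REVOKE activity (class 2) could be started like this; the '-' command prefix and the SMF destination are assumptions about your installation, not requirements:

-START TRACE(AUDIT) CLASS(1,2) DEST(SMF)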

Accounting Trace
There are many ways to affect the cost of an accounting trace. Which trace classes you choose to run will have an impact on the cost of running that trace. The DB2 9 manual provides time estimates for classes 1, 3, 7, and 8. Class 1 is usually less than 5 percent unless the query is fetch-intensive and multi-row fetch isn't used. Class 3 is less than 1 percent, and classes 7 and 8 are less than 5 percent. It's the class 2 times that can get tricky. CPU overhead for a class 2 trace can range from 1 to 10 percent, leaning toward the high end for fetch-intensive programs. Class 2 overhead is on a per SQL request basis (entry/exit to DB2). The simpler the base SQL cost, the bigger the overhead in percentage terms. Class 3 overhead is based on a per wait event basis. Previous versions of DB2 give no numbers for classes 1, 3, 7, and 8. They talk only about class 2. They quote online activity at 2.5 percent and batch as high as 10 percent.

Skipping class 2 isn't really an option. Class 1 is the portion of the transaction time that DB2 is aware of, while class 2 is the portion of the class 1 time spent in DB2 (what's affectionately referred to as "in SQL" time). Class 3 is the portion of the class 2 time spent waiting (lock/latch contention, sync I/O, update commit, etc.). Class 7 is your "in DB2" time and class 8 is your wait time for packages, similar to the plan's class 1 and 2 times. You need them all if you're trying to troubleshoot an application performance problem.

V8 also introduced an additional class for packages. Class 10 contains package detail (IFCID 239). The detailed information for class 10 is available only when classes 7 and 8 also are active. If there are no classes 7 and 8, then there's no class 10 output. Be cautious when specifying class 10 because it can be expensive. Here are breakdowns of IFCIDs associated with the most commonly used accounting classes:

• Class 1 contains IFCIDs 3, 106, and 239.

• Class 2, IFCID 232
• Class 3, IFCIDs 6-9, 32, 33, 44, 45, 117, 118, 127, 128, 170, 171, 174, 175, 213-216, 226, 227, 242, 243, 321, 322, and 329

• Class 7, IFCIDs 232 and 240
• Class 8, IFCIDs 6-9, 32, 33, 44, 45, 117, 118, 127, 128, 170, 171, 174, 175, 213-216, 226, 227, 241-243, 321, and 322

• Class 10, IFCID 239. IFCID 239 existed prior to V8 and came into play when there were more than 10 packages. In V8, with classes 7 and/or 8 active, only sections 1 and 2 of IFCID 239 will be written. However, for class 10, data sections 3, 4, and 5 are filled out.

Note the IFCID similarities between classes 2 and 7, and classes 3 and 8.
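A hedged example of starting the plan- and package-level accounting classes discussed above (again assuming the '-' command prefix; SMF is the typical destination for accounting data):

-START TRACE(ACCTG) CLASS(1,2,3,7,8) DEST(SMF)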

Performance Trace
If you need to run a performance trace, you need to run it and you aren't going to worry about the cost; you're probably trying to solve a more pressing problem. Performance trace classes 1 (background events), 2 (subsystem events), and 3 (SQL events) can be from 5 to 100 percent overhead, depending on your DB2 subsystem activity. Fetch-intensive activity will drive the performance class 3 trace toward 100 percent, so be careful. You'll usually turn on this trace when trying to solve a specific problem. In all, there are 22 classes you can choose from when running a performance trace, in addition to three classes available for your own use. Those last three classes can come in handy.

When using a performance trace, specify only the classes, plans, authorization IDs, and IFCIDs you really need to solve your problem. This will minimize the amount of trace data being collected. DB2 9 enhances the START TRACE command by allowing greater qualification granularity using the EXCLUDE and INCLUDE keywords. You also should use the Generalized Trace Facility (GTF) as the trace destination. GTF is the default destination for a reason. Trace records are immediately available at the conclusion of the trace and there's no need to deal with SMF to see your trace results. You do need to be careful if the GTF trace data set is on disk; the trace could wrap. You also want to make sure the trace runs for as little time as possible. Start the trace just before the event you're trying to trap and stop it as soon as the event concludes. This also will help minimize the amount of trace data gathered.

All the standard traces are interesting, but running them is of no use if you

don’t examine and take advantage of the reports they can yield. Make sure you have the proper reporting tools in place to interrogate and interpret your trace results. If you’re going to run a trace, regardless of the cost, they’re all expen-sive if you don’t examine trace outputs. Again, for most DB2 traces, the cost isn’t all that great, and sometimes it’s just something you must absorb. Knowing the cost upfront may make it easier to plan when to run a trace. Running some DB2 traces can (andusually will) be critical to the success of your DB2 for z/OS database. Now that you have decided to run a few of the DB2 traces, you might benefit from the following hints, which can enhance your DB2 tracing experience.

Starting a Trace
START TRACE is a DB2 command. As such, almost everything you'll ever need to know about running a trace can be found in the DB2 Command Reference manual (V8 is SC18-7416 and DB2 9 is SC18-9844). The manual tells you how to start and stop a trace. However, starting a trace can be quite intriguing. There can be more to starting a trace than simply issuing the start command.

An appealing feature of the TRACE command is the IFCID keyword. What if you want to trace the mini bind and the SQL statement? You can start a performance class 3 trace to trace SQL events. If you do, you'll be writing out 25 different IFCIDs when you really want only IFCIDs 22 and 63. Instead, you should start a performance trace specifying class 30, which is a class DB2 doesn't use, followed by the IFCID keyword listing the specific IFCIDs you want to trace. Now, rather than writing out more than 25 IFCIDs, you're recording only two. That's a much better deal. Add in the plan ID and authid on the -START TRACE command and you'll have a nice, crisp trace output. Every trace type has a couple of IFCIDs reserved for your use. DB2 refers to them as "available for local use."
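To make that concrete, here's a minimal sketch of the command; PAYPLAN and TSOUSER1 are hypothetical placeholder names, and the '-' subsystem command prefix is an assumption about your setup:

-START TRACE(PERFM) CLASS(30) IFCID(22,63) PLAN(PAYPLAN) AUTHID(TSOUSER1) DEST(GTF)

Only the two IFCIDs the text mentions (22 for the mini bind, 63 for the SQL statement) are collected, and only for the named plan and authid.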



For all your traces, there are numerous tracing keywords you can use to reduce the amount of trace data you collect. If you're concerned about the cost of tracing, this can help you keep things under control. There are keywords you can use to limit your trace collection to specific plans, packages, collections, authids, locations, and many more; all are listed and explained in the DB2 Command Reference manual. You should take time to review what's possible with START TRACE besides just starting a trace.

DSNZPARMs
You also should take advantage of DSNZPARM keywords. The couple that affect how tracing works in DB2 are fascinating. How you should use them, and what you should specify as values for these keywords, seem to constantly change. Consider, for example, STATIME on the DSN6SYSP macro. This is the interval, in minutes, that DB2 uses to write out statistics records. Both V8 and DB2 9 have lowered the default for STATIME to five minutes, but even five minutes may be too high for some. Many customers have lowered STATIME to its lowest value, one minute. Statistics are written only at a statistics interval (STATIME). At one minute, that's only 1,440 intervals each day. I find it very handy to copy the SMF 100 and 102 records to a separate Generation Data Group (GDG). If the SMF data is needed at a later time to produce additional reports, having these records in a separate data set will facilitate faster post-processing. If you do decide to lower this value, let your z/OS systems folks know. Although it's not a tremendous amount of additional trace records, it is more. You don't want to be responsible for filling up one of the SMF MAN data sets in the middle of the day.

There's another DSNZPARM keyword to consider. The value the "UNICODE IFCIDS" entry on installation panel DSNTIPN supplies (or the UIFCIDS keyword on the DSNZPARM macro DSN6SYSP) tells DB2 whether selected character fields in the IFCID record should be written out in Unicode or left encoded the same way as in previous releases. The descriptive portion of any field in the IFCID that will be encoded in Unicode will be preceded with the string %U.

Another handy DSNZPARM keyword is SYNCVAL on the DSN6SYSP macro. If you have multiple DB2s, as in a data sharing environment, you want to ensure all the statistics records are written out simultaneously, regardless of which member they're created on. SYNCVAL tells DB2 how many minutes after the hour to start the statistics interval. If you use five on all members, they'll all write out their statistics records at five minutes after the hour at whatever interval you specified on STATIME. This will help you coordinate stat records between the multiple DB2s or any other process you're attempting to sync up with. SYNCVAL also has definite value when trying to sync up with RMF reporting intervals.

There also are installation fields and DSNZPARMs that control whether a monitor trace should automatically start at DB2 start-up (the MON keyword on the DSN6SYSP macro) and the size of the buffer for storing the records sent by the monitor trace (the MONSIZE keyword on the DSN6SYSP macro).
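For orientation, here's a minimal sketch of how these keywords appear in the DSNZPARM assembly source (as generated by the installation job DSNTIJUZ); the values shown are purely illustrative, not recommendations:

*  DSN6SYSP system parameter fragment -- values illustrative only
         DSN6SYSP STATIME=1,SYNCVAL=0,UIFCIDS=NO,MON=NO,MONSIZE=1048576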

Managing Trace Data
Trace records are usually written out to System Management Facility (SMF) data sets. You need to tell your z/OS systems folks ahead of time which SMF records you'll be collecting for DB2. DB2 uses a couple of SMF record types when gathering trace data. Accounting goes to SMF 101, audit ends up in SMF 102, and a handful of statistics IFCIDs (0001, 0002, 0202, and 0230) go to SMF 100 records. The rest of the statistics IFCIDs end up in SMF 102 records. A performance trace defaults to GTF. However, you can route it to SMF if you choose. GTF is a much better choice because you often want to get to the trace data as soon as possible.

If you're seeing elongated and unexplained shutdown times for your DB2 subsystem, you may want to check how SMF recording is set up. In the PARMLIB member SMFPRMxx, make sure DDCONS(NO) is set. Specifying NO avoids the cost of consolidating DDs at shutdown. Other parameters can affect SMF's performance, such as maintaining three or more SMF data sets, making the SMF data sets as large as possible, using large buffer sizes when dumping data sets, and using the largest blocksize available for the output data set SMF dump uses.

Another consideration about IFCIDs is that they're all defined (every single field) in a PDS member called DSNWMSGS. If you're still on V7 (now out of service), you'll find DSNWMSGS as a member of the PDS hlq.SDSNSAMP. For those running V8 and DB2 9, the PDS containing the DSNWMSGS member is called hlq.SDSNIVPD. DSNWMSGS has a description of each field in an IFCID and contains a detailed description of what the IFCID is used for and a list of trace classes that use the IFCID. An IFCID can be used by multiple different trace classes. To make it easier to access the information in the member, DSNWMSGS contains the DDL to create a table (and tablespace) where you can load all the information included in the DSNWMSGS member, the appropriate LOAD utility control cards, and a couple of sample queries for retrieving the IFCID information in a variety of orders. DSNWMSGS also includes a list of every trace class, by trace type, and which IFCIDs are included in that trace class; a complete list of the definitions of all Resource Manager Identifiers (RMIDs) used in tracing; and a list of which mapping macros (DSNDWQxx) map to which IFCID.
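Returning to the SMF setup point above, a hedged sketch of the relevant SMFPRMxx entries; the data set names and record-type list here are illustrative assumptions, not recommendations:

/* Keep the DB2 trace record types and skip DD consolidation at shutdown */
DSNAME(SYS1.MAN1,SYS1.MAN2,SYS1.MAN3)
DDCONS(NO)
SYS(TYPE(30,70:79,100:102))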

Conclusion
To wrap things up, consider a few general statements about using the TRACE command:

• If you start several traces, remember to stop them when you're done. It's true that your accounting, statistics, and probably your audit traces will run continuously, but performance traces need to be managed and run only for brief periods of time.

•If you’re starting extra traces (tracesnot included in your DB2 start-up), for whatever reason, only the traces specified in yourDSNZPARMmem-ber will automatically start when you bring up DB2. All other traces, those extra traces terminated at shutdown, will not restart when DB2 restarts.

•You should ensure you’re workingclosely with whoever is responsible for your SMF data. Most traces, with the exception of a performance and moni-tor trace, default to SMF. Changing what trace classes you’re running could increase the amount of trace data produced. This could have a neg-ative effect on SMF recording if your changes aren’t coordinated with the SMF team. This is one example where working closely together will benefit everyone.

• Of course, you need a process in place to examine your trace data for both your daily traces and those traces you start for special occasions. Z

About the Author
Willie Favero is an IBM senior certified IT software specialist and the DB2 SME with IBM's Silicon Valley Lab Data Warehouse on System z Swat Team. He has more than 30 years of experience working with databases, more than 24 of them with DB2. He speaks at major conferences and user groups, publishes articles, and has one of the top technical blogs on the Internet. Email: [email protected]




Aligning IT & Business
Automation Is the Key to DB2 Success

By Rick Weaver

With Web-enabled applications and constant demands for availability, it's more important than ever to rely on intelligent, automated software solutions to manage DB2 performance, administration, and recovery operations. DB2 for z/OS professionals have so many things to deal with: ever-expanding quantities of data, new releases of DB2 from IBM, higher transaction volume, and fewer experts in the field. To make things even more interesting, IT budgets are shrinking and many shops are shrinking through attrition. So how can the mainframe DB2 professional ensure that DB2 is available and performing optimally? Adding more staff is generally not possible. Automated solutions that provide intelligent advice on maintenance and tuning may be just what the mainframe DB2 professional needs.

Change Management
Recently, I had a phone conversation with a customer who told me his company had lost its DB2 guru earlier this year. This customer is a relative newcomer to DB2; his background is in the programming area. So when he was faced with a complex operation, he relied on the assistance of robust software solutions to help him make a structure change to an existing DB2 application. This process can involve unloading data, dropping and re-creating a structure, and then reloading the data. It's a complicated and risky process. Even the most experienced DB2 professionals can miss the changes required for related structures and indexes. If elements become out of sync, application availability and performance are affected. Change management is one area that demands automated software solutions.

Performance
Another area that begs for assistance is performance. Many DB2 system parameters can affect overall throughput, depending on the workload. DB2 is a relational database management system, meaning a complex component called the Optimizer is responsible for determining the access path to the data. Users can, and do, get creative when coding a query to DB2, possibly joining together data from several tables into a single result set. Such queries can touch millions of rows of data if the structure and query don't align. Obviously, one query that touches millions of rows of data will cause performance problems. Identifying potential poor-performing queries before they happen is the optimum solution. When the query is part of an application program known to DB2, the query is bound to DB2, generating some metrics regarding the probable access path. Being able to compare a workload before and after a change allows the user to make structure or query changes that would prevent a long-running query from negatively affecting overall system throughput.

Some DB2 queries come into the mainframe environment through an external source such as WebSphere. Frequently, the query has never been seen by DB2 before, so the access path is dynamically determined. This type of query can run a very long time, and the DB2 professional will not know about it until after the fact. Because long-running queries can lead to poor performance, it's important to be able to capture the query information and analyze it. Once you know what's happening in your DB2 environment, you can modify the structure or query to perform better. Knowing this type of information ahead of time is invaluable and can prevent unwanted delays and outages.

Recovery
A final area of concern is recovery. While most organizations have a sophisticated disaster recovery plan, many don't have a workable plan for the much more common phenomenon: local, application-level recovery. Power failures, hardware failures, software errors, and operator errors can all lead to an application outage. However, most DB2 professionals rarely (if ever) have to do a recovery, so when it does happen, they're unfamiliar with the process. This unfamiliarity leads to inefficiencies, which lead to longer outages. You can see why automation and intelligent guidance, as well as innovative utility processes, are critical success factors for quick, accurate DB2 recoveries.

Summary
As DB2 becomes more complicated, and as the DB2 support professional ranks shrink due to retirement and transfers, it becomes critical to acquire tools that automate processes and advise you about improvements you could make. Performance, database administration, and recovery are critical areas that are perfect for automation and advisor technology. The success of your business may depend on it. Z

About the Author
Rick Weaver has more than 30 years of experience in systems and database administration for IMS and DB2, and has been involved in developing large, complex, mission-critical applications in a variety of business areas. At BMC since 1994, he has worked in software consulting, professional services, product marketing, and product management. He is currently product manager for DB2 solutions, responsible for setting product direction and managing release requirements for the BMC Software DB2 for z/OS product line. Email: [email protected]




Extending z/OS With Linux: A Multi-Protocol File Exchange Gateway

By Kirk Wolf and Steve Goetze

While most of the technical press focuses on the sexy side of the Internet, such as Web 2.0, Service-Oriented Architecture (SOA) and the like, big, boring batch and transaction processing systems remain the bread and butter of many large organizations. The lifeblood of these systems, many running z/OS, is electronic data exchange. The Internet has fundamentally changed the relationship between business partners; mainframes previously connected only to private networks using proprietary communication protocols have been forced into the open systems arena. Simply put, z/OS mainframes are routinely expected to exchange files over the Internet using a wide variety of formats, tools, and protocols, many of which aren't natively supported on z/OS. What seems like it should be a simple problem (exchanging files with your business partners) can quickly turn into a complicated mess, since each partner seems to have their favorite combination of protocols, compression methods, and encryption algorithms:

• Protocol: FTP, FTPS, SSH/SFTP, HTTP, HTTPS, etc.

• Compression/packing: ZIP, GZIP, TAR, etc.

• Encryption algorithm: PGP, SSL/TLS, CMS/PKCS#11, etc.

Solving the exchange format and protocol requirements is only half the battle. When transferring files to platforms other than z/OS, you also must consider:

• Translating from EBCDIC to other codepages

• Converting record-oriented data sets to byte-oriented files: choice of line separators, truncation or wrapping of long lines, trimming of trailing pad characters, etc.

• Support for z/OS data set organizations, record/block formats, and allocation parameters.

In addition, careful attention must be paid to security issues such as:

• Authentication: userids, passwords, key pairs, tokens, etc.

• Authorization: controlled access to files and system resources

• Network security, firewalls, etc.
• "Data at rest": security of intermediate files created as data is transformed or relayed.





Meeting these requirements with z/OS alone can be a nightmare. Many excellent z/OS products are available to address these issues, but their combinations can be complex and costly. Each often involves a new, unique configuration of tools, Job Control Language (JCL), scripting, coding, testing, and capacity planning. A solution exists. A wide variety of tools are available on Unix/Linux that make this easy to do, and they're all free. Some organizations are even compelled to completely abandon z/OS and convert to an open systems platform, choosing instead to confront a whole new set of problems. So why not combine the best of both worlds?

In this article, we describe how to use Linux as a gateway for exchanging files over the Internet with your business partners, while retaining z/OS operational control of the processes and data. We show how Linux and free or open source tools can effectively be used to extend proven z/OS technology. The hardware and software requirements are surprisingly minimal. Here's what you'll need:

• Hardware: a 512MB/10GB Intel PC, or a Linux on System z guest or Logical Partition (LPAR)

• z/OS software:
  - IBM Ported Tools for z/OS such as OpenSSH, a free feature
  - Co:Z Co-processing toolkit (free Apache 2 binary license)
• Linux software:
  - Your favorite distribution of Red Hat, SUSE, Ubuntu, Debian, etc.
  - OpenSSH, curl, gpg, gzip, bzip2, infozip (all free open source)
  - Co:Z Co-processing toolkit (free open source).

The Co:Z Co-processing toolkit allows z/OS batch jobs to securely launch a process on the Linux gateway, redirecting standard input and output streams to traditional z/OS data sets or spool files. In addition, the process launched on Linux can "reach back" into the z/OS job and access MVS data sets, converting them into pipes for use by other Linux commands. The Co:Z Co-processing toolkit is installed in two parts: a free binary-only z/OS package and an open source "target system" package. Target system packages are available as Linux LSB RPMs and Windows and Solaris binaries. Written in portable C++, the source can be built on other Unix or Portable Operating System Interface for Unix (POSIX) platforms.

The remaining Linux software (OpenSSH, curl, etc.) is installed with your Linux distribution, either by default or using the distribution's package manager. The examples in this article assume you're running Linux with bash as your default shell. Other Unix variants and shells can be used, but the examples will need to be modified accordingly.

We'll be transferring z/OS data sets, stored on the mainframe, but we don't want to store them even temporarily on the Linux box. Taking this approach addresses "data at rest" security issues and leaves us with fewer things to worry about. In this configuration, the z/OS system will initiate and control file transfers (both outbound and inbound) with a batch job step. All file transfer messages will be logged as part of the job, and return codes may be used to control the flow of the job stream. A z/OS operator should never have to log onto the Linux machine to determine the status of a file transfer.

In this article, we rely heavily on the Linux curl package to handle the actual file exchange with our business partners. Curl's flexible command-line interface supports all the standard file transfer protocols and authentication methods. The curl command lets you send or receive files and redirect its file I/O to pipes. Simple Linux shell scripts, coded directly in JCL, can be used to chain together curl with other commands to meet the requirements of exchanging a file with a particular business partner. Specifically, you can use pipes to combine the curl command with:

• The Linux zip or gzip commands to compress or decompress data as it's transferred

• The Linux gpg or gpgsm commands to encrypt or decrypt data as it's transferred

• The Co:Z toolkit fromdsn and todsn commands to convert z/OS data sets to or from pipes.

In Figure 1, the Co:Z launcher is executed in a batch job step (1).


Figure 1: Send a z/OS Data Set Using FTPS

// EXEC COZPROC,PARM='[email protected]'
//ORDERS  DD DSN=ORDER.DATA,DISP=SHR
//STDIN   DD *
fromdsn //DD:ORDERS |
curl -T- --ftp-ssl-reqd --user user:paswd \
    ftp://partner.com/orders.txt
//


This creates an SSH session to the Linux gateway machine as user "gwuser" using a public/private key pair. A Unix shell is started on the Linux gateway, which executes the commands contained in the STDIN DD. The first line (2) runs the fromdsn shell command on Linux, which reaches back into the launching job step via the Secure Shell (SSH) connection and converts the data set referenced by DD ORDERS to a stream of bytes. This stream is piped (|) into the curl command (3), which opens an FTP Secure (FTPS) connection to the remote host, partner.com, and uploads the data to "orders.txt." Let's consider some of the security aspects of this setup:

• Normal z/OS security controls which data sets and resources are available to this job, which runs as a normal (unprivileged) user.

• The Linux machine can be placed in a network Demilitarized Zone (DMZ). The only connection to z/OS is an encrypted SSH session with the Linux gateway, authenticated by an SSH key pair.

• The data is never stored on the Linux system, but instead simply piped by the curl command over a Secure Sockets Layer (SSL)-encrypted File Transfer Protocol (FTP) connection to the remote host.

By itself, however, this first example isn't compelling; the z/OS Communications Server/FTP product can, with work, be configured to do FTPS (SSL) directly. Consider Figure 2. In this example, a data set with variable-length binary records is sent to a business partner using HTTP. The -b -l ibmrdw options on the fromdsn command create a binary stream with records delimited by IBM-style record descriptor words. This data is piped into the gzip command for compression. The compressed output data is piped into the gpg command to be encrypted. Finally, curl sends the encrypted data using an HTTP URL. This example shows how Linux pipes can be used to quickly connect powerful open source tools with z/OS data sets, offloading much of the processing to an inexpensive hardware platform.

When this job runs, stdout and stderr output from the Linux shell are redirected to the job's STDOUT and STDERR DDs, which by default are sent to JES SYSOUT files. For this example, the job's output looks like Figure 3. In this case, gzip and gpg don't generate any messages, but Figure 3 shows output from fromdsn, todsn, and curl. The condition code from the batch job step is adopted from the Linux shell script exit code (RC=0), so it can be used to influence the flow of subsequent job steps. When using multiple commands connected with pipes, the default behavior is to return the exit code of the last command. The set -o pipefail bash option is used to cause the shell's exit code to be set to the last non-zero exit code. It's important to use this option so intermediate errors can be detected.

The previous examples show outbound file transfers. Inbound exchanges can be similarly performed. In Figure 4, curl is used first to download a file using SFTP (SSH). The output is piped into the todsn command, which uses RDWs to separate binary records and write them to a z/OS data set. No extra encryption step is required since SSH has already done that.
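To isolate the pipefail behavior described above, here's a small stand-alone bash illustration (not part of the Co:Z job; gzip simply stands in for any downstream command that succeeds even when an earlier stage fails):

$ false | gzip -c > /dev/null ; echo $?     # exit code of the last command only
0
$ set -o pipefail
$ false | gzip -c > /dev/null ; echo $?     # the earlier failure is now surfaced
1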

More Information
You can learn more by visiting these Websites:

•“Curl”http://curl.haxx.se/•“GnuPG”http://gnupg.org/•“Gzip” http://www.gnu.org/software/

gzip/•“IBM Ported Tools for z/OS:

OpenSSH” www.ibm.com/servers/eserver/zseries/zos/unix/openssh/index.html

•“Co:Z Co-Processing Toolkit for z/OS” http://dovetail.com/coz.

Conclusion
The file transfer gateway shown here is just one example of using Linux to extend z/OS; there are many other possibilities. The ability to leverage the flexibility of Linux and its wealth of open source software under the control of z/OS is a topic that hasn't received much attention, but is ripe for exploration. Z

About the Authors
Kirk Wolf and Steve Goetze are partners in the firm Dovetailed Technologies, LLC. They are the authors of the IBM JZOS Batch Toolkit for z/OS SDKs. They specialize in Java and z/OS Unix consulting and product development, and are regular presenters at SHARE. Email: [email protected] or [email protected]


fromdsn(DD:STDIN)[N]: 5 records/400 bytes read; 183 bytes written ...
fromdsn(DD:ORDERS)[N]: 78 records/6240 bytes read; 6552 bytes written ...
todsn(DD:STDERR)[N]: 317 bytes read; 5 records/313 bytes written ...
todsn(DD:STDOUT)[N]: 0 bytes read; 0 records/0 bytes written ...
CoZLauncher[N]: [email protected] target command '<default shell>' ended with RC=0
  % Total    % Received % Xferd  Average Speed    Time     Time     Time  Current
                                 Dload   Upload   Total    Spent    Left  Speed
100  2035    0     0    0  2035      0    1735  --:--:--  0:00:01 --:--:--  1735
100  2035    0     0    0  2035      0    1461  --:--:--  0:00:01 --:--:--     0

Figure 3: z/OS Job Output From Sending a z/OS Data Set Using HTTP, etc. (See Figure 2)

// EXEC COZPROC,PARM='[email protected]'
//TRANS1  DD DSN=TRANS.DATA,DISP=SHR
//STDIN   DD *
set -o pipefail
fromdsn -b -l ibmrdw //DD:TRANS1 |
gzip -c - |
gpg -r key-1 --batch --output - --encrypt |
curl -T- http://remotehost.com/upload?partner=023
//

Figure 2: Send a z/OS Data Set Using HTTP, Compressing With gzip and Encrypting With PGP

// EXEC COZPROC,PARM='[email protected]'
//PAYROLL DD DSN=PAYROLL.DATA,DISP=(NEW,CATLG),
//           DCB=(RECFM=VB,LRECL=800),
//           SPACE=(CYL,(3,1))
//STDIN   DD *
set -o pipefail
curl --user uname sftp://remotehost.com/payroll.dat |
todsn -b -l ibmrdw //DD:PAYROLL
//

Figure 4: Download a File Using SSH SFTP to a z/OS Data Set


By Gerhard Adam

This article examines some common myths and rules of thumb regarding Workload Manager (WLM) and assesses their relevance and rationale. First, a note regarding terminology: WLM manages work based on service class periods; some service classes have only one period, while others have multiple periods. This article refers to service classes to avoid the awkwardness of referring to "service class periods" in every instance. Service classes need to be considered from the perspective of periods, when multiples exist, rather than the service class only.

1. Too many service classes are bad: This myth is so prevalent that it has almost become a rule of thumb to suggest you should have no more than 20 or 30 service classes. Since inactive service classes have no work, they exert no influence on WLM decisions and shouldn't be included in any count. The SYSTEM and SYSSTC service classes shouldn't be counted since they represent no goal management criteria and also exert no influence on WLM decisions. The actual number isn't important, but the rationale for it should be considered. It's also important to understand that it's the service class period, rather than the service class itself, that's the unit of work management you need to consider.

WLM will help only one service class period during a policy adjustment interval (which occurs every 10 seconds). The most limiting element regarding the number of service classes is the degree to which WLM is expected to help service classes meet their goals. So it stands to reason that the more service classes there are, the longer it will take to act on all of them if adjustments are needed. Similarly, since there are really only six usable degrees of importance (including discretionary), the more service classes there are, the less separation there is in determining receiver/donor relationships. For a given amount of work, if work is spread across too many service classes, then each will have only a small amount of data available for decisions, and the results may be more volatile. Similarly, if too much work with different characteristics is put together, then the data may be too varied to make good decisions. The best approach is to match up comparable types of work into a service class, where the data obtained from each unit of work reflects the overall behavior of the group. In this way, the actions taken at the service class period level are beneficial to all the units of work.

2. Discretionary goals are good or bad: Discretionary goals are, by definition, workloads that have no goal or objective. Some installations swear by them, while others indicate that discretionary work will never have access to run. Discretionary work must have idle capacity available to it. If resources are tight, discretionary work will have difficulty obtaining access. If higher importance work doesn't consume all available resources, discretionary service is possible. If higher importance work will use all the available resources, discretionary work won't run.

3. Velocities should always be low: This myth originated with the lower number of processing engines initially available on systems when WLM was first introduced. As the number of engines a Logical Partition (LPAR) can use has increased, the attainable velocity has increased with it. A simple example is:

number of LPs / number of actively competing tasks

This calculation represents the highest attainable velocity for a particular service class. The larger the number of LPs (or comparable capacity) available to competing tasks, the greater the ability




to attain velocities up to 100 percent. Moreover, it’s important that velocity goals not be confused with priority assignment, since an excessive velocity definition will tend to result in WLM ignoring the service class rather than creating aggressive management.
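Plugging illustrative numbers into the calculation above (made-up figures, not measurements): with 4 logical processors and 20 actively competing tasks,

highest attainable velocity ≈ 4 / 20 = 0.20, or about 20 percent

so, per this rule, a velocity goal of, say, 60 for such a service class could never be attained, no matter how aggressively WLM manages it.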

4. Velocities shouldn’t be numeri-callyclosetogether: Velocities are evalu-ated per service class, so it doesn’t matter what the velocity is between different service classes. It may be reasonable to have a velocity of 30 for one service class and 31 for another. The numerical sepa-ration refers to the fact that, within any service class, a small numerical change won’t appreciably change the calculated performance index, so to see WLM actions change, the goal differences must be more significant. For example, assume a service class has a velocity goal of 40, but is achiev-ing only 35. If this workload can use more resources, then to get an appre-ciable change in WLM management, the goal should be set much higher. That’s because WLM management is based on the Performance Index (PI). A goal of 40 with 35 being achieved results in a PI of 1.1. This doesn’t indicate a serious problem. If you changed the goal to 41 or 42, you would see a PI of 1.2 for both instances. This isn’t a sig-nificant difference to signal more aggressive management. However, a change to 50 or even 60 would result in PIs of 1.4 or 1.7, respectively, which would likely result in the desired atten-tion to change access to resources. This assertion assumes that the workload can actually use the higher access to resources. Simply setting a higher velocity goal won’t cause WLM to “force” the work-load to use the resources. Again, if the goal is unattainable and too high, the result will be WLM tending to ignore the service class over repeated policy adjustment intervals. Any actions WLM takes also will be determined by the importance level of the workload miss-ing its goals.

5. WLM gives resources (or takes them away): WLM doesn't control resources; it simply assigns queue positions (i.e., dispatching priority) or protects resources from being taken by system management functions (i.e., storage protection). The exception is in the case of server address spaces WLM manages. Whether WLM initiators or application environments, WLM will control the number of servers based on resources available and installation definitions. This aspect of WLM control can raise or lower utilizations, which will affect the accessibility workloads may require.

WLM controls access to resources based on the goals defined for a service class. WLM can't give work to the CPU or take it away. It can simply determine what position in the dispatch queue a unit of work has. Competition still occurs normally based on the behavior of individual units of work. WLM can't cause a CPU to run at 100 percent; it can only attempt to equitably distribute that 100 percent utilization based on an installation's specified goals. WLM can't directly prevent work from running. The exceptions are only in those instances where specific commands (QUIESCE) or service class definitions (RESOURCE GROUPS) are used to restrict the service allowed. Indirectly at high utilizations, it may be possible that some degree of Multiprogramming Level (MPL) control could act in this capacity, but it's uncommon. In addition, with such a high utilization, it's unlikely that much resource would be available to lower importance work anyway. For non-swappable work, this isn't a consideration.

6. WLM is highly involved in meeting goals: WLM has limited involvement in whether or not a service class period meets its goals; it has few options available to assist a service class period. Based on the goal, WLM will assess how well a service class is doing by simply calculating the PI. This determines which service class should be helped to attain its goals. But what can actually be done? If work is logically swapped out and can't come in, then perhaps increasing the MPL can reduce this problem. If paging is impacting throughput or response time, then perhaps storage protection can prevent z/OS page stealing from removing working set frames. If CPU access is a problem, then increasing the dispatching priority may help. In the latter case, an evaluation must be performed regarding which service class can give up dispatching priority, or how workload can be rearranged to satisfy everyone's goal requirements. If no such change can occur, no action is taken. The exception is when the LPAR management component of Intelligent Resource Director (IRD) is active. In this instance, WLM could adjust LPAR weights to obtain the necessary CPU resources to alleviate the problem.

This results in some interesting consequences. Dispatching priority can come only from service classes that are higher or equal in the dispatch queue, so donors aren't likely to come from low-importance work. This is one of the most compelling arguments against having too many service classes at high importance levels. They will tend to create conflicting objectives and an environment where resources may be perpetually stolen between service classes in an attempt to achieve balance.
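The decision flow just described can be sketched in a few lines of Python. This is only an illustration of the options listed in this article, not WLM's actual algorithm; the class and function names are invented for the example:

    # Illustrative sketch of the adjustment options described above.
    # Not WLM's real implementation; names are invented for clarity.

    from dataclasses import dataclass

    @dataclass
    class ServiceClassPeriod:
        name: str
        importance: int      # 1 (highest) through 5
        pi: float            # performance index; above 1.0 means missing the goal
        biggest_delay: str   # "swap", "paging", or "cpu"

    def suggested_action(period: ServiceClassPeriod) -> str:
        if period.pi <= 1.0:
            return "meeting goal; no action needed"
        if period.biggest_delay == "swap":
            return "consider raising the MPL so the work can be swapped in"
        if period.biggest_delay == "paging":
            return "consider storage protection to keep working-set frames resident"
        if period.biggest_delay == "cpu":
            # a donor must sit at an equal or higher dispatch position;
            # if no donor can give up priority, no action is taken
            return "consider a higher dispatching priority if a donor can be found"
        return "no applicable adjustment"

    print(suggested_action(ServiceClassPeriod("ONLINE", 2, 1.4, "cpu")))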

7. WLM understands the meaning and significance of goals: Regardless of the rationale for a particular goal definition, remember that WLM lacks any "understanding" of what that goal means. It's simply a numerical representation of what an installation has determined is a reasonable value to be attained. So, a one-second response time or a velocity goal of 50 has no fundamental meaning in itself. The significance must come from the installation, which presumably selected the value because it reflects something indicative of the desired behavior. WLM will simply assess this goal against actual behavior to calculate the PI, which is used to evaluate how well the goal is being met. WLM can't determine whether the goal is reasonable or whether it will result in the desired workload behavior.

Many of these myths emerged as extensions of simple rules of thumb when service policies were first established. So, when evaluating a WLM policy, it's important to examine what's realistic and how effectively goals are managed, including levels of sharing, distribution, etc. Rules of thumb are rarely adequate to provide real benefit beyond a simple starting point. Z

About the Author
Gerhard Adam, president of SYSPRO, offers 35 years of experience in large systems computing with specialization in performance, capacity planning, and z/OS internals. He has been involved in software development, extending from access method interfaces and telecommunications to performance reporting software. He also has developed courses for IBM Education in MVS internals, performance, client/server computing, and systems management. Currently, he is teaching classes around the country on a variety of z/OS and WLM topics. In addition to working with various education companies, he has shared his expertise with Protech, CA, the former Landmark Systems Corp., National Bank of Detroit, Grand Trunk Western Railroad, and the U.S. Marine Corps. Email: [email protected]



The economic impact of rising energy costs is far-reaching. "Going green" has reached critical mass, with "green-friendly" replacing "low-carb" as the catchphrase that won't go away. The need to address both energy use and cost-efficiency applies to every aspect of the economy. That's why IT executives are asking questions about the energy and cost-efficiency of the data centers and servers that power the global economy. They're looking, appropriately, for more efficient, less complex, and more environmentally aware alternatives for data centers.

The Rise of Linux on System z
Linux for System z has emerged as a proven solution for handling the growing problems of data center energy costs and business infrastructure inefficiency. Next year, Linux on System z will celebrate its tenth birthday, and the platform has more momentum than ever. Application and service providers, software tool vendors, and a growing pool of skilled workers are making Linux on System z an increasingly attractive investment.

IBM's introduction of specialty processors is helping a new generation of mainframes address the continuous need for more sophisticated, flexible, and affordable IT systems. The network of vendor software supporting Linux on System z also keeps expanding. The question now is not how or why to implement Linux on System z, but when.

Leveraging Linux on System z can significantly enhance business efficiency. It lowers power and cooling costs and can help IT deliver value to the business because it facilitates rapid provisioning and on-demand scalability.

Making the Case for Linux on System z
IT executives face the challenge of controlling rising costs even as business demands are causing an upward spike in computing power and real estate requirements. Addressing data center power requirements and consumption alone has become a daunting task. Many data center managers spend too much time addressing hardware power and real estate demands and not enough on efficiently running a data center. How can today's data center manager cope with these two seemingly opposite demands?


Enabling Greater Business Efficiency With Linux on System z
By Charles Jones


One proven solution is server consolidation through virtualization. While there's no one-size-fits-all virtualization solution, an attractive option for customers who have an infrastructure investment in System z is to implement virtualization using Linux on System z and z/VM.

Several customer success stories have been published touting the use of Linux on System z and z/VM together. These stories are valuable because they document best practices and give other organizations a roadmap to follow, including methods for measuring and achieving a solid ROI.

A well-prepared business case is critical when procuring a new z9 or z10 EC with the proper specialty processor configuration to take advantage of Linux on System z. Preparing a business case lets IT professionals present exact numbers to management and vendors that cover how to make virtualization using Linux on System z profitable. It's often possible and helpful to compare the recommended approach to keeping the current infrastructure in place.

So how is ROI achieved with Linux on System z? Virtualization is a key component to address IT's requirement to control costs yet meet business needs with flexible systems. This approach involves leveraging the reuse of existing assets. The System z specialty processors also can help you leverage existing assets.

How Specialty Processors Can Help
Specialty processors are System z processors that cost less than the System z General Purpose Processor (GPP); these processors include:

• z9 Integrated Information Processor (zIIP), designed to process DB2 workloads
• System z Application Assist Processor (zAAP), designed to offload Java workloads from the GPP
• Integrated Facility for Linux (IFL), dedicated to running Linux workloads on System z.

IFLs can be configured to run z/VM with many Linux on System z guests or, alternatively, can be split into Logical Partitions (LPARs) to run z/VM or Linux (see Figure 1). This means ROI can be achieved immediately, simply by using Linux on System z virtualization.

What is the cost to run 100 Unix/Linux servers stand-alone vs. virtualized using Linux on System z? (A rough, back-of-the-envelope comparison follows the list below.) There are several quick wins related to Linux on System z virtualization. Besides reduced power consumption and cooling charges from hardware consolidation, you gain other important benefits:

• Once you port your server farms to Linux on System z, you can immediately leverage all the traditional rigid reliability, availability, scalability, security, and serviceability infrastructure support inherent with System z. This lets you leverage your z/OS and z/VM backup procedures, DASD management, Hierarchical Storage Management (HSM) infrastructure, and automated operations.

• Physical networking, cabling, and router infrastructure are simplified. Moreover, you probably don't need to add personnel or skills, since your existing Unix/Linux administrators can now administer Linux on System z. Linux-oriented administrators tend to intuitively adapt when the mainframe is presented to them simply as a big resource with outstanding reliability, redundancy, and, in the case of z/VM, a very flexible, highly configurable BIOS.

• Potentially significant cost savings can be realized in terms of OEM product license fees. Moving a rack-mount Lintel application bed with a large number of CPU cores to a System z platform with a smaller number of IFLs can yield additional savings by reducing the number of processors that have to be licensed in support of the application. Anything licensed on a cost-per-core basis could see real dollar cost improvements.
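As a rough illustration of the stand-alone vs. consolidated comparison posed before the list above, the sketch below tallies annual power and per-core license costs for 100 stand-alone servers against a consolidated System z footprint. Every input value (wattage, electricity rate, license fee, core and IFL counts) is an invented placeholder, not a vendor figure; substitute your own measurements before drawing any conclusions:

    # Hypothetical comparison: 100 stand-alone Unix/Linux servers vs. the same
    # workloads consolidated under z/VM on a handful of IFLs.
    # All numbers below are made-up placeholders for illustration only.

    SERVERS          = 100
    WATTS_PER_SERVER = 400       # assumed average draw, including cooling overhead
    KWH_RATE         = 0.10      # assumed cost per kWh, in dollars
    HOURS_PER_YEAR   = 24 * 365
    CORES_PER_SERVER = 4
    LICENSE_PER_CORE = 2000      # assumed per-core OEM license fee, dollars/year

    IFLS             = 8         # assumed IFLs needed after consolidation
    WATTS_PER_IFL    = 500       # assumed incremental draw attributed per IFL

    def annual_power_cost(watts):
        return watts / 1000 * HOURS_PER_YEAR * KWH_RATE

    standalone = (SERVERS * annual_power_cost(WATTS_PER_SERVER)
                  + SERVERS * CORES_PER_SERVER * LICENSE_PER_CORE)
    consolidated = (IFLS * annual_power_cost(WATTS_PER_IFL)
                    + IFLS * LICENSE_PER_CORE)  # per-core products licensed per IFL

    print(f"stand-alone : ${standalone:,.0f} per year")
    print(f"consolidated: ${consolidated:,.0f} per year")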

Business Efficiency
A more efficient, on-demand business is one where processes are sufficiently integrated with customers, suppliers, partners, and employees so it can quickly capitalize on business opportunities or address potential threats as they arise.


Figure 1: Linux on System z Configuration Scenarios


One critical aspect of an on-demand business is its ability to effectively scale up and down to meet changing business conditions.

Consider the well-documented success of Nationwide's use of virtual servers to respond to anticipated consumer demand. In 2006, Nationwide promoted its re-branded Website, hosted on its mainframes, with an intensive ad campaign during the Super Bowl. They used virtualization to significantly increase computing capacity for a two-week period to handle increased traffic to www.nationwide.com, then decreased capacity after the event.

Successful initiatives such as Nationwide's take careful planning and strategizing throughout the company. Yet what if you don't have the foresight to prepare for this type of growth? Time is of the essence, and if IT isn't agile in its ability to react, you're likely to miss out on the opportunity.

Consider the scenario where there's an anticipated surge in transaction rates due to a business event. Let's say your computing infrastructure uses Linux on System z virtualization and traditional z/OS workloads. The primary presentation layer runs on Linux on System z on your application server of choice, and the business logic and data reside on z/OS in CICS and/or IMS and DB2. Let's also say your architecture currently comprises 100 Linux on System z servers and you're currently paying for how much your z/OS System z machine uses vs. total capacity.

Based on your capacity planning estimates, you'll need to ramp up your total System z infrastructure by 25 percent. Quickly doing the math, you'll need to increase your Linux on System z images from 100 to 125, and your increased z/OS charges against the GPP will be 25 percent for the duration of the peak activity.

So what does this increase cost you in dollars? Your largest expense will be the 25 percent increase in the additional activity incurred by the z/OS GPP. Because you're adding virtual servers to existing hardware, there's no need to procure additional real estate, power outlets, or cooling ducts. The only other cost you would incur is the labor associated with allocating the additional 25 Linux on System z servers. Depending on your procedures, this allocation could simply be an automated task requiring literally a few minutes using a vendor provisioning tool. Or, if you don't have such a tool, it's likely there are streamlined processes in your organization to easily clone Linux on System z images based on various Linux on System z server profiles and configuration requirements.

What do you do with the extra capacity once your peak load is over? You simply shut down your virtual machines, which gives the resources back to z/VM. What has this cost you? Sure, your system administration time is used, but that's already on the payroll. Really, the only additional overhead is the usage charge for the increase on the z/OS GPP.

So, what if you went through the same process without virtualization? At a minimum, you would have to implement more hardware servers to meet the increased demand. Hardware servers take longer to procure than virtual servers for several reasons, and there's also server configuration to consider. What will you do with the additional hardware servers when your workload diminishes? They'll begin to become expensive because you have idle hardware consuming floor space, power, and cooling.


Figure 3: Hybrid deployment - integration components on Linux and z/OS

Figure 2: Distributed deployment - all integration components on Linux



Optimizing the Data Center
The '90s, with the Web phenomenon, saw an exponential growth in processing requirements and power. Now, customers and prospects expect to do business with you over the Internet. With this trend, the need for storefronts and customer service representatives diminished. However, there are trade-offs in the cost of doing business on the Web.

We're now seeing a global adoption of standards such as XML and SOAP-based services. These standards simplify and mask the differences between technologies and platforms, making it easier to map business processes and adapt them to changing business requirements. This has created more demands on data centers to keep up with the MIPS requirements to parse XML and SOAP payloads while handling the additional components necessary to better align IT to business demands.

New types of software platforms have recently evolved, such as the Enterprise Service Bus (ESB) and Business Process Management (BPM) suites, that require middleware to integrate and orchestrate legacy applications. These allow for reuse of services, a key to controlling costs. Linux on System z and IFLs are ideal solutions for running these newer software platforms. Besides the overall benefits of virtualization, the biggest advantage here is that IFLs are a cheaper alternative to running work on the z/OS GPP.

HiperSockets, a high-performance TCP/IP socket connection between LPARs on the same physical System z machine, represents a valuable feature you can exploit using Linux on System z. HiperSockets provides a virtual memory-to-memory network that requires no cabling, as compared to traditional, non-virtualized servers. In addition, you can communicate from Linux on System z to other LPARs running either Linux on System z or z/OS, so you can establish direct connectivity from an integration server or ESB running on Linux on System z using HTTP, MQ, or anything that uses TCP/IP directly into z/OS subsystems such as CICS.

Architecture requirements for IT are diverse. Within the same organization, you may find that different business areas have different requirements. That's where the adaptability of your ESB, BPM suite, or integration middleware comes into play. In the case of integration middleware, several architecture options are commonly used to facilitate the reuse of legacy applications. Some of the Integrated Development Environments (IDEs) available from software vendors offer portable architectures, letting you deploy various run-time options that make sense for current needs.

Deploying Linux on System z
Let's consider some real-world examples to best illustrate how this technology is being used today.

The first scenario is where you would deploy your integration run-time components on Linux on System z to expose CICS and IMS applications as XML-based business services. This case is ideal for situations when you don't want to incur the additional MIPS consumption of the z/OS GPP. Perhaps a business area can't or doesn't want to modify System z because of existing policies or procedures (see Figure 2).

Clients or consumers of the business service make requests directly to the Linux on System z image where your integration server is running. The integration server then sends the requests to the z/OS image via connectors to access the legacy data sources.

Hybrid Deployment on Linux on System z and z/OS
Now consider an architecture where the integration run-time components are deployed on both Linux on System z and z/OS. This solution is useful when you want to take advantage of the computing power of both the IFL and the z/OS GPP. This scenario is ideal when you need to leverage the scalability of the z/OS infrastructure while offloading XML parsing to the less-expensive computing power of the IFL (see Figure 3).

Clients or consumers of a business service still make requests directly to the Linux on System z image. Instead of the request going directly to the integration server, the request first passes through a SOAP and XML parser, and then the parsed payload goes to the integration server on the z/OS image via HiperSockets HTTP or MQ requests over TCP/IP.

Once the integration server is finished with the request, it sends the payload back through Linux on System z, where it's transformed back into a SOAP/XML payload before the reply is sent back to the client. This approach seems to be fairly popular. Following the three-tier model, it's not unusual to put front-end and application servers into Linux on System z under z/VM, but refer "heavy lifting" for database activity to DB2 on a separate z/OS instance.
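A minimal sketch of the hybrid flow described above follows. It only illustrates the pattern (strip the SOAP envelope on the Linux on System z tier, then forward the inner payload to the integration server on z/OS over an ordinary HTTP connection, which can ride a HiperSockets interface); the host name, port, and URL path are invented placeholders, and a real integration server or ESB would replace nearly all of this:

    # Illustrative only: front-tier logic on Linux on System z that unwraps a
    # SOAP envelope and forwards the body to an integration server on z/OS.
    # The host name and path are hypothetical; over HiperSockets this is still
    # just an ordinary TCP/HTTP connection.

    import urllib.request
    import xml.etree.ElementTree as ET

    SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

    def forward_soap_body(soap_request: bytes) -> bytes:
        envelope = ET.fromstring(soap_request)
        body = envelope.find(f"{{{SOAP_NS}}}Body")
        if body is None or len(body) == 0:
            raise ValueError("no SOAP Body found")
        payload = ET.tostring(body[0])   # the actual business payload
        req = urllib.request.Request(
            "http://zos-lpar.example.com:8080/integration/inbound",  # hypothetical
            data=payload,
            headers={"Content-Type": "application/xml"},
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            return resp.read()   # the caller would re-wrap this in a SOAP envelope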

Hardware Appliances
A category of solutions called hardware appliances is becoming popular. These simplify and reduce the overall complexity of doing XML processing, security, encryption, and integration. Like IFLs, hardware appliances offload MIPS usage from the System z GPP.

You should investigate hardware appliances as you would any other infrastructure element. How many appliances will you need today vs. in the future? What would the alternative be to using Linux on System z? How would a mixture of the two work?

Conclusion
You're likely already exploring the possibilities of Linux on System z if you're a System z customer. Linux on System z can move you toward a more efficient, flexible business infrastructure. Some Linux on System z case studies have documented up to an 80 percent savings in floor space and energy consumption. Such savings, coupled with reduced data center space requirements, can save companies literally tens of millions of dollars. This alone should persuade even the harshest skeptics to consider exploring virtualization initiatives using Linux on System z.

Of course, there's no one-size-fits-all solution. Consider the several available tools that can enhance your architecture and evaluate your options in light of both short- and long-term business needs. There's a wealth of information and expertise out there to help you build a solid business case for becoming more efficient with Linux on System z.

As you improve your efficiency, you'll be among the many consumers and businesses now discovering that the most successful solutions are those that help both the environment and the bottom line. Z

About the Author
Charles Jones is a System z solution architect for Seagull Software, a subsidiary of Rocket Software. He works with Rocket Software's SOA and integration middleware products, specializing in LegaSuite solutions for CICS, and provides architecture assistance for Rocket Software solutions that interface with CICS. Email: [email protected] Website: www.seagullsoftware.com



Reading a pass-along copy of z/Journal? Why? Sign up for your own free subscription now at: www.zjournal.com (Free subscriptions are available worldwide.)

AD INDEX
Company | Website | Page
Advanced Software Products Group | www.aspg.com | 14
Blue Phoenix | www.bphx.com | 13
BMC Software | www.bmc.com | 5
Brocade | www.brocade.com | 9
Bus-Tech | www.bustech.com | 31
CDB Software | www.cdbsoftware.com | 51
CMG | www.cmg.org | 35
Cole Software | www.colesoft.com | 25
Dino-Software | www.dino-software.com | 15, 61, IBC
Edge Information Group | www.edge-information.com | 45
IBM System z Expo | www.ibm.com | 63
IDUG | www.idug.org | 55
illustro Systems | www.illustro.com | IFC, 1, 41
Innovation Data Processing | www.innovationdp.fdr.com | 33, BC
Jolly Giant | www.jollygiant.com | 11
MacKinney Systems | www.mackinney.com | 21
OpenTech Systems | www.opentechsystems.com | 7, 47
Relational Architects | www.relarc.com | 53
Responsive Systems | www.responsivesystems.com | 3
SHARE | www.share.org | 57
Software Diversified Services | www.sdsusa.com | 23
SSH Communications Security | www.ssh.com | 17
Velocity Software | www.velocitysoftware.com | 29
William Data Systems | www.willdata.com | 39
z/Journal Buyer's Guide | http://directory.zjournal.com | 52




IT Sense
Jon William Toigo

Don't Be the Dupe

A short while back, IBM announced its acquisition of Diligent Technologies, a player in the data de-duplication space. De-duplication is a marketecture umbrella describing a range of technologies used to squeeze data so that more of it fits on a disk spindle, especially if that spindle is part of a Virtual Tape Library (VTL), or so it can be moved across a thin WAN link more efficiently.

On the announcement call, analysts asked the usual questions about the overlap between Diligent functionality and de-dupe functionality IBM already offered customers, courtesy of its relationship with Network Appliance (answer: Big Blue continues to offer both technologies), and what this meant for Diligent's existing resale arrangements with competitors such as Hitachi Data Systems and Sun Microsystems (answer: both have continued).

The story might end here, were it not for some recent statements by Network Appliance about key differences between the two de-dupe technologies. According to NetApp, Diligent's method of de-duplication puts data at risk of "non-compliance." Responding to a survey of de-duplication vendors that I created on my blog, NetApp contrasted its approach with that of "inline de-duplicators" such as Diligent.

Inline de-duplication's main benefit, NetApp's spokesperson wrote, "Is that it never requires the storage of redundant data; that data is eliminated before it is written. The drawback of inline, however, is that the decision to 'store or throw away' data must be made in real-time, which precludes any data validation to guarantee the data being thrown away is in fact unique. Inline de-duplication also is limited in scalability, since fingerprint compares are done 'on the fly'; the preferred method is to store all fingerprints in memory to prevent disk look-ups. When the number of fingerprints exceeds the storage system's memory capacity, inline de-duplication ingest speeds will become substantially degraded."

He continued, "Post-processing de-duplication, the method that NetApp uses, requires data to be stored first, and then de-duplicated. This allows the de-duplication process to run at a more leisurely pace. Since the data is stored and then examined, a higher level of validation can be done. Post-processing also requires fewer system resources since fingerprints can be stored on disk and hence require fewer system resources during the de-duplication process."

"Bottom line," the fellow contended, "if your main goal is to never write duplicate data to the storage system, and you can accept 'false fingerprint compares,' inline de-duplication might be your best choice. If your main objective is to decrease storage consumption over time, while ensuring that unique data is never accidentally deleted, post-processing de-duplication would be the choice."

This setup is necessary to understand a key point raised by NetApp about the questionable acceptability of de-duplicated data by regulators and law enforcement entities concerned with the immutability of certain data. Wrote NetApp: "The regulators want proof that [certain] data has not been altered or tampered with. NetApp de-duplication does not alter one byte of data from its original form. [It is] just stored differently on disk. One interesting point though, is what happens if a 'false fingerprint compare' as previously described with inline de-duplication occurs. Now the data has been changed. Because of this, inline de-duplication may not be acceptable in regulatory environments."

NetApp's position earned it flames from many of the other survey respondents. While they all carried the party line that de-dupe is simply describing data with fewer bits, they accused NetApp of flying a self-serving flag of Fear, Uncertainty and Doubt (FUD) by raising the issue of the acceptability of de-duplicated data (more specifically, data de-duplicated the Diligent way) to regulators and courts of law.

I'm beginning to wonder about this characterization. Some of my clients, especially financial institutions, are creating policies to exclude certain files from de-duplication processes on the off chance the data will be viewed as "altered" by folks at the Securities and Exchange Commission (SEC), Department of Justice (DOJ), and elsewhere. Despite reassurances by de-dupe vendors that the technologies are defensible, and the noble quest of IT folk to "do more with less" by squeezing more data onto fewer spindles, none of this means anything if the courts decide that data so squeezed is no longer "full, original and unaltered." Thus far, there has been no test case to make or break the argument that de-duplicated data is OK.

My recommendation to IT practitioners who read this column is simple: Before deploying de-dupe technology, which is finding its way into mainframe VTLs as we speak, touch base with your legal or risk management department honchos. Explain (or, better yet, have your vendor explain) how de-dupe works on your data and get a written approval for deploying the technology. Keep an original copy in a waterproof, fireproof container; don't file it electronically in a de-duplicated repository!

While there's no case law about the acceptability of de-duplicated data, there's plenty of precedent for legal disputes rolling downhill from the director's or senior manager's offices to the trenches of IT. When it comes to de-dupe, the objective of the IT practitioner must be one of self-defense. Don't be "the dupe." Z

About the Author
Jon William Toigo is a 25-year veteran of IT and the author of 13 books. He also is CEO and managing principal partner of Toigo Partners International, an analysis and consulting firm serving the technology consumer in seven countries. Email: [email protected]; Website: www.it-sense.org
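To make the inline vs. post-process distinction in the column above concrete, here is a toy sketch of both approaches. It is purely illustrative: real products chunk data and use far stronger fingerprints, and the deliberately weak four-character fingerprint here exists only to make a "false fingerprint compare" (a collision) easy to imagine:

    # Toy illustration of the two de-duplication styles discussed above.
    # Real systems use far stronger fingerprints; the truncated hash below is
    # deliberately weak so that collisions are plausible.

    import hashlib

    def fingerprint(block: bytes) -> str:
        return hashlib.sha256(block).hexdigest()[:4]   # deliberately weak

    def inline_dedupe(blocks, store):
        """Decide store-or-discard on the fly, trusting the fingerprint alone.
        A colliding fingerprint (a 'false fingerprint compare') silently drops
        a block that was not actually a duplicate."""
        for block in blocks:
            fp = fingerprint(block)
            if fp not in store:
                store[fp] = block

    def post_process_dedupe(raw_blocks):
        """Store everything first, then de-duplicate later with byte-for-byte
        verification before discarding anything."""
        unique = {}
        collisions = []
        for block in raw_blocks:
            fp = fingerprint(block)
            if fp in unique and unique[fp] != block:
                collisions.append(block)   # keep the block rather than lose data
            elif fp not in unique:
                unique[fp] = block
        return unique, collisions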





CORPORATE HEADQUARTERS: 275 Paterson Ave., Little Falls, NJ 07424 • (973) 890-7300 • Fax: (973) 890-7147 • E-mail: [email protected][email protected] • http://www.innovationdp.fdr.com

EUROPEAN OFFICES: FRANCE 01-49-69-94-02 • GERMANY 089-489-0210 • NETHERLANDS 036-534-1660 • UNITED KINGDOM 0208-905-1266 • NORDIC COUNTRIES +31-36-534-1660

VISIT US AT: SHARE TECHNOLOGY EXCHANGE SAN JOSE AUGUST 11 - 13 BOOTH #202

View our Product Demos online:

http://www.fdr.com/demo.cfm

• FDRPAS has been Providing Users with NON-DISRUPTIVE MIGRATION of DASD Since 2001…Proven in Over 800 Data Center Migrations

• Full Volume Migration…Speed…Hundreds of Volumes Migrated in a Few Hours

• FDRMOVE…Data Set Level Migration Allowing you to Combine Volumes Quickly

• VTOC Expansion…Even with Active Data Sets on the Volume

FDRMOVE & FDRPAS…Moving Data 24 x 7 x 365 with Virtually No Disruption!