IMS Concepts, by Hooman Tarahom, ISC System Programmer

Post on 14-Jul-2015
TRANSCRIPT

Page 1: IMS concepts

IMS Concepts

Hooman Tarahom, ISC System Programmer


Page 2: IMS concepts

What is IMS?

• IMS stands for Information Management System.

• IMS has been around since 1965

• IMS is a second-generation database.

• First-generation, or “master file update”, databases are slow and inflexible.

• Creating a data hierarchy means the most important data is stored first and the least important data last.


Page 3: IMS concepts

Interfaces to IMS


Page 4: IMS concepts

DB/DC Environment


Page 5: IMS concepts

The DCCTL environment


Page 6: IMS concepts

The DBCTL environment


Page 7: IMS concepts

The DB batch environment


Page 8: IMS concepts

Sequence fields and access paths

• Each segment normally has one field denoted as sequence field.

• The sequence fields in our subset should be unique in value for each occurrence of a segment type below its parent occurrence.

• Not every segment type needs to have a sequence field defined.

• The root segment must have a sequence field.

• The sequence field is often referred to as the key field, or simply the key.


Page 9: IMS concepts

Additional access paths to segments

• IMS provides two additional methods of defining access paths to a database segment. These are:

– Logical relationships

– Secondary indices

• Both provide a method for an application to have a different access path to the physical databases.

• Logical relationships allow a logical view to be defined of one or more physical databases. To the application this will look like a single database.

• Secondary indices give an alternate access path, via a root or dependent segment, to the database record in one physical database.


Page 10: IMS concepts

Logical relationships

• Through logical relationships, DL/I provides a facility to interrelate segments from different hierarchies.

• In doing so, new hierarchical structures are defined which provide additional access capabilities to the segments involved.

• The basic mechanism used to build a logical relationship is to specify a dependent segment as a logical child, by relating it to a second parent, the logical parent.


Page 11: IMS concepts

Logical relationships(Cont.)


Page 12: IMS concepts

Secondary indexing


Page 13: IMS concepts

Secondary indexing(Cont.)

• Some reasons you might want to use secondary indices are:

– Quick access, particularly random access by online transactions, by a key which is not the primary key the database has been defined with.

– Access to the index target segment without having to negotiate the full database hierarchy

– Ability to process the index database separately, for example by a batch process that needs to process only the search fields


Page 14: IMS concepts

Elements of the physical implementation


Page 15: IMS concepts

Segments, records, and pointers


Page 16: IMS concepts

Database organization types


Page 17: IMS concepts

Database technologies

• IMS Database: often referred to as “DL/I database” or colloquially as Full Function databases

• IMS DEDBs: the data entry database, often referred to colloquially as “Fast Path Databases”

• IMS MSDB: Main storage databases

• IBM Db2: provides relational databases


Page 18: IMS concepts

Data Evaluation

A master file database might have records like these (slashes (/) show field boundaries). There are five records for John Smith and four for Homer Simpson. Hierarchical databases can group all the data for John Smith into one record with two parts.


Page 19: IMS concepts

Data Evaluation(Cont.)

The first segment is called a root segment (or just a root). The second segment is called a dependent or child segment. The root segment is a parent of the CLASSES segment. We can reduce the size of our records by removing the name field from most of them and storing it in the root segment. In the picture below, a slash indicates a field boundary and bold double slashes (//) indicate a segment boundary.


Page 20: IMS concepts

Data Evaluation(Cont.)

• Advantages to organizing data in this way:

– Less space is needed to store the data. We need only one name field for each student rather than one name field per class taken by the student.

– Retrieval is quicker. You only need one I/O to get all of John Smith’s class information rather than four, because all of his data is on one record.

– Retrieval is more flexible. If all you need is a list of student names, you can read only the root segments. In this simple database, two I/Os guarantee you get all the students with no duplication. In the master file database all records must be read, plus an algorithm is needed to determine when the program is done reading John Smiths.

– Security is better. If you want everyone to see a list of students, but not everyone to see what classes students are taking, IMS can do that. The master file allows full access or no access at all.


Page 21: IMS concepts

Describing an IMS database: the DBD


Page 22: IMS concepts

Student DBD

• Let’s use our existing knowledge to code a partial DBD for our student database.

• Here is a single record from our master file database. – SMITH, JOHN //ENGLISH COMP/A101/MR. CHIPS/

• Here’s how we’ve chosen to divide the record:

– Root segment: 20 bytes of text, full name.

– Child segment: total length of 36 bytes, as:
  • 17 bytes of text, course name;
  • 4 bytes of text, building indicator and room number;
  • 15 bytes of text, instructor name.
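The DBD listing for this slide did not survive the transcript. Based on the layout above, the partial DBD might look something like this (the DBD name STUDBD, ddname STUDDB, access method, randomizer parameters, and field names are hypothetical; the segment names STUDENT and CLASINFO are taken from the following slide):

```
DBD     NAME=STUDBD,ACCESS=HDAM,RMNAME=(DFSHDC40,1,100,2000)
DATASET DD1=STUDDB,SIZE=4096
SEGM    NAME=STUDENT,PARENT=0,BYTES=20
FIELD   NAME=(STUNAME,SEQ,U),BYTES=20,START=1,TYPE=C
SEGM    NAME=CLASINFO,PARENT=STUDENT,BYTES=36
FIELD   NAME=(COURSE,SEQ,U),BYTES=17,START=1,TYPE=C
DBDGEN
FINISH
END
```

Only the key fields are described with FIELD statements, matching the points made on the next slide.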


Page 23: IMS concepts

Student DBD(Cont.)

• Segment or field names can be up to eight alphanumeric characters.

• Segment or field names do not have to be the same as COBOL field names.

• A segment may be all key.

• The key does not have to be in the first few bytes.

• No fields are described except key fields.

• The “PARENT=STUDENT” field tells us the CLASINFO segment is a child of the STUDENT segment. A root segment has “PARENT=0” because it has no parent.


Page 24: IMS concepts

IMS data storage: the IMS record

Here are John Smith’s five records again:

Here is John Smith’s one IMS record again:

The definition of an IMS record is “a single root segment and all its dependent segments.” We can see the IMS record is of variable length, depending on how many dependent segments it has. You can request only one dependent segment of the many John Smith may have. To do so, you must provide the key of the child segment with the read request.


Page 25: IMS concepts

IMS Data Storage Rules

• You can have multiple child segments for a given parent. Any segment with children can have none, one, or many children attached to it. In our class database John Smith takes several classes.

• If a parent does not exist, its children cannot exist either. The DBD describes to IMS what could be, not what is. This means we could also have a student named Roger Ramjet, but we don’t have to. If Mr. Ramjet is not on our database, we cannot have segments describing what classes he is taking.

• If a parent segment is deleted all of its children are deleted also. This follows the same logic as the previous rule. If we delete Homer Simpson from the database all his class segments must also be removed. We must assume if he is not at school he is not in any classes.

• Before updating any record the program must do a GET HOLD (GH or GHU) for that record. The reasons for this are quite simple. If the record isn’t on the database you can’t update it. One of the two things a GH does is to make sure the record you want to update actually exists.


Page 26: IMS concepts

Accessing an IMS database: Existing Records

• When accessing an IMS database for anything but an initial load you need to know a concept called position. IMS establishes its position in the database based on what it was last asked to do.

• When you open the dataset IMS has no position in the database. IMS will establish position for itself whenever a call is made to the database.

• You may do either qualified calls or non-qualified calls once IMS has established position:

– Qualified calls (e.g. GET) return only the segment with the same key as your request.

– Non-qualified calls (e.g. GET NEXT) act on the next segment in the database’s hierarchy, regardless of key. The next segment is determined by position.


Page 27: IMS concepts

Accessing an IMS database: Existing Records(Cont.)

• Let’s do a run through a set of calls to see how this works. We’ll open the database, then issue the call GET “SMITH, JOHN”. The segments you have immediate access to are John Smith’s root segment and his first CLASS segment. From this point several things can happen based on IMS’s position within the database:

– For a qualified call (G), the next segment retrieved is determined by the call.

– For an unqualified call (GN), you’ll get the next child segment of John Smith’s if there is one. If there are no more child segments for John Smith, you’ll get a non-blank status code telling you there are no more child segments of that type.


Page 28: IMS concepts

Calls other than GET on Existing Records

• You can update or delete existing records or segments. You must issue a GET HOLD before you update or delete a segment to make sure it is there and to reserve access to the segment.

• The call to delete a segment is DLET. The call to update (replace) a segment is REPL. You must provide the key for the segment you are altering for both calls. To act on an entire record, specify the root segment and its key.


Page 29: IMS concepts

Accessing an IMS database: New Records

• A brand-new IMS database is no different from any other empty file. Because IMS imposes a structure on the database, though, it needs to keep track of where it has records and where it has free space. The first time an empty IMS database is accessed, it must be as an initial load.

• The initial load creates the space map IMS needs. It then loads all the records you provide.

• IMS needs at least one record to initialize a database. Of course, if you have more than one record IMS loads everything you give it, as long as the records match the description given in the DBD.

• If the database has records in it, an ADD or ISRT call adds the new record or segment to the database. Whether it is a segment or record being added depends on how the instruction is coded and what data is being passed to IMS. The ADD call requires all key information for all segments affected by the call.

• If you have an existing IMS database, do not try to load it. When IMS loads a database it assumes the file is unformatted. When IMS builds the space map, it treats the entire database as if it has no records in it. If there were records, you lost them.


Page 30: IMS concepts

Control Blocks

• When you create an IMS database, you must define the database structure and how the data can be accessed and used by application programs.

• These specifications are defined within the parameters provided in two control blocks, also called DL/I control blocks:

– Database description (DBD)

– Program specification block (PSB)

• Information from the DBD and PSB is merged into a third control block, the application control block (ACB).


Page 31: IMS concepts

IMS data storage: DBD

• A non-IMS dataset must specify both a record size and a BLKSIZE (or CI size in VSAM). Record sizes aren’t needed in IMS because the DBD tells IMS how long each segment is. Let’s look at our DBD again:

• Each SEGM statement in the DBD names a segment. The BYTES operand of the SEGM statement tells IMS how many bytes of data are in that segment.


Page 32: IMS concepts

IMS data storage: DBD(Cont.)

• IMS doesn’t need the block length field either, because it’s also coded in the DBD.

• On the DATASET statement is an operand, SIZE. This specifies the OS BLKSIZE of the file. A mutually exclusive operand, BLOCK, provides the same information in a slightly different way. Using SIZE is recommended because you get better control over the BLKSIZE of your file.
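As a sketch of the point above, a DATASET statement using SIZE might look like this (the ddname, device type, and block size are hypothetical):

```
DATASET DD1=HDAMDBDP,DEVICE=3390,SIZE=8192
```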


Page 33: IMS concepts

IMS data storage: DBD(Cont.)

• You should never code an LRECL or BLKSIZE when creating an IMS dataset, because the DBD fully describes the dataset. JCL to load a new IMS database should have the DD statement look something like this:
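The JCL image is missing from the transcript. A sketch of such a DD statement follows (the dataset name and space values are hypothetical); note that no LRECL or BLKSIZE is coded, since the DBD supplies that information:

```
//HDAMDBDP DD DSN=IMS.HDAMDBDP.DATA,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(50,10))
```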


Page 34: IMS concepts

Accessing your database

• You can tell IMS which segment or segments you want the user to see.

• A PSB must be coded to allow access to your database.

• PSB is an abbreviation for Program Specification Block. A PSB contains one or more PCBs, or Program Communication Blocks. A PSB usually contains one PCB for each database (DBD) accessed. Here is an example of a PSB:
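The PSB listing did not survive the transcript. A minimal sketch for the student database might look like this (the PSB name, LANG value, and KEYLEN are assumptions; KEYLEN=37 assumes a 20-byte root key plus a 17-byte child key):

```
PCB    TYPE=DB,DBDNAME=STUDBD,PROCOPT=GP,KEYLEN=37
SENSEG NAME=STUDENT,PARENT=0
SENSEG NAME=CLASINFO,PARENT=STUDENT
PSBGEN LANG=COBOL,PSBNAME=STUPSB
END
```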


Page 35: IMS concepts

How to read a PSB

• On the PCB statement is the name of the database’s DBD, a PROCOPT field, and a KEYLEN field.

• The PROCOPT field is one way of allowing or preventing access to the database. PROCOPT=GP means this PSB allows the database to be accessed as read only, and can get all segments for a given parent at once.

• Common PROCOPTs you may see are L (load), G (read), GO or GON (read only), and A (any access).

• No access is allowed with “L” except to load an empty database.

• A PROCOPT of “A” is understood to be update access.


Page 36: IMS concepts

How to read a PSB(Cont.)

• The value of KEYLEN is the number of bytes needed by IMS to store the largest key path of this database.

• What does that mean exactly?

• Recall the DBD’s segment description. While not all segments have keys, most of them do. IMS doesn’t just use the key of the segment we want; it also uses the key of any segment above it in the hierarchy. Thus IMS has a work area to store all the keys for any given qualified call. This work area must be able to handle the largest size of the combined keys for any request.


Page 37: IMS concepts

The PSB: KEYLEN parameter

• Here again is the HDAMDBDP DBD, minus the assembler instructions:
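The DBD listing is missing from the transcript. A sketch consistent with the KEYLEN arithmetic below (the key lengths of 11 and 6 and the names DEPSEG1/DEPSEG2 come from the slide; segment byte counts, field names, and randomizer parameters are hypothetical):

```
DBD     NAME=HDAMDBDP,ACCESS=(HDAM,OSAM),RMNAME=(DFSHDC40,5,500,2000)
DATASET DD1=HDAMDBDP,SIZE=8192
SEGM    NAME=ROOTSEG,PARENT=0,BYTES=40
FIELD   NAME=(ROOTKEY,SEQ,U),BYTES=11,START=1,TYPE=C
SEGM    NAME=DEPSEG1,PARENT=ROOTSEG,BYTES=20
SEGM    NAME=DEPSEG2,PARENT=ROOTSEG,BYTES=30
FIELD   NAME=(DEP2KEY,SEQ,U),BYTES=6,START=1,TYPE=C
```

The longest key path here is the 11-byte root key plus the 6-byte DEPSEG2 key, giving 17.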

• This is a simple hierarchy; all segments have the same parent. The KEYLEN parameter for this database would be 17 (11 for the root, 0 for DEPSEG1, and 6 for DEPSEG2).


Page 38: IMS concepts

Accessing your IMS database-Batch

• To access an IMS database in a batch job you need two things: a DBD (to describe the database) and a PSB (to describe how to access the database). Both of these modules exist in load libraries. The process to get a DBD into its proper load library is a DBDGEN. For a PSB the process is a PSBGEN.

• Once the DBDGEN and PSBGEN have been done, you are ready to access your database. To access the libraries where these two components are stored, we use the IMS DD statement, like this:
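The JCL image is missing; the IMS DD is typically a concatenation of the DBD and PSB load libraries (the dataset names here are conventional placeholders, not the slide’s actual names):

```
//IMS      DD DSN=IMS.DBDLIB,DISP=SHR
//         DD DSN=IMS.PSBLIB,DISP=SHR
```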


Page 39: IMS concepts

Accessing your IMS database-Batch(Cont.)

• IMS must be able to allocate the database datasets. There are two ways IMS can allocate your database: dynamically or statically. To allocate your database statically, specify a DDname and the name of the database dataset. The DDname must be the same as that specified in the DBD. For our HDAMDBDP database, the JCL statement looks like this:
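The JCL image is missing from the transcript. A sketch of the static allocation (the dataset name is hypothetical; the ddname matches the name specified in the DBD):

```
//HDAMDBDP DD DSN=IMS.HDAMDBDP.DATA,DISP=SHR
```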

• The IMS region obviously cannot have all database DD names coded in its JCL. The IMS region allocates the file dynamically. To do that, another load module must be created. This is called “dynamic allocation creation” and is done by the IMS online team. The module tells IMS what dataset to allocate when your PSB is opened.


Page 40: IMS concepts

Accessing your IMS database from the IMS region

• The IMS online environment also requires a DBD and a PSB. However, IMS calls are processed differently in the region. A batch job step is processed as one transaction. Many people online may need to do the same function simultaneously. In online we have the equivalent of running several copies of the same batch job at the same time. Unless that is managed properly there is contention.

• Each transaction requires its own copy of the PSB and DBDs. As you can imagine, if hundreds of people are running the same transaction you’ll have hundreds of copies of the same blocks in memory. IMS gets around the problem by creating an application control block (ACB).

• Unlike PSBs or DBDs, an ACB can be shared by different transactions. An ACB is a load module consisting of a PSB and all needed DBDs for a given transaction. Building an ACB requires an ACBGEN.


Page 41: IMS concepts

IMS access methods


Page 42: IMS concepts

Data Storage

• IMS databases have data overhead.

• When you browse an ordinary data file, all you see are data records. When you browse an IMS dataset, though, you see every extra field IMS has added.

• The first record in a flat file is always a data record. In an IMS OSAM file, the first record is a space map of the IMS database.

• Each block in an HD database begins with a Free Space Element Anchor Point (FSEAP). This is maintained by IMS and tells IMS where the first free space element is in that particular block. FSEAPs are four bytes long.

• Each block in an HD database has a variable-length Anchor Point Area immediately following the FSEAP. This area contains one or more four-byte pointers. These pointers are called Root Anchor Points (RAPs) and are used differently in HIDAM and HDAM.

• Except for the space map block, all blocks in an HD database store their data immediately following the Anchor Point Area.


Page 43: IMS concepts

HDAM database — free space management


Page 44: IMS concepts

HDAM data access: the Randomizer

• IMS needs a method to know where its data is stored so it can be retrieved later. For HDAM databases a program called a randomizer provides the method.

• The randomizer takes the root key and, based on the number of blocks in the file, tells IMS where to put the record and where to find the record after it is written.

• Look at the RMNAME parameter on the DBD statement:
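The DBD statement image is missing from the transcript. A sketch follows; DFSHDC40 is an IBM-supplied general-purpose randomizer, and the numeric values are placeholders for the operands described on the next slide:

```
DBD NAME=HDAMDBDP,ACCESS=(HDAM,OSAM),RMNAME=(DFSHDC40,5,500,2000)
```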


Page 45: IMS concepts

HDAM data access: the Randomizer(Cont.)

• In order, the RMNAME parameters specify the following items:

– The randomizer’s name.

– RAPs per block. A RAP is a Root Anchor Point, a pointer to the location in the block of a given root segment.

– The total number of blocks in the database.

– A byte limit, the largest record size to be stored in any given block. Segments of records longer than this are immediately sent to overflow.

• The randomizer returns the same location for a given key every time, as long as the RAP and number-of-blocks operands remain the same.

• If you can’t get a record you know is in the database, chances are you’re using an incorrect version of the DBD.

• Since the number of blocks specifies the file size, changing the file size requires a reorg to change the DBD and rearrange the data.


Page 46: IMS concepts

HDAM — database in physical storage


Page 47: IMS concepts

HDAM advantages

• Fast random access to the root segments, via the randomizer

• Quick access to segments in a database record, as IMS attempts to store them in the same, or physically near, CI/block

• Automatic re-use of space after segment deletions

• Can have non-unique root segment keys


Page 48: IMS concepts

HDAM weaknesses

• It is not possible to access the root segments sequentially, unless you create a randomizing module that randomizes into key sequence, or incur the overhead of creating and maintaining a secondary index

• It is slower to load than HIDAM, unless you sort the segments into randomizer sequence (for example by writing user exits for the sort utility that call the randomizing module).

• It is possible to get poor performance if too many keys randomize to the same anchor point.


Page 49: IMS concepts

HIDAM data access

• HIDAM databases have two datasets: an index dataset and a data dataset.

• The index dataset is a VSAM KSDS whose records consist of the root segment’s key and a four-byte pointer field. This field is an address within the data dataset: the location of the start of the entire record.

• To get a HIDAM record, IMS reads the index KSDS as if it were a native VSAM file. It takes the address it finds in the index record and then goes to that location in the data dataset (ESDS or OSAM file).


Page 50: IMS concepts

HIDAM database in physical storage


Page 51: IMS concepts

Inserting Root Segments: HIDAM

• HDAM database records are inserted where the randomizer says they belong. For HIDAM, however, the process is different:

– The index is searched for the first higher root key.

– The new index record is inserted in ascending root sequence.

– After the index record is created, the record is stored in the database at the first available space. The index pointer is updated to reflect the storage location after the data is stored.


Page 52: IMS concepts

Inserting Dependent Segments

• After initial load, insertion of segments operates the same way in both HDAM and HIDAM. This may mean the database record becomes fragmented; in other words, some child segments may be stored quite far away from the root or related child segments. This is the main reason for doing reorgs.


Page 53: IMS concepts

Deleting Segments

• In HDAM, when any segment is deleted it is physically removed from the database. This allows free space in the file to be reused.

• In HIDAM when a segment is deleted it is marked as deleted but not removed from the database. A reorg is necessary to remove deleted records.


Page 54: IMS concepts

HIDAM advantages

• Ability to process the root segments and database records in root key sequence.

• Quick access to segments in a database record, as IMS attempts to store them in the same, or physically near, CI/block.

• Automatic re-use of space after segment deletions.

• Ability to reorganize the HIDAM primary index database in isolation from the main HIDAM database (but NOT the other way round).


Page 55: IMS concepts

HIDAM weaknesses

• Longer access path, compared to HDAM, when reading root segments randomly by key. There will be at least one additional I/O to get the HIDAM primary index record, before reading the block containing the root segment (excluding any buffering considerations).

• Extra DASD space is needed for the HIDAM primary index.

• If there is frequent segment insert/delete activity, the HIDAM primary database will require periodic reorganization to get all database records back into root key sequence in physical storage.


Page 56: IMS concepts

When to choose HDAM

• HDAM is recognized, in practice, to be the most efficient DL/I storage organization.

• If there is no requirement to process a large section of the database in key sequence, HDAM should be chosen.


Page 57: IMS concepts

When to choose HIDAM

• HIDAM is the most common type of database organization.

• It has the advantages of space usage like HDAM but also keeps the root keys available in sequence.

• With the speed of DASD the extra read of the primary index database can be incurred without much overhead. The most effective way to do this is to specify specific buffer pools for use by the primary index database, thus reducing the actual IO to use the index pointer segments.


Page 58: IMS concepts

Hierarchical Sequential (HS) access methods

• The two Hierarchical Sequential (HS) access methods, HSAM and HISAM, have now been superseded by the HD access methods.

• The HSAM access method does not allow updates to a database after it is initially loaded, and the database can only be read sequentially.

• The HISAM access method offers similar functionality to HIDAM, but has poorer internal space management than the HD access methods that would normally result in more I/O to retrieve data, and the need to reorganize the databases much more frequently.


Page 59: IMS concepts

When to choose HISAM

• HISAM is not a very efficient database organization.

• All HISAM databases can easily be converted to HIDAM.

• The only situation where HISAM may be desirable over a HIDAM database is when it is a root-segment-only database.


Page 60: IMS concepts

Operating system access methods


Page 61: IMS concepts

Operating system access methods

• Virtual Storage Access Method (VSAM). Two of the available VSAM dataset types are used: Key Sequenced Data Sets (KSDS) for index databases, and Entry Sequenced Data Sets (ESDS) for the primary data sets of HDAM, HIDAM, etc. The data sets are defined using the VSAM Access Method Services (AMS) utility program.

• Overflow Sequential Access Method (OSAM) — This access method is unique to IMS and is delivered as part of the IMS product. It consists of a series of channel programs that IMS executes to use the standard operating system channel I/O interface. The data sets are defined using JCL statements. As far as the operating system is concerned, an OSAM data set is described as a physical sequential data set (DSORG=PS)


Page 62: IMS concepts

When using OSAM

• Reasons you may want to use OSAM are:

– Sequential Buffering (SB) — With this feature, IMS will detect when an application is processing data sequentially and pre-fetch blocks it expects the application to request from DASD, so they will already be in the buffers in the IMS address space when the application requests segments in the block.


Page 63: IMS concepts

IMS batch performance: Buffers

• There are three types of datasets IMS can use as databases (OSAM, VSAM ESDS, and VSAM KSDS):

– HDAM can use an OSAM or VSAM ESDS file;

– HIDAM can use an OSAM or VSAM ESDS file for its data dataset, and must use a VSAM KSDS as its index dataset.

• Using buffers on datasets can greatly improve performance. Since VSAM and OSAM use different buffering techniques, you need to know which type of file you’re using.


Page 64: IMS concepts

IMS batch performance: Buffers(Cont.)

• The first read request for any file works something like this:

1. A read is requested via an OS call.
2. The DASD is read to retrieve the specified record. The dataset block containing the record is placed into an area of storage called a buffer.

The program is then free to process the record. Subsequent I/O requests add an additional step just prior to reading the DASD:

1. A read is requested via an OS call.
2. Any buffers assigned to the dataset are checked for the record. If the record is in the buffer it is returned from there. If the record is not in the buffers, only then…
3. The DASD is read to retrieve the specified record. The dataset block containing the record is placed into a buffer.


Page 65: IMS concepts

IMS batch performance: Buffers(Cont.)

• Buffers are assigned to all files opened by a program. The default number of buffers provided by the OS is five for a “flat” file (i.e. non-VSAM) and two for a VSAM ESDS. For a KSDS the OS provides one index buffer as well.

• When a program runs under IMS’s control, if no buffers are requested IMS provides three buffers for each IMS dataset specified. If any buffers are given to IMS, that is all you get.

• IMS does not use the OS-provided buffers for access to IMS databases. Batch jobs should not provide any OS or VSAM buffers for IMS databases. Doing so may actually hurt the job’s performance, because the unused buffers take up memory the program could use otherwise.


Page 66: IMS concepts

IMS batch performance: Buffers(Cont.)

• Batch job buffers are supplied to IMS via the DFSVSAMP DD statement. DFSVSAMP buffer parms have this format:

VSRBF=xxxxx,yy
IOBF=(xxxxx,yy,Y/N,Y/N,name)

• Bold characters must be specified, underlined characters are defaults, and plain text items are optional arguments. The “xxxxx” is the size of the buffer and the “yy” is the number of buffers to be used. The “name” can be anything up to eight characters, as long as the first character is a letter. Here is a typical DFSVSAMP parm:
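The example image did not survive the transcript. A hypothetical DFSVSAMP member might look like this (the sizes and counts are placeholders; the forty 16 KB OSAM buffers tie in with the best-fit example a few slides later):

```
VSRBF=4096,40
IOBF=(4096,30)
IOBF=(16384,40)
```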


Page 67: IMS concepts

IMS batch performance: Buffers(Cont.)

• Buffers designated for VSAM IMS datasets are numbers only, while OSAM buffers begin with IOBF. Keep these three things in mind when coding buffers:

– Each buffer can contain one block (OSAM) or CI (VSAM) from any file.

– An unnamed buffer area is not attached to any particular file, but can be used by any file with that BLKSIZE/CISIZE or a smaller one.

– Buffers specified here are applied to IMS databases only


Page 68: IMS concepts

IMS batch performance: Buffers(Cont.)

• You can see one of the buffer parameters is named. Named parameters should always be followed by a DBD= parameter so the pairs are obvious. Naming a buffer pool lets the user specify how many buffers are dedicated to the named database.
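The example discussed in the next bullet is missing from the transcript. A sketch of a named pool and its pairing follows; the IOBF statement builds an 8 KB, eighty-buffer pool named BP1 as the slide describes, but the exact form of the DBD= statement is an assumption based on this description, so treat it as schematic rather than exact syntax:

```
IOBF=(8192,80,N,N,BP1)
DBD=HDAMDBDP,BP1
```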

• In this example, the first IOBF line builds a pool called BP1 with eighty 8K OSAM buffers. The next line tells IMS to assign the pool to the HDAMDBDP database. The idea here is to keep a large portion of this database in memory while being used, reducing physical I/O to the file. No other database can use this pool because the pool is named and no other database references this name (BP1).


Page 69: IMS concepts

Two final comments on IMS buffers

• IMS buffers can be one of the following sizes: 512, 1024, 2048, or any multiple of 4096 bytes up to 32768 (32K).

• IMS uses a “best fit” method to decide what buffers a file uses. If a file has the same BLKSIZE or CISIZE as a buffer pool IMS assigns the file there. If the file has a BLKSIZE or CISIZE the parm doesn’t list, IMS uses the smallest listed buffer size large enough to hold an entire block.

• For example, with this parameter a file with a BLKSIZE of 13682 would use one or more of the forty 16K buffers.


Page 70: IMS concepts

DFSDDLT0

• This program is also called DLT/0.

• It is an IMS program you can use to make IMS calls (e.g. ISRT, DLET).

• It is the program of choice for initializing an IMS database.

• The basic strategy is to add a record using ISRT and then delete the just-added record using DLET.


Page 71: IMS concepts

IMS checkpoints: preserving application data integrity

• Long running programs should issue checkpoints based on the number of database calls made

• As a rule of thumb, initially issue batch checkpoints at about every 500 database calls

• You do not want to checkpoint too frequently, as there is an overhead in writing out all updates

• For applications running in Batch and BMP address spaces, there is also extended checkpoint functionality available


Page 72: IMS concepts

Locking: sharing IMS data between multiple tasks

• A deadlock occurs when two tasks each wait for a resource the other holds. IMS detects this, and will abnormally terminate (abend) the application it assesses has done the least work, backing out its updates to the last commit point.

• What IMS cannot detect is a deadlock between two applications where the two different resources the applications are trying to get are being managed by two separate resource managers


Page 73: IMS concepts

Locking: sharing IMS data between multiple tasks(Cont.)

• For example, CICS task A reads, and enqueues, a database record. CICS task B then issues a CICS ENQ for a resource, for example to serialize on the use of a TDQ. CICS task B then attempts to read the database record held by task A, and is suspended, waiting for it. CICS task A then attempts to serialize on the resource held by task B and is suspended. We now have a deadlock between tasks A and B. But neither IMS nor CICS is aware of the problem, as both can only see the half of the deadlock they are managing.

• The person designing an application that uses IMS databases needs to be aware of the potential problems with database deadlocks, and design the application to avoid them.


Locking: sharing IMS data between multiple tasks(Cont.)

• IMS supports three methods of sharing data between a number of application tasks:

• Program isolation (PI) — This can be used where all applications are accessing the IMS databases via a single IMS control region. This provides the lowest level of granularity for the locking, and the minimum chance of a deadlock occurring

• Block level data sharing — This allows any IMS control region or batch address space running on an OS/390 system to share access to the same databases. It uses a separate feature, the Internal Resource Lock Manager, IRLM

• Sysplex data sharing — Where a number of OS/390 systems are connected together in a sysplex, with databases on DASD shared by the sysplex, it is possible for IMS control regions and batch jobs to run on any of these OS/390 images and share access to the databases


Why is reorganization necessary

• To reclaim and consolidate free space that has become fragmented due to repeated insertion and deletion of segments.

• To optimize the physical storage of the database segments for maximum performance (get dependent segments that are in distant blocks, increasing physical I/O, back in the same block as the parent and/or root). This situation is normally the result of high update activity on the database.

• To alter the structure of the database, change the size of the database data sets, alter the HDAM root addressable area, add or delete segment types.


When to reorganize

• There are two approaches to deciding when to reorganize, reactive and proactive

• When you initially install the application and set up the databases, a lot of the reorganization will be done reactively, as performance and space problems manifest themselves

• As you develop a history of the behavior of the application and the databases, the scheduling of reorganization should become more proactive


When to reorganize(Cont.)

• The initial thing to look at is what the average and maximum online response times and batch run times are.

• Also look at the amount of database calls being processed.

• The solution to performance problems is normally an interactive process involving the database administrator, the application support function, and the operating system support function, as all three control areas that affect performance.

• The main things you will be doing when you look at the monitoring data are trying to minimize the physical I/O for each database access, and optimizing the free space available in the database.


Minimize physical I/O

• Making the best use of buffers in the IMS subsystem; the more requests for database access you satisfy from the buffers, the fewer physical I/Os are necessary

• Minimizing the number of physical I/Os when a segment does have to be retrieved from disk. For example, trying to place as many dependents as possible in the same block/CI as its parent, ensuring HDAM root segments are in the same block/CI as the RAP. This is where database reorganization and restructuring is used


Monitoring the databases

• The IMS monitor, to gather details of buffer usage and database calls over a specified time period in an IMS subsystem.

• The //DFSSTAT DD card, used in batch JCL to provide a summary of buffer usage and database calls. As there is very little overhead in including this (the details printed to the DD at region termination are accumulated by the IMS region controller whether they are output or not), it is normally worthwhile putting this in all batch jobs.

• Running the DB monitor on a batch job, to collect similar details to the IMS monitor in an online system. As there is an overhead on running this, it would normally only be turned on when specific problems are being investigated
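Adding the //DFSSTAT DD to a DL/I batch job step is a one-line change. A sketch, in which the program and PSB names are placeholders:

```jcl
//BATCH    EXEC PGM=DFSRRC00,PARM='DLI,MYPGM,MYPSB'
//DFSSTAT  DD  SYSOUT=*        STATISTICS PRINTED AT REGION TERMINATION
```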


Database recovery processing

• Database recovery, in its simplest form, is the restoration of a database after its (partial) destruction due to some failure

• Periodically, a copy of the data in the database is saved. This copy is normally referred to as a backup or image copy

• In addition to taking an image copy of the database(s), all changes made to the data in the database can be logged and saved, at least until the next image copy. These changes are contained in data sets called log data sets

• There is an IMS facility called database recovery control (DBRC) which provides database integrity and can be used to help ensure that there is always a recovery process available. The use of DBRC to control database backup and recovery is not mandatory, but is highly recommended


When is recovery needed ?

1. A DL/I batch update job fails after making at least one database update.

2. A failure has occurred on a physical DASD device.

3. A failure has occurred in a database recovery utility.

4. A failure of dynamic backout or of the batch backout utility has occurred.

5. An IMS online system fails and emergency restart has not been completed.


Online programs recovery

• IMS online transactions use dynamic backout to “undo” updates done in any incomplete unit of work

• Abending online programs are automatically backed out by the online system using the log records

• In addition, if the system should fail while an application program is active, any updates made by that program will be automatically backed out when the system is restarted

• At IMS restart time, if the emergency restart cannot complete the backout for any individual transactions, then the databases affected by those updates are stopped, and DBRC is requested to set the recovery needed flag to ensure that a correct recovery is completed before the database is opened for more updates

• In the case of dynamic backout failure, a batch backout or database recovery needs to be performed, depending on the reason for the backout failure


Overview of recovery utilities


Database image copy utility (DFSUDMP0)

• The Database Image Copy utility creates a copy of the data sets within the databases. The output data set is called an image copy

• It is a sequential data set and can only be used as input to the Database Recovery utility

• The Image Copy utility does not use DL/I to process the database; track I/O is used

• There is no internal checking to determine if all the IMS internal pointers are correct
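Rather than hand-coding the column-sensitive DFSUDMP0 control statement, the image copy job is normally generated from the DBRC skeletal JCL. A sketch, with a hypothetical DBD and DD name:

```
GENJCL.IC DBD(MYDBD) DDN(MYDBDDD)
```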


Database recovery utility (DFSURDB0)

• The database recovery utility program will restore a database data set

• Unlike the image copy utility, the recovery utility recovers one database data set per job step

• The database recovery utility program is executed in a DL/I batch region
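As a sketch, the recovery utility is invoked through the IMS region controller with a 'UDR' region type. The DBD name below is a placeholder, and GENJCL.RECOV can generate the complete job, including the image copy and log input DD statements:

```jcl
//RECOV    EXEC PGM=DFSRRC00,PARM='UDR,DFSURDB0,MYDBD'
```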


Database batch backout utility (DFSBBO00)

• Batch backout, in its simplest form, is the reading of log data set(s) to back out all database updates

• This is done by using the “before image data” in the log records to re-update the database segments

• No other update programs should have been executed against the same database(s) between the time of the failure and the backout

• The program operates as a normal DL/I batch job

• It uses the PSB used by the program whose effects are to be backed out
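A minimal sketch of the batch backout step, assuming a placeholder PSB name and log data set name; IMSLOGR is the DD through which backout reads the log from the failed execution:

```jcl
//BKOUT    EXEC PGM=DFSRRC00,PARM='DLI,DFSBBO00,MYPSB'
//IMSLOGR  DD  DSN=IMS.BATCH.SLDS,DISP=OLD     LOG FROM THE FAILED JOB
```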


Controlling IMS

• Monitoring the system

• Processing IMS system log information for analysis

• Executing recovery-related functions

• Modifying and controlling system resources

• Controlling data sharing

• Controlling log data set characteristics


Monitoring the system

• Monitor the status of the system on a regular schedule to gather problem determination and performance information.

• For example, to determine if you should start an extra message region, you might monitor the status of the queues during peak load.


Processing IMS system log information for analysis

• Using IMS system log utilities:

– File Select and Formatting Print utility: You can use the File Select and Formatting Print utility (DFSERA10) if you want to examine message segments or database change activity in detail

– Fast Path Log Analysis utility: You can use DBFULTA0 to prepare statistical reports for Fast Path, based on data recorded on the IMS system log

– Log Transaction Analysis utility: In an IMS DB/DC or DCCTL environment, you can use the Log Transaction Analysis utility (DFSILTA0) to collect information about individual transactions

– Statistical Analysis utility: In an IMS DB/DC or DCCTL environment, you can produce several summary reports using the IMS Statistical Analysis utility (DFSISTS0)

– Knowledge-Based Log Analysis: Knowledge-Based Log Analysis (KBLA) is a collection of IMS utilities that select, format, and analyze log records. KBLA also provides an ISPF interface


Executing recovery-related functions

• The /RMxxxxxx commands perform the following DBRC functions:

– Record recovery information in the RECON data set

– Generate JCL for various IMS utilities and generate user-defined output

– List general information in the RECON data set

– Gather specific information from the RECON data set

• Recommendation: Allow operators to use the /RMLIST and /RMGENJCL commands. Restrict the use of /RMCHANGE, /RMDELETE, and /RMNOTIFY commands because they update the RECON data set.
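For example, an operator can run a DBRC LIST function from the master terminal without updating the RECON:

```
/RMLIST DBRC='LIST.RECON STATUS'
```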


Program structure

• Application programs executing in an online transaction environment are executed in a dependent region called the message processing region (MPR)

• The programs are often called message processing programs (MPP)

• The IMS modules which execute online services will execute in the control region (CTL)

• The database services will execute in the separate DL/I address space (DLISAS)

• Batch application programs can execute in two different types of regions:

– Application programs which need to make use of message processing services or databases being used by online systems are executed in a batch message processing region (BMP)

– Application programs which can execute without message services execute in a DL/I batch region


Structure of an application program


Entry to application program

• When the operating system gives control to the IMS control facility, the IMS control program in turn passes control to the application program

• At entry, all the PCB-names used by the application program are specified

• The sequence of PCBs in the linkage section or declaration portion of the application program need not be the same as in the entry statement


Termination

• At the end of the processing of the application program, control must be returned to the IMS control program

• In COBOL, this is done with the GOBACK statement
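A minimal COBOL shape, assuming one I/O PCB and one database PCB. The names and PCB layouts are illustrative; a real PCB mask defines the individual fields rather than a single filler item.

```cobol
       LINKAGE SECTION.
       01  IO-PCB      PIC X(60).
       01  DB-PCB      PIC X(60).
       PROCEDURE DIVISION USING IO-PCB, DB-PCB.
      *    ... DL/I calls via CBLTDLI go here ...
           GOBACK.
```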


Calls to IMS

• Actual processing of IMS messages, commands, databases and services are accomplished using a set of input/output functional call requests

• The argument list will consist of the following parameters

– Function call

– PCB name

– IOAREA

– Segment search argument (SSA) (database calls only)
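A sketch of such a call from COBOL, retrieving a root segment by key. The segment and field names are invented for illustration, and the SSA layout (8-byte segment name, qualification in parentheses) is only indicative:

```cobol
           MOVE 'GU  '                        TO DLI-FUNC
           MOVE 'CUSTOMER(CUSTNO  =00001234)' TO SSA-1
           CALL 'CBLTDLI' USING DLI-FUNC, DB-PCB, IO-AREA, SSA-1
```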


PCB mask

• A mask or skeleton database PCB must be provided in the application program

• One PCB is required for each data structure

• As the PCB does not actually reside in the application program, care must be taken to define the PCB mask as an assembler DSECT, a COBOL linkage section entry, or a PL/I based variable

• The PCB provides specific areas used by IMS to inform the application program of the results of its calls

• At execution time, all PCB entries are controlled by IMS

• The PCB masks for an online PCB and a database PCB are different


Application PSB structure


Database PCB mask


Status code handling

• After each IMS call, a two-byte status code is returned in the PCB which is used for that call

• Three categories of status code:

– The blank status code, indicating a successful call

– Exceptional conditions and warning status codes from an application point of view

– Error status codes, specifying an error condition in the application program and/or IMS
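A typical (sketched) checking pattern after a retrieval call. The blank, 'GE' (segment not found), and 'GB' (end of database) codes are standard; the paragraph names are illustrative:

```cobol
           EVALUATE PCB-STATUS
               WHEN SPACES  CONTINUE                *> successful call
               WHEN 'GE'    PERFORM NOT-FOUND-RTN   *> exceptional condition
               WHEN 'GB'    PERFORM END-OF-DB-RTN   *> end of database
               WHEN OTHER   PERFORM ERROR-RTN       *> error status code
           END-EVALUATE
```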


IVP sample application table

• The SDFSISRC target library contains the source for programs, PSBs, DBDs, and MFSs, and other supporting materials that are used by the application.


IMS sample application parts


Testing the system


Message format service overview

• Through the message format service (MFS), a comprehensive facility is provided for IMS users of 3270 and other terminals/devices

• MFS has three major components:
– MFS language utility
– MFS pool manager
– MFS editor

• The MFS language utility is executed offline to generate control blocks and place them in a format control block data set named IMSVS.FORMAT

• The IMS message format service (MFS) is always used to format data transmitted between IMS and the devices of the 3270 information display system


Database recovery control (DBRC)

• DBRC includes the IMS functions which provide IMS system and database integrity and restart capability

• DBRC records information in a set of VSAM data sets called RECONs

• Two of these RECONs are a pair of VSAM clusters which work as a set to record information

• A third RECON can be made available as a spare

• If one becomes unavailable, the spare will be activated if it is available


RECON information

• IMS records the following information in the RECON:
– Log data set information
– Database data set information
– Event information:
  • Allocation of a database
  • Update of a database
  • Image copy of a database
  • Abend of a subsystem
  • Recovery of a database
  • Reorganization of a database
  • Archive of an OLDS data set


RECON data sets

• The RECON data set is the most important data set for the operation of DBRC and data sharing

• The fundamental principle behind the RECON data set is to store all recovery related information for a database in one place

• Using three data sets for the RECON causes DBRC to use them in the following way:

– The first data set is known as copy1. It contains the current information. DBRC always reads from this data set, and when some change has to be applied, the change is written first to this data set

– The second data set is known as copy2. It contains the same information as the copy1 data set. All changes to the RECON data set are applied to copy2 only after copy1 has been updated

– The third data set (the spare) is used in the following cases:
  • A physical I/O error occurs on either copy1 or copy2
  • DBRC finds, when logically opening the copy1 RECON data set, that a spare RECON has become available, and that no copy2 RECON data set is currently in use
  • The following command is executed: CHANGE.RECON REPLACE(RECONn)


RECON definition and creation

• The RECON data sets are VSAM KSDSs. They must be created by using the VSAM AMS utilities

• The same record size and CI size must be used for all the RECON data sets

• The RECON data sets should be given different FREESPACE values so that CA and CI splits do not occur at the same time for both active RECON data sets

• For availability, all three data sets should have different space allocation specifications

• The spare data set should be at least as large as the largest RECON data set
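An IDCAMS sketch for defining one RECON. The data set name, space, key, and record sizes here are illustrative and should be taken from the sample definitions supplied with your IMS release; SHAREOPTIONS(3 3) is shown because the RECONs must be shareable across subsystems.

```jcl
//DEFREC   EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DEFINE CLUSTER (NAME(IMS.RECON1)     -
         INDEXED                       -
         KEYS(24 0)                    -
         RECORDSIZE(4086 32600)        -
         FREESPACE(30 80)              -
         SHAREOPTIONS(3 3)             -
         CYLINDERS(10 0))
/*
```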


Initializing RECON data sets

• After the RECON data sets are created, they must be initialized by using the INIT.RECON command of the DBRC recovery control utility

• This causes the RECON header records to be written in both current RECON data sets
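After initialization, databases and their data sets are registered with further DBRC commands. A sketch in which all names and values are placeholders:

```
INIT.RECON
INIT.DB   DBD(MYDBD) SHARELVL(0)
INIT.DBDS DBD(MYDBD) DDN(MYDBDDD) DSN(IMS.MYDBD.DSET) GENMAX(3)
```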


Allocation of RECON data sets to subsystems

• To allocate the RECON data set to an IMS subsystem, the user must choose one of the following two ways:

– Point to the RECON data sets by inserting the DD statements in the start-up JCL for the various subsystems.

– Use dynamic allocation.


Allocation of RECON data sets to subsystems(Cont.)

• Dynamic allocation also allows recovery of a failed RECON data set, since DBRC dynamically de-allocates a RECON data set if a problem is encountered with it

• To establish dynamic allocation, a special member naming the RECON data sets must be added to IMS RESLIB or to an authorized library that is concatenated to IMS RESLIB. This is done using the IMS DFSMDA macro

• The appropriate RESLIB or concatenated RESLIBs must be included for each subsystem start-up JCL
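A sketch of the DFSMDA member for the RECONs. The data set names are placeholders, and the exact TYPE keywords should be checked against the DFSMDA macro description for your IMS release:

```
         DFSMDA TYPE=INITIAL
         DFSMDA TYPE=RECON,DSNAME=IMS.RECON1,DDNAME=RECON1
         DFSMDA TYPE=RECON,DSNAME=IMS.RECON2,DDNAME=RECON2
         DFSMDA TYPE=RECON,DSNAME=IMS.RECON3,DDNAME=RECON3
         DFSMDA TYPE=FINAL
         END
```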


Placement of RECON data sets

• Different volumes

• Different control units

• Different channels

• Different channel directors


RECON data set maintenance

• RECON backup:
– Operational procedures should be set up to ensure that regular backups of the RECON data set are taken
– These backups should be performed using the BACKUP.RECON DBRC utility command

• DELETE.LOG INACTIVE command:
– The only recovery related records in the RECON data set that are not automatically deleted are the log records (PRILOG and LOGALL)

• LIST.RECON STATUS command:
– Regular use should be made of the LIST.RECON STATUS command to monitor the status of the individual RECON data sets
– This command should be executed two or three times a day during the execution of an online system, to ensure that no problems have been encountered with these data sets
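The maintenance activities above map onto DBRC commands run in the recovery control utility (DSPURX00), with the appropriate RECON and output DD statements. A sketch of the command input:

```
BACKUP.RECON
DELETE.LOG INACTIVE
LIST.RECON STATUS
```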


RECON reorganization

• A plan for reorganizing the RECON data sets to reclaim this space on a regular basis must be considered

• If the online system is active, a reorganization of the RECON data sets should be scheduled:
– During a period of low RECON activity
– When no BMPs are running
– A LIST.RECON STATUS command must be issued from each online system which uses the RECON data sets, after the CHANGE.RECON REPLACE command is issued, in order to de-allocate the RECON before deleting and defining it again

• If the online system is not active, a reorganization of the RECON data sets should be scheduled:
– After a BACKUP.RECON has been taken
– When no subsystems are allocating the RECON data sets


Recreating RECON data sets

• The RECON data sets may need to be recreated, for instance:
– At a disaster recovery site
– After the loss of all the RECON data sets when no current backup is available

• Recreating the RECON can be a long and slow process. When designing procedures to handle this process, there are two basic alternatives:
– Restore the RECON from the last backup (if available) and update it to the current status required
– Recreate and re-initialize the RECON data sets


RECON reorganization steps

1. Enter the DBRC command to determine the status of the RECON data sets

2. Issue CHANGE.RECON REPLACE(RECON1) to discard RECON1. RECON2 will be copied to RECON3 and RECON1 will be discarded

3. Delete and define the VSAM cluster for the discarded RECON1. Before you can delete the cluster, you must ensure there are no other users, such as batch jobs, using the data set

4. Issue the CHANGE.RECON REPLACE(RECON2) command to discard RECON2. RECON3 will be copied to RECON1 and RECON2 will be discarded

5. Delete and define the VSAM cluster for the discarded RECON2. Before you can delete the cluster, you must ensure there are no other users using the data set

6. Issue the CHANGE.RECON REPLACE(RECON3) command to discard RECON3. RECON1 will be copied to RECON2 and RECON3 will be discarded

7. Delete and define the VSAM cluster for the discarded RECON3. Before you can delete the cluster, you must ensure there are no other users using the data set

• You can automate this process by putting all the previous steps in the same job, but you need to schedule the job at a time when there are no other subsystems using the RECON (or they must be able to unallocate the RECONs as they get discarded). You must also ensure that the discarded RECON is released by all the sharing online systems. With only one online subsystem and no other users (batch jobs or utilities), you can perform the /DIS OLDS command, for example, by executing an Automated Operator BMP step that issues the command

• You can reorganize a RECON data set online if you are using dynamic allocation for the RECON data set by using the CHANGE.RECON command


Summary of recommendations for RECON data sets

• Use three RECON data sets — two current and one spare.

• Define the three RECON data sets with different space allocations.

• Put the RECON data sets on different devices, channels, and so on.

• Use dynamic allocation.

• Do not mix dynamic allocation and JCL allocation.

• Define the RECON data sets for AVAILABILITY, but keep performance implications in mind.


RECON record types

• The relationship is never embedded in the records like a direct pointer, but can be built by DBRC using the information registered in each record type. This allows constant access of the related records through their physical keys

• There are six general classes of RECON record types:
1. Control records
2. Log records
3. Change accumulation records
4. Database data set (DBDS) group records
5. Subsystem records
6. Database records


RECON header record

• The header is the first record registered in the RECON data set by the INIT.RECON command

• The header identifies the data set as a RECON data set and keeps information related to the whole DBRC system

• The RECON header extension record identifies the individual RECON data sets. It is also used in the synchronization process of the two primary RECON data sets. It is created by the INIT.RECON command, together with the RECON header record


DB record

• The Database (DB) record describes a database

• There is one DB record in the RECON data set for each database that has been registered to DBRC through the use of the INIT.DB command

• A DB record includes:
– Name of the DBDS for the database
– Share level specified for the database
– Database status flags
– Current authorization usage

• A DB record is symbolically related to:
– The DBDS record for each database data set
– The SUBSYS record for each subsystem currently authorized to use the database


DBDS record

• The Database Data Set (DBDS) record describes a database data set

• There is a DBDS record in the RECON data set for each database data set that has been defined to DBRC using the INIT.DBDS command

• The DBDS record includes:
– Data set name
– DD name for the data set
– DBD name of the database
– Data set and database organization
– Status flags for the data set
– Information related to image copy or change accumulation
– Name of the JCL member to be used for GENJCL.IC or GENJCL.RECOV

• A DBDS record has the following relationship to other records:
– DB record for the database to which the data set belongs
– CAGRP record for the change accumulation group to which the database data set belongs (when a change accumulation group has been defined)
– ALLOC, IC, REORG, RECOV, AAUTH records


SUBSYS record

• The Subsystem (SUBSYS) record informs DBRC that a subsystem is currently active

• A SUBSYS record is created any time a subsystem signs on to DBRC

• A SUBSYS record is deleted when:
– The subsystem terminates normally
– The subsystem terminates abnormally, but without any database updates
– DBRC is notified of the successful completion of the subsystem recovery process (IMS emergency restart or batch backout)

• The SUBSYS record includes:
– ID of the subsystem
– Start time of the log
– Subsystem status flags
– DBDS name for each database currently authorized to the subsystem


PRILOG/SECLOG record

• The Primary Recovery Log (PRILOG) record or the Secondary Recovery Log (SECLOG) record describes a log RLDS created by an IMS DC or CICS/OS/VS online system, a batch DL/I job, or the archive utility

• A PRILOG record is created, together with a LOGALL record, whenever a log is opened

• If the subsystem is an IMS batch job and dual logging is in use, a SECLOG record is also created

• A PRILOG record is deleted in the following cases:
– The command DELETE.LOG INACTIVE deletes all the log records no longer needed for recovery purposes
– The command DELETE.LOG TOTIME deletes all the inactive log records older than the specified time
– The command DELETE.LOG STARTIME deletes a particular log record


PRISLDS/SECSLDS record

• The Primary System Log (PRISLDS) record or the Secondary System Log (SECSLDS) record describes a system log SLDS created by an IMS DC online system

• A PRISLDS record is created, along with a LOGALL record, whenever a system log is opened. A SECSLDS record can be created at archive time

• A PRISLDS record is deleted in the following cases:
– The command DELETE.LOG INACTIVE deletes all the log records no longer needed for recovery purposes
– The command DELETE.LOG TOTIME deletes all the inactive log records older than the specified time
– The command DELETE.LOG STARTIME deletes a particular log record


PRIOLD/SECOLD record

• The Primary OLDS (PRIOLD) record and the Secondary OLDS (SECOLD) record describe the IMS DC Online Data Sets (OLDS) defined for use.

• Whenever an OLDS is defined to IMS DC, the PRIOLD record is updated. If IMS dual logging is in use, the SECOLD record is also updated


IC record

• The Image Copy (IC) record describes an image copy output data set

• This record can be created:
– Automatically, when the image copy utility is executed to create a standard image copy
– With the NOTIFY.IC command, when a standard image copy has been created with DBRC=NO
– With the NOTIFY.UIC command, when another nonstandard image copy has been created
– In advance, and reserved for future use with the INIT.IC command, when the related DBDS record has the REUSE option
– By the HISAM reload utility, which creates an IC record pointing to the unload data set if the REUSE option is not being used for the DBDS under reload

• This record is deleted when the maximum image copy generation count is exceeded and its time-stamp is beyond the recovery period


REORG record

• The Reorganization (REORG) record informs DBRC that a reorganization of a particular DBDS has taken place

• With this information, DBRC will not allow recovery operations beyond the time-stamp of this reorganization

• The REORG record is created when:
– A HISAM or HDAM reload utility is successfully executed
– A prefix update utility is executed

• The REORG record is deleted when its creation time-stamp is older than the last IC associated with the database data set


RECOV record

• The Recovery (RECOV) record informs DBRC that the recovery of a particular DBDS has taken place

• With this information, DBRC knows when a time-stamp recovery has been performed

• The RECOV record is created when the IMS DB recovery utility is successfully executed

• A RECOV record is erased when its creation time-stamp is found to be older than the oldest IC record associated with the DBDS


AAUTH record

• The Authorization (AAUTH) record indicates the sharing status of a Fast Path Database Area


IMS logging

• The IMS logs are made up of a number of components:

– Log Buffers

– Online Log Data sets (OLDS)

– Write Ahead Data sets (WADS)

– System Log Data sets (SLDS)

– Recovery Log Data sets (RLDS)


IMS log buffers

• The log buffers are used for IMS to write any information required to be logged, without having to do any real I/O

• Whenever a log buffer is full, the complete log buffer is scheduled to be written out to the OLDS as a background, asynchronous task

• The OLDS buffers are used in such a manner as to keep available as long as possible the log records that may be needed for dynamic backout

• If a needed log record is no longer available in storage, one of the OLDS buffers will be used for reading the appropriate blocks from the OLDS

• The number of log buffers is an IMS start-up parameter, and the maximum is 999. The size of each log buffer is dependent on the actual blocksize of the physical OLDS

• The IMS log buffers now reside in extended private storage, however, there is a log buffer prefix that still exists in ECSA


Online log data sets (OLDS)

• The OLDS are the data sets which contain all the log records required for restart and recovery

• These data sets must be pre-allocated (but need not be pre-formatted) on DASD and will hold the log records until they are archived

• The OLDS is written by BSAM. OSAM is used to read the OLDS for dynamic backout

• The OLDS are made up of multiple data sets which are used in a wrap around manner

• At least 3 data sets must be allocated for the OLDS to allow IMS to start, while an upper limit of 100 is supported

• Only complete log buffers are written to the OLDS, to enhance performance

• Should any incomplete buffers need to be written out, they are written to the WADS. The only exceptions are at IMS shutdown, or in degraded logging mode when the WADS are unavailable; then the writes are done to the OLDS


OLDS dual logging

• Dual logging can also be optionally implemented, with a primary and secondary data set for each defined OLDS

• A primary and secondary data set will be matched and, therefore, the pair should have the same space allocation

• Secondary extent allocation cannot be used

• All OLDS must have the same blocksize, which must be a multiple of 2 KB (2048 bytes). The maximum allowable blocksize is 30 KB


Backward recovery

• When IMS or an application program fails, you need to remove incorrect or unwanted changes from the database

• Backward recovery or backout allows you to remove these incorrect updates

• The three types of backout are:
– Dynamic backout
– Backout during emergency restart
– Batch backout

• IMS automatically (dynamically) backs out database changes in an online environment when any of the following occurs:
– An application program terminates abnormally
– An application program issues a rollback call
– An application program tries to access an unavailable database
– A deadlock occurs


Archiving

• The current OLDS (both primary and secondary) is closed and the next OLDS is used whenever one of the following situations occurs:
– OLDS becomes full
– I/O error occurs
– MTO command is entered to force a log switch (such as /SWI OLDS)
– MTO command is issued to close a database (such as /DBR DB) without specifying the NOFEOV parameter
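For illustration, the MTO commands mentioned above might be entered as follows — the database name is a hypothetical placeholder:

```
/SWITCH OLDS          force IMS to close the current OLDS and switch to the next
/DBR DB PARTDB        close database PARTDB; without NOFEOV this also forces a log switch
```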

• DBRC is automatically notified that a new OLDS is being used. When this occurs, IMS may automatically submit the archive job

• IMS can be defined to archive at every log switch, or at every second log switch; the DBRC skeletal JCL that controls the archiving can also be defined to create 1 or 2 System Log Data Sets, and 0, 1, or 2 Recovery Log Data Sets


OLDS errors

• In the case of a write error, the subject OLDS (or pair of OLDS) will be put into a stopped status and will not be used again

• Information is kept in the RECON data set about the OLDS for each IMS system

• IMS issues messages when it is running out of OLDS
• During the use of the last available OLDS, IMS will indicate that no spare OLDS are available
• When all the OLDS are full, and the archives have not successfully completed, IMS will stop and wait until at least one OLDS has been archived. The only thing IMS will do is repeatedly issue messages to indicate that it has run out of OLDS and is waiting


Write ahead data sets (WADS)

• The WADS is a small direct access data set which contains a copy of committed log records which are in OLDS buffers, but have not yet been written to the OLDS

• If IMS or the system fails, the log data in the WADS is used to terminate the OLDS, which can be done as part of an Emergency Restart, or as an option on the IMS Log Recovery Utility

• The WADS space is continually reused after the appropriate log data has been written to the OLDS
• This data set is required for all IMS systems, and must be pre-allocated and formatted at IMS start-up when first used
• In addition, the WADS provide extremely high performance. This is achieved primarily through the physical design of the WADS
• All WADS should be dynamically allocated by using the DFSMDA macro, and not hardcoded in the control region JCL
• All the WADS must be on the same device type and have the same space allocation
• Regardless of whether there are single or dual WADS, there can be up to 10 WADS defined to any IMS (WADS0, WADS1, ..., WADS9)
• WADS0 (and WADS1 if running dual WADS) are active, and the rest remain as spares in case any active WADS has an I/O error. The next spare will then replace the one with the error
• Recommendation: To eliminate potential resource contention, place the WADS on a low-use device that is different from the device you use for the OLDS


System log data sets (SLDS)

• The SLDS is created by the IMS log archive utility, possibly after every OLDS switch

• The SLDS can contain the data from one or more OLDS data sets
• Information about SLDS is maintained by DBRC in the RECON data set
• Dual archiving to 2 SLDS data sets (primary and secondary) is supported
• Generally, the SLDS should contain all the log records from the OLDS, but if the user wants to omit types of log records from the SLDS, these can be specified within the log archive utility

• The blocksize of the SLDS is independent of the OLDS blocksize


Recovery log data sets (RLDS)

• When the IMS log archive utility is run, the user can request creation of an output data set that contains all of the log records needed for database recovery

• All database recoveries and change accumulation jobs will always use the RLDS if one exists, and this can considerably speed up any of these processes because the only contents of these data sets are database recovery log records

• The RLDS is optional, and you can also have dual copies of this, in a similar way to the SLDS


Overview of the logging process


IMS system generation process


• The IMS system generation process is used to build the IMS system: at installation time, at maintenance upgrades, and when standard application definition changes are required


The IMS generation process


Types of IMS generation


Types of IMS generation(Cont.)

• After your initial system definition, usually with an ALL gen, the ON-LINE, CTLBLKS, and NUCLEUS types of generation are used to implement most changes. These generations require a cold start of the IMS online system to take effect

• However, for certain modifications and additions, you can take advantage of the online change method using the MODBLKS generation. The changes are made active during the execution of the online system and do not require a restart operation


IMS generation macros: APPLCTN

• The APPLCTN macro allows you to define the program resource requirements for application programs that run under the control of the IMS DB/DC environment, as well as for applications that access databases through DBCTL
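A minimal sketch of how an application and its transaction might be defined — the PSB name, transaction code, and parameter values are illustrative placeholders, not taken from any real system:

```
APPLCTN  PSB=PARTPSB,PGMTYPE=TP               an MPP application
TRANSACT CODE=PART,MSGTYPE=(SNGLSEG,RESPONSE)
```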


IMS generation macros: BUFPOOLS

• The BUFPOOLS macro statement is used to specify default storage buffer pool sizes for the DB/DC and DBCTL environments. The sizes specified are used unless otherwise expressly stated for that buffer or pool at control program execution time for an online system.
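A hedged sketch of a BUFPOOLS statement — the sizes shown are placeholders; appropriate values depend on the workload and the IMS release:

```
BUFPOOLS PSB=24000,DMB=24000,PSBW=12000
```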


IMS generation macros: COMM

• The COMM macro is used to specify general communication requirements that are not associated with any particular terminal type. COMM is always required for terminal types supported by VTAM. It is optional for BTAM, BSAM, GAM, and ARAM terminal types. It can also be required to specify additional system options, such as support for MFS on the master terminal
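A sketch of a COMM statement for a VTAM-connected system — the application ID and buffer values are assumptions for illustration only:

```
COMM APPLID=IMSA,RECANY=(5,4096)      VTAM application ID; receive-any buffers
```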


IMS generation macros: DATABASE

• The DATABASE macro statement is used to define the set of physical databases that IMS is to manage. One DATABASE macro instruction must be specified for each HSAM, HISAM, and HDAM database

• Two DATABASE macro instructions are required for a HIDAM database: one for the INDEX DBD and one for the HIDAM DBD

• One DATABASE macro instruction must be included for each secondary index database that refers to any database defined to the online system.

• For Fast Path, a DATABASE macro statement must be included for each Main Storage Database (MSDB) and Data Entry Database (DEDB) to be processed
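For example, an HDAM database needs one statement, while a HIDAM database needs two — all DBD names here are hypothetical:

```
DATABASE DBD=PARTDBD,ACCESS=UP        HDAM: one statement
DATABASE DBD=ORDXDBD,ACCESS=UP        HIDAM: one statement for the INDEX DBD
DATABASE DBD=ORDRDBD,ACCESS=UP        HIDAM: one statement for the HIDAM DBD itself
```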


IMS generation macros: IMSCTF

• The IMSCTF macro statement defines parameters to IMS, and to the DBCTL environment
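A sketch — the SVC numbers and checkpoint frequency below are placeholders that must match the installation's actual values:

```
IMSCTF SVCNO=(,203,202),CPLOG=500000   IMS SVC numbers; checkpoint every 500,000 log records
```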


IMS generation macros: IMSCTRL

• The IMSCTRL macro statement describes the basic IMS control program options, the MVS system configuration under which IMS is to execute, and the type of IMS system definition to be performed. The IMSCTRL macro instruction must be the first statement of the system definition control statements.
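A sketch of the opening statement of a Stage1 deck — the gen type (here ALL) and the IMSID are illustrative:

```
IMSCTRL SYSTEM=(VS/2,(ALL,DB/DC)),IMSID=IMSA
```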


IMS generation macros: IMSGEN

• IMSGEN specifies the assembler and linkage editor data sets and options, and the system definition output options and features. The IMSGEN must be the last IMS system definition macro, and it must be followed by an assembler END statement.
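A sketch of the end of a Stage1 deck — the suffix is an arbitrary example; the MACLIB and PROCLIB operands control the MACLIB and PROCLIB updates:

```
IMSGEN SUFFIX=A,MACLIB=ALL,PROCLIB=YES
END
```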


IMS generation macros: SECURITY

• The SECURITY macro statement lets you specify optional security features to be in effect during IMS execution unless they are overridden during system initialization.
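A hedged sketch — the TYPE values shown assume RACF terminal security plus a transaction authorization exit; check the release documentation for the exact keywords:

```
SECURITY TYPE=(RACFTERM,TRANEXIT),SECCNT=2
```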


The IMS generation process


Stage1

• The IMS Stage1 job is a simple assembler step, with the SYSIN being the IMS macros

• The output of the IMS Stage1 includes:

– Standard assembler listing output with any appropriate error messages.

– IMS Stage2 input JCL, also for use as JCLIN.
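A Stage1 job can therefore be as simple as a single assembly step. The JCL below is a sketch only; the program name assumes High Level Assembler, and all dataset names are site-specific placeholders:

```
//STAGE1   JOB  ...
//ASM      EXEC PGM=ASMA90,PARM='DECK,NOOBJECT'
//SYSLIB   DD   DSN=IMS.GENLIB,DISP=SHR          IMS system definition macros
//SYSPRINT DD   SYSOUT=*                         assembler listing
//SYSPUNCH DD   DSN=IMS.STAGE2.JCL,DISP=OLD      Stage2 JCL / JCLIN output
//SYSIN    DD   DSN=IMS.STAGE1.SOURCE,DISP=SHR   the Stage1 macro deck
```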


Stage2

• The output of the Stage1 is then used as the JCL to run the Stage2 gen
• Depending on the Stage1 definitions within the IMSGEN macro, the Stage2 can be divided up into a single job with many steps, or many jobs with fewer steps. This is all dependent on how your site prefers to run this process
• The Stage2 will do all the module assembling and linking required to build all the necessary load modules, depending on what type of gen is being run
• In the case of an ALL gen, it is advisable to empty all target libraries first (RESLIB, MACLIB, MODBLKS, MATRIX), to ensure all modules generated are valid, and none are left lying around from previous options no longer in use
• An ALL gen will assemble/link almost all modules, whereas a MODBLKS gen will only assemble/link those modules required to define the programs, transactions, and databases


Stage2(Cont.)

• The output of the IMS gen includes:
– Executable load modules in datasets RESLIB and MODBLKS
– IMS options definitions in dataset OPTIONS
– Assembled object code, for use in later gens, in dataset OBJDSET
– Optionally, the runtime MACLIB dataset
– Optionally, the runtime PROCLIB dataset
– Optionally, the runtime IMS default MFS screens in datasets FORMAT, TFORMAT, and REFERAL


Stage2: MACLIB update

• A parameter in the IMS Stage1 macro IMSGEN (MACLIB=UTILITY/ALL) determines to what level the MACLIB dataset will be populated if the gen type is anything other than CTLBLKS or NUCLEUS. These two options provide:
– UTILITY: Populates MACLIB with only those macros necessary for IMS developers or user generations, such as PSB generation, DBD generation, or dynamic allocation generation
– ALL: Populates MACLIB with all IMS macros, except those necessary for an IMS system generation, and hence not required by IMS developers or users


Stage2: PROCLIB update

• A parameter in the IMS Stage1 macro IMSGEN (PROCLIB=YES/NO) determines whether the PROCLIB dataset is to be populated by this gen. The PROCLIB contains all IMS started task and JCL procedures, as well as the IMS PROCLIB members required by IMS and IMS utilities to provide startup options.


JCLIN

• JCLIN is an SMP/E process that tells SMP/E how to assemble and link any module.

• JCLIN should be run following any IMS gen, to ensure that SMP/E is always kept informed on any parameter changes in the IMS generation.


IMS security maintenance utility generation

• For security beyond that provided by default terminal security, you can use the various security options specified with the Security Maintenance utility (SMU).

• The utility is executed offline after completion of IMS Stage2 processing for system definition

• Its output is a set of secured-resource tables placed on the MATRIX dataset

• The tables are loaded at system initialization time, and, for certain options, work with exit routines and/or RACF during online execution to provide resource protection

• The IMS Security gen must always be run after any IMSGEN


Automating the IMS system generation process

• IMS is shipped to customers with the Installation Verification Procedure (IVP), which will help tailor the initial IMS system, with all provided JCL and sample input, including the IMS system generation jobs

• How often your site requires changes to the IMS Stage1 determines how often you need to run the IMS gen

• Many customers have put together elaborate or simple means of automating all the steps necessary to run an IMS gen, and maintain their various IMS systems.


IMS security overview

• When IMS was developed, security products like the Resource Access Control Facility (RACF) had not been developed, or were not in use by most installations

• It was common during this period to have each subsystem implement its own security.

• These internal IMS security facilities (for example, Security Maintenance Utility or SMU) are still available for protecting many IMS resource types and are used by some IMS installations today.


DBCTL security

• You can make IMS security choices in two system definition macros: SECURITY and IMSGEN. You can use the Resource Access Control Facility (RACF) to implement security decisions.

• IMS™ Version 12 is the last version of IMS to support the SECURITY macro.


Starting/Restarting the System


Checkpoint Freeze/Purge-DBCTL


For /CHE FREEZE, BMPs will not terminate until a sync point is reached. For /CHE PURGE, the BMPs wait until EOJ. BMPs may be canceled with a /STOP REGION ABDUMP command.

In the DBCTL environment, be aware that a CCTL cannot terminate until all of its users have reached a sync point. /STOP REGION ABDUMP can cancel these users if necessary.


DBCTL (re)starts

• Cold (/NRE CHECKPOINT 0)
• Warm (/NRE)
• Emergency (/ERE)


Cold Start-DBCTL


Warm Start-DBCTL


Emergency Restart-DBCTL


The database must be recovered and batch backout executed before starting the database and program


If additional OLDSs are available, IMS automatically switches to the next available OLDS, and no intervention is required.

If, however, the write error occurs on the last available OLDS in a single logging environment, IMS abends with a U0616. U0616 does not occur if the logging state is dual degraded mode, in which case logging is being done on a single data set


CCTL Connect to DBCTL

This in-doubt UOW was passed to the CCTL in the in-doubt list upon connection. The CCTL had no knowledge of this UOW, so it issued a RESYNC request of the type UNknown or COLDstart. DBCTL displays this message when such a request is received. The result is that this in-doubt UOW is not resolved. Talk to the system administrator of the CCTL to determine if this in-doubt UOW should be resolved. If so, the only way to resolve it is with the /CHA CCTLINDOUBT command.


/NRESTART command

• CHECKPOINT: Identifies the shutdown/restart sequence. CHECKPOINT 0 must be specified for a cold start.
• FORMAT: Specifies which queues or data sets should be formatted as part of the restart process
– RS: Restart data set
– WA: Write-ahead data set
– FORMAT ALL is only required at IMS initialization (first-time use of the system)
• Attention: A cold start performed after a processing failure could cause processing against uncommitted data. To ensure data integrity, be sure necessary backout or recovery operations have been performed before restarting.
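For example, the two common forms of the command might be entered as:

```
/NRESTART CHECKPOINT 0 FORMAT ALL    cold start at first-time system initialization
/NRESTART                            warm start after a normal shutdown
```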


/ERESTART command

• Three conditions that result in the need for an emergency restart are:
– Abnormal termination of IMS
– Abnormal termination of z/OS
– Forced termination of IMS using the z/OS MODIFY command
• CHECKPOINT
• COLDBASE: Indicates a cold start of the database component, while performing an emergency restart of the communications component
– If this keyword is used, the user is responsible for the recovery of the databases. The Fast Path areas will not be redone and no backouts of inflight DL/I databases will be performed. If in-doubts exist, a batch backout run with the cold start option will back out inflight DL/I data. This will place both DL/I and Fast Path data in the aborted state.
– If this keyword is not used, the database component will be warm started.
• COLDSYS: Indicates a cold start of both the database and the data communication components. An /ERE COLDSYS command differs from a /NRE CHECKPOINT 0 command in function
• FORMAT: Specifies which queues or data sets should be formatted as part of the restart process (ALL, RS, WA)
• NOBMP: Specifies that no backout of BMP updates occurs and all affected databases and programs are stopped. If NOBMP is not specified, all updates made subsequent to the last commit point by the active BMP programs are backed out of the database as part of the restart process
• OVERRIDE: Is required only to restart the system after a failure of power, machine, z/OS, or DBRC where IMS abnormal termination was unable to mark the DBRC subsystem record in RECON as abnormally terminated. IMS emergency restart will abort with message DFS0618A when DBRC indicates that the subsystem is currently active and that neither the OVERRIDE keyword nor the BACKUP keyword is present on the /ERESTART command
– Attention: Use of the OVERRIDE keyword on a currently running IMS system can lead to database and system integrity problems
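For illustration, some forms of the command — which keywords apply depends entirely on the failure situation:

```
/ERESTART              emergency restart; inflight work is backed out
/ERESTART COLDBASE     cold start the database component only
/ERESTART OVERRIDE     after a power or machine failure left the subsystem marked active in RECON
```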




Thanks!
