Summary - IMS Logging - 10
These notes contain portions of IMS manuals (which are available to the general public at no cost as free Internet downloads), either stated verbatim or modified to make those specific portions more understandable.
This document is uploaded in the best interest of anyone who wants to learn the practice of IMS Database Administration.
Although referred to as a data set, the OLDS is actually made up of multiple data sets that
wrap around, one to the other. You must allocate at least three, but no more than 100, data
sets for the OLDS.
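As a sketch of how this allocation is typically declared (the OLDS identifiers and buffer count here are illustrative), the OLDSs are defined to IMS with an OLDSDEF statement in the DFSVSMxx PROCLIB member, and each OLDS is referenced through a DD name of the form DFSOLPnn:
OLDSDEF OLDS=(00,01,02,03,04),MODE=SINGLE,BUFNO=20
This would make five OLDSs (DFSOLP00 through DFSOLP04) available for the wraparound.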
OLDS & Access Methods > IMS uses the Basic Sequential Access Method (BSAM) to write
log records to the OLDS, and the Overflow Sequential Access Method (OSAM) to read the
OLDS when IMS performs dynamic backout.
Dual Logging
With dual logging, there is a primary and a secondary OLDS in place of a single OLDS. IMS writes each log record to both, essentially duplicating the log.
When you use dual logging, an I/O error on either the primary or secondary data set causes IMS to close the non-error OLDS and mark the error OLDS in the Recovery Control (RECON) data set as having an I/O error and a CLOSE error. IMS then continues logging with the next available pair. For dual logging, the minimum number of data sets is three pairs, and the maximum number is 100 pairs.
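For dual logging, the same OLDSDEF sketch shown earlier (identifiers again illustrative) carries MODE=DUAL, with each pair referenced by DFSOLPnn for the primary and DFSOLSnn for the secondary:
OLDSDEF OLDS=(00,01,02),MODE=DUAL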
OLDS Operational Issues
When IMS detects that it is using the last available OLDS, it alerts the master terminal operator (MTO) that no additional OLDS is available. If this last OLDS also fills before archiving has freed at least one OLDS, IMS waits until OLDS space becomes available. IMS will not log to an OLDS containing active data, i.e. data that has not yet been archived. You must run the Log Archive utility to free the OLDS space. After IMS uses the last allocated OLDS, it reuses the first OLDS, provided it has been archived.
OLDS archiving jobs might not complete in the order in which the OLDSs were created. For example, an older OLDS may not have been archived yet, but a more recent one may have been. When this occurs, IMS issues message DFS3259I and uses the next available OLDS.
DFS3259I ONLINE LOG DATA SET NOT YET ARCHIVED FOR ddname
Explanation: The online log data set (OLDS), identified by ddname, would normally have been archived and be ready
for reuse. However, it was not archived.
System Action: IMS will use another available OLDS and continue processing.
System Operator Response: If an archive job for the specified log data set is not executing, submit an archive
job.
Programmer Response: None.
When all the OLDSs are full and the archives have not completed successfully, IMS stops and has to wait until at least one OLDS has been archived. The only thing IMS will do in this state is repeatedly issue a message indicating that it has run out of OLDS space, and wait.
The /STOP OLDS command stops and dynamically deallocates an OLDS. When stopped, that OLDS is no longer involved in the wraparound process.
Recommendation: Stop an OLDS when an error occurs that REQUIRES an OLDS RECOVERY.
Restriction: You cannot stop any OLDS when two or fewer OLDSs are currently available.
The /START OLDS command starts and dynamically allocates an OLDS.
IMS retains the status of an OLDS (in-use, stopped, and so on) from one restart to the
next.
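For illustration, the commands involved look like this (OLDS identifier 03 is arbitrary):
/DISPLAY OLDS (shows the status of the OLDSs, including in-use and stopped states)
/STOP OLDS 03 (stops and dynamically deallocates OLDS 03)
/START OLDS 03 (starts and dynamically allocates OLDS 03)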
The archival process (copying an OLDS into an SLDS) allows skipping some types of log records (using the NOLOG keyword). However, the SLDS must always contain those records needed for database recovery, batch backout, and IMS restart.
IMS dynamically allocates an SLDS during IMS restart whenever log data required for restart read processing is not available from an OLDS. The OLDS might be unavailable because it has been archived and one of the following is true:
• The OLDS has been reused.
• The PRIOLDS and SECOLDS records have been deleted from the RECON data set.
To allow IMS to dynamically allocate SLDSs, you must specify the SLDS device type through the Dynamic Allocation macro (DFSMDA). DBRC provides the data set name and volume information required for dynamic allocation.
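A minimal sketch of the DFSMDA source involved (the unit name is an assumption; DBRC supplies the data set name and volume at allocation time):
DFSMDA TYPE=INITIAL
DFSMDA TYPE=SLDS,UNIT=TAPE
DFSMDA TYPE=FINAL
END
The assembled member is placed in a library that IMS searches at restart so the SLDS can be allocated dynamically.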
Know more on: PRILOG & SECLOG, PRISLD & SECSLD, PRIOLDS & SECOLDS (the RECON record types that track the primary and secondary logs, SLDSs, and OLDSs, respectively).
Using an RLDS is more efficient than using an SLDS, and it is for this reason that DBRC tries to create JCL using the RLDS (when possible) in the contexts of database recovery and change accumulation.
NOTE: During each checkpoint, IMS creates or updates a checkpoint ID table. IMS uses this table during restart to determine from which checkpoint to restart the system.
For a good presentation/discussion of how the log data sets (OLDS/SLDS/WADS) are defined, configured, and managed over their life cycle, see the section “Specifying your choices for the log data sets” on page 91/453 of the IMS V9 Operations Guide.
RECONs
IMS uses the RECON data set in many situations:
• During warm starts and normal or emergency restarts, IMS consults the RECONs to determine whether the OLDS or the SLDS contains the most recent log data for each DBDS registered to DBRC.
• The RECON always shows the current status of each OLDS, including whether the OLDS has been archived.
• For the recovery utilities, DBRC selects the correct log data sets from the RECONs.
z/OS log data set / CQS system checkpoint data set / CQS structure recovery data set
-- Refer to pages 85/86 of the IMS V9 Operations Guide.
Automatic Archiving
By default, IMS archives each OLDS when it becomes full. However, you can control the archival frequency by specifying to IMS how many OLDSs must fill before IMS commences the archive.
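That threshold is the ARC= execution parameter of the IMS control region (commonly set in the DFSPBxxx member; the values here are illustrative):
ARC=0 (no automatic archiving; you archive manually)
ARC=2 (IMS waits until two OLDSs are full before initiating the archive)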
Manual Archiving
If it is sufficient to archive infrequently or irregularly, you can run the same Log Archive utility (DFSUARC0) manually, but you then need to monitor IMS logging (i.e. the status of the OLDSs). Execute the utility through a user-written job deck, or use GENJCL.ARCHIVE or /RMGENJCL to create the JCL that performs the archive.
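A hedged sketch of the DBRC command (the subsystem ID IMSA is illustrative); this generates archive JCL for the full, unarchived OLDSs of the named subsystem:
GENJCL.ARCHIVE ALL SSID(IMSA)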
Archiving is also useful for batch systems to free disk space if your SLDSs are on disk. Use
the Log Archive utility to copy an SLDS from DASD to tape.
Notes:
The Log Archive utility can create an RLDS or a user data set. The utility's COPY control statement specifies which log records are to be copied and into which data sets. For instance, RLDS creation involves copying only those records that are required for a database recovery.
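One plausible shape of the utility's SYSIN (the DD name UROUT1 is hypothetical, and the exact record-selection operands should be verified against the utilities reference for your release); this COPY statement copies log records whose field at the given offset matches X'50' into the data set referenced by UROUT1:
COPY DDNOUT1(UROUT1) RECORD(OFFSET(5) FLDTYP(X) VALUE(50) COND(E))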
The Log Archive utility can also be used to copy an SLDS or RLDS to a new data set. But the JCL for this must be set up manually, because GENJCL.ARCHIVE does not generate it; GENJCL.ARCHIVE can only generate the deck that is used in an OLDS-archival context.
Customizing Archiving
You can write user exit routines for the Log Archive utility in order to extract and copy certain types of log records to user data sets. For example, you can copy the log records required for restarting BMPs to a user data set.
To customize archiving, specify the entry points for your exit routine using Log Archive utility control statements (see the sketch after this list). The exit routine gets control when:
• The Log Archive utility is initialized.
• The utility reads a log record from the OLDS.
• The Log Archive utility terminates.
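A minimal sketch, assuming a routine named MYLGEXIT (hypothetical) that is link-edited into a library the utility can load from; the EXIT control statement in the utility's SYSIN names it:
EXIT MYLGEXIT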
Using DBRC to Track Batch Job Logs
The IMS online subsystem always uses DBRC to track its logs; for batch jobs, DBRC is optional. If you use DBRC for batch jobs, DBRC tracks which batch job created which SLDS.
Recommendation: Use DBRC for batch jobs to eliminate the need to manually keep track of
batch SLDSs.
You do not need to create a log for read-only (PROCOPT=G) batch jobs, but you do need to create a log for update jobs. For update jobs using DBRC, you cannot use DD NULLFILE or DD DUMMY in the JCL for the log data set.
Specify the use of DBRC during IMS system definition by using the DBRC keyword in the IMSCTRL macro. At execution time, you can use the DBRC= parameter in the DBBBATCH and DLIBATCH procedures to override the value specified during system definition. If you specify the FORCE keyword during system definition, you must use DBRC, except when you run the Log Archive (batch only), Log Recovery, or Batch Backout utilities.
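As an illustrative sketch (member, PSB, and data set names are hypothetical, and procedure parameters vary by installation), a DL/I batch update job running under DBRC with its log on IEFRDER might begin like this:
//UPDJOB EXEC DLIBATCH,MBR=MYPGM,PSB=MYPSB,DBRC=Y
//G.IEFRDER DD DSN=IMS.BATCH.LOG1,DISP=(NEW,CATLG),UNIT=TAPE
Note that IEFRDER points at a real data set, not DD DUMMY, as required for update jobs under DBRC.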
DBRC keeps track of all log data sets for those databases defined to DBRC. If you tell DBRC to generate the Job Control Language (JCL) to recover a database, it gathers the last image copy and only those log data sets required for recovery.
Note: DBRC does not verify the retention period of the DL/I log tapes, nor does it verify that the DL/I DASD logs are written to permanent DASD. Therefore, it is possible that a recovery may fail because a DL/I log cannot be found.
DBRC checks the program specification block (PSB) processing options (PROCOPT) to determine whether a log data set is required. If the PSB has an update PROCOPT, IMS does not allow the batch job to run if it has not specified an IEFRDER DD statement, and IEFRDER must contain a valid DSN, not a dummy. When the log is first opened, DBRC creates a PRILOG record with the DSN and volume serial of the log. As the first update is made to each database, the LOGALL record is updated with the DBD and data definition (DD) names. This is how DBRC knows which DL/I logs need to be included in a database recovery.
DBRC tracks the use of the OLDSs. When the currently allocated OLDS is filled, IMS swaps over to the next available OLDS. At the swap, DBRC generates and submits a job that copies the OLDS to an SLDS. DBRC records the creation of the SLDS along with its DSN, volume serial, and time stamp.
CHANGE ACCUMULATION (Utility DFSUCUM0)
What is it?
A change accumulation data set is a compact data set created from one or more IMS log data sets. The Change Accumulation utility
- extracts only those records related to recovery, and
- consolidates segment changes into one final change for each segment that was updated.
Recovering a database from an SLDS, or even from an RLDS per se, is difficult (or inefficient at best) for reasons like these:
1. SLDS - contains a record of system-wide activities, and so it contains a lot of information that is irrelevant from a recovery standpoint.
2. RLDS - though its data is recovery oriented, it covers several DBs, including the ones NOT needing the recovery. So, an excess again!!
Also, since either the SLDS or the RLDS stores database changes (the whole of the changed database record, I guess) in chronological order, recovery is interested only in the version of the database record that is closest to the time the data set was lost. For example, if a specific database record receives changes, say, 100 times (any segment occurrence anywhere in this record), then during recovery all the previous versions (i.e. 99 out of the 100, per the example) are irrelevant.
The Change Accumulation utility helps a database recovery by offering functions like:
1. Eliminating all non-database-change records
2. Honoring a purge date (or dates) to eliminate all database change records before that date
3. Sorting the acceptable database change records
4. Combining all database change records that update the same database physical record
5. Finding the most recent change in each part of an individual record
6. Saving only the most recent state of each changed part of each data set
7. Merging updates from different subsystems
Using the Change Accumulation utility periodically can therefore speed up database recovery time.
Alternatively, you can run the Database Change Accumulation utility only when the need for
recovery arises (just before running the Database Recovery utility). Running these two
utilities instead of just the Database Recovery utility can reduce the total time needed for
recovery, depending on how much unaccumulated log information exists.
An image copy of the specified database data set that is recorded in the RECON is usually a valid starting point for change accumulation records.
You can run the Change Accumulation utility with a valid log subset (?) at any time to reduce
data to a minimum.
When the most recent image copy is used as input to the Change Accumulation utility
and that image copy is a concurrent image copy, changes already made to the database by
active applications might be missing from the copy because the changes might not have been
physically written to the data set. These changes, however, have been written to the log. In
this case, it is necessary to go back to some earlier point in the logs to ensure that all
changes are applied. How far to go back depends on the type of database and which image
copy utility was used.
The point-in-time selected to start the Change Accumulation utility is called the purge time.
To realize the REUSE of a CA data set, REUSE must be specified (by way of INIT.CAGRP), and the change accumulation JCL must be produced through GENJCL.CA. When all the available CA data sets in the pool are used and the GRPMAX limit is hit, the next run of the CA utility will reuse the CA data set in that pool containing the oldest change records. What is really reused is not just the name of the pool data set, but its volume and physical space as well, as if it were an empty CA data set still belonging to the pool!!