Mainframe Interview Questions & Answers
To read and update records in place, a COBOL program opens the file in I-O mode, which stands for input-output. This mode allows you to read, write, and rewrite records in a file.
Here's a simple example of how you might set up the file control and open the file in COBOL:
SELECT OPTIONAL your-file ASSIGN TO 'yourfile.dat'
    ORGANIZATION IS SEQUENTIAL
    ACCESS MODE IS SEQUENTIAL
    FILE STATUS IS WS-FILE-STATUS.
*> later, in the PROCEDURE DIVISION:
OPEN I-O your-file.
*********************************************************************************************************
To submit JCL from a COBOL program, you can write the JCL statements to the JES internal reader (INTRDR):
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
    SELECT JCL-OUTPUT ASSIGN TO INTRDR
        ORGANIZATION IS LINE SEQUENTIAL.
DATA DIVISION.
FILE SECTION.
FD JCL-OUTPUT.
01 JCL-REC PIC X(80).
PROCEDURE DIVISION.
    OPEN OUTPUT JCL-OUTPUT.
    MOVE '//JOBNAME JOB (ACCT),''NAME''' TO JCL-REC.
    WRITE JCL-REC.
    MOVE '//STEP001 EXEC PGM=IEFBR14' TO JCL-REC.
    WRITE JCL-REC.
    MOVE '//DD1 DD DSN=...,DISP=(NEW,CATLG,DELETE),' TO JCL-REC.
    WRITE JCL-REC.
    MOVE '// SPACE=(TRK,(1,1),RLSE),' TO JCL-REC.
    WRITE JCL-REC.
    MOVE '// UNIT=SYSDA' TO JCL-REC.
    WRITE JCL-REC.
    CLOSE JCL-OUTPUT.
    STOP RUN.
Key Points:
• The ASSIGN TO INTRDR clause in the SELECT statement is critical: it directs the output to the internal reader instead of a regular file. In the JCL that runs this program, the INTRDR DD is typically allocated as SYSOUT=(*,INTRDR).
• Each JCL statement is moved into the JCL-REC field and then written to the JCL-OUTPUT file sequentially. Because the records become JCL, the // must land in column 1 of each record.
• Proper JCL syntax and structure must be maintained within the COBOL program to ensure that the job is submitted correctly.
This example assumes that your mainframe environment and JES setup allow this kind of operation. Always check with your system administrator for specific configurations and permissions.
*********************************************************************************************************
Job Control Language (JCL) parameters play a crucial role in defining how data sets are handled during
the execution of jobs on IBM mainframes. Here are details about some of the commonly used JCL
parameters:
1. DISP (Disposition)
The DISP parameter specifies the status of a dataset before, during, and after the job step execution. It has
three fields: (status, normal-disposition, abnormal-disposition).
• Status: Can be NEW (create a new dataset), OLD (exclusive use of an existing dataset), MOD (append to an existing dataset, creating it if it does not exist), or SHR (shared access to an existing dataset).
• Normal-disposition: Specifies what happens to the dataset if the step completes normally. Options include KEEP (retain without cataloging), CATLG (catalog the dataset), PASS (pass the dataset to a later job step), and DELETE.
• Abnormal-disposition: Specifies what happens if the step terminates abnormally. Options are
similar to those for normal disposition.
2. SPACE
The SPACE parameter defines the amount of space to allocate for a dataset. It is typically coded as (unit,(primary,secondary,directory-blocks)).
• Unit: Specifies the unit of allocation, such as CYL (cylinders) or TRK (tracks).
• Primary: The primary amount of space to allocate.
• Secondary: The additional amount allocated, in further extents, if the primary allocation is exhausted.
• Directory-blocks: The number of directory blocks, needed only for partitioned datasets (PDS).
3. UNIT
The UNIT parameter specifies the device type or device group on which the dataset resides or is to be allocated.
• Example: UNIT=SYSDA
4. DCB (Data Control Block)
The DCB parameter describes the dataset's record characteristics.
• RECFM: Record format (e.g., FB for fixed block, VB for variable block).
• LRECL: Logical record length.
• BLKSIZE: Block size.
5. DSN (Data Set Name)
The DSN parameter specifies the name of the dataset.
• Example: DSN=MY.DATA.SET
6. DDNAME
The DDNAME (data definition name) is the label coded on a DD statement; it is the name by which the program refers to the dataset at run time, rather than by the dataset name itself.
7. VOL (Volume)
The VOL parameter specifies the volume serial number where the dataset resides.
• Example: VOL=SER=123456
8. LRECL (Logical Record Length)
Specifies the length of each record.
9. RECFM (Record Format)
Defines the format of the records in the data set (e.g., FB for Fixed Block, VB for Variable Block).
This overview covers some of the essential JCL parameters. Each parameter must be correctly specified to
ensure the proper handling of datasets during job execution on an IBM mainframe.
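As a combined illustration, here is a hypothetical DD statement using several of these parameters together (the dataset name and values are illustrative only):
//OUTFILE DD DSN=MY.DATA.SET,DISP=(NEW,CATLG,DELETE),
// UNIT=SYSDA,SPACE=(TRK,(5,2),RLSE),
// DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)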
*********************************************************************************************************
Failing to close a cursor in a programming context, particularly in database operations, can lead to several
potential issues. Here are some of the consequences:
1. Resource Leakage: Cursors typically consume database resources. Leaving a cursor open
consumes memory and may hold other resources, such as locks on rows or tables depending on
the database and cursor type. Over time, this can lead to resource exhaustion, where the database
runs out of memory or reaches a limit on the number of open cursors it can manage.
2. Database Performance Degradation: Open cursors that are not closed can hold locks for longer
than necessary, leading to increased lock contention. Other transactions requiring locks on the
same rows or tables might have to wait, leading to slower performance and increased wait times
across the database.
3. Limits on Cursor Usage: Many databases have a limit on the number of cursors that can be open
at any time. If your application repeatedly opens cursors and does not close them, it may reach
this limit, and subsequent attempts to open new cursors could fail with an error.
4. Impact on Transaction Management: Cursors that are associated with specific transactions
may prevent those transactions from closing properly. This can lead to transactions being left
open, consuming resources and potentially interfering with the operations of other transactions.
5. Memory Usage: Each open cursor consumes some amount of memory. If a large number of
cursors are left open, this can lead to excessive memory usage, which might affect the overall
performance of the application and the underlying database server.
6. Potential Errors or Crashes: In some systems, particularly in environments with strict resource
constraints, leaving cursors open can lead to errors or application crashes due to resource
depletion.
To avoid these issues, it is a best practice to always close cursors as soon as they are no longer needed.
This is typically done in a finally block or similar exception handling structure to ensure that the cursor is
closed even if an error occurs during cursor use:
OPEN cursor_name;
-- Perform operations using the cursor
CLOSE cursor_name;
Ensuring cursors are properly closed is essential for maintaining the health and efficiency of both the
application and the database system.
In COBOL and PL/I, particularly in mainframe environments, the management of cursors, especially
related to database operations, requires explicit actions by the developer to ensure that resources are
properly handled. These languages do not automatically manage resources such as cursors in the same
way that more modern, higher-level languages do with constructs like garbage collection or context
managers. Here's how cursor management typically works in COBOL and PL/I:
COBOL
1. Opening and Closing Cursors: In COBOL, you explicitly declare, open, fetch from, and close
cursors. Cursors are associated with SQL statements prepared for interacting with databases. It’s
crucial to close each cursor explicitly when it's no longer needed to free up resources.
2. Example:
EXEC SQL DECLARE c1 CURSOR FOR SELECT * FROM EMPLOYEE END-EXEC.
EXEC SQL OPEN c1 END-EXEC.
PERFORM UNTIL SQLCODE NOT = 0
    EXEC SQL FETCH c1 INTO :EMPLOYEE-RECORD END-EXEC
    IF SQLCODE = 0
        PERFORM PROCESS-RECORD
    END-IF
END-PERFORM.
EXEC SQL CLOSE c1 END-EXEC.
3. End of Program: If a COBOL program ends without explicitly closing its cursors, whether they
are closed or not depends on the DBMS and the runtime environment. It's unsafe to rely on the
environment to close cursors; always close them explicitly.
PL/I
1. Cursor Management: Like COBOL, PL/I requires explicit management of database cursors.
Cursors must be declared, opened, fetched from, and closed. The responsibility to close the cursor
lies with the programmer.
2. Example:
EXEC SQL DECLARE C1 CURSOR FOR SELECT * FROM EMPLOYEE;
EXEC SQL OPEN C1;
EXEC SQL FETCH C1 INTO :EMPLOYEE_RECORD;
DO WHILE(SQLCODE = 0);
   CALL PROCESS_RECORD(EMPLOYEE_RECORD);
   EXEC SQL FETCH C1 INTO :EMPLOYEE_RECORD;
END;
EXEC SQL CLOSE C1;
3. Program Termination: Like COBOL, in PL/I, if a program terminates and leaves cursors open,
the underlying DBMS may eventually clean up these resources, but this cleanup might not be
immediate. Depending on the system's configuration, this could potentially lead to resource
locking issues or memory leaks.
General Advice
• Explicit Closure: Always explicitly close cursors in both COBOL and PL/I. This practice avoids
reliance on system behaviour and prevents resource leakage.
• Error Handling: Ensure that cursors are closed in error handling routines. If an error occurs
before the normal closure of a cursor, include cleanup logic in your error handling (see the sketch after this list).
• Resource Checks: Regularly check and tune the DBMS settings for maximum allowed open
cursors and other related resource limits to avoid running into system limits.
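A minimal COBOL sketch of the error-handling advice above (the paragraph and cursor names are illustrative):
EXEC SQL WHENEVER SQLERROR GO TO SQL-ERROR-EXIT END-EXEC.
*> ...normal DECLARE / OPEN / FETCH processing goes here...
SQL-ERROR-EXIT.
*> CONTINUE prevents a loop if the cleanup itself hits an SQL error
    EXEC SQL WHENEVER SQLERROR CONTINUE END-EXEC
    EXEC SQL CLOSE c1 END-EXEC
    EXEC SQL ROLLBACK END-EXEC
    DISPLAY 'SQL error, SQLCODE: ' SQLCODE
    STOP RUN.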
In summary, both COBOL and PL/I require disciplined resource management practices. Since neither
language provides automatic resource management features like those found in newer languages, diligent,
explicit management of resources such as cursors is essential to maintaining application reliability and
performance.
*********************************************************************************************************
To control the sequence of job execution in JCL (Job Control Language) on IBM mainframes, you can use
job scheduling tools or JCL itself to manage job dependencies. Here are two primary ways to ensure that
JCL2 runs first, followed by JCL1, and then JCL3:
1. Using JES Job Dependency Features
IBM's Job Entry Subsystems offer limited native support for job sequencing. JES3 provides Dependent Job Control (DJC) through //*NET control statements, which declare that a job must wait for its predecessors. Standard JES2 has no AFTER keyword on the JOB statement; common techniques there are TYPRUN=HOLD with operator release, having each job submit its successor to the internal reader (INTRDR) in its final step, or JES2 JOBGROUP support on newer z/OS releases.
Example using JES3 Dependent Job Control (the NETID value is illustrative):
//JCL2 JOB (ACCT),'RUN JCL2'
//*NET NETID=SEQ1,RELEASE=(JCL1)
//STEP1 EXEC PGM=...
...
//JCL1 JOB (ACCT),'RUN JCL1'
//*NET NETID=SEQ1,NHOLD=1,RELEASE=(JCL3)
//STEP1 EXEC PGM=...
...
//JCL3 JOB (ACCT),'RUN JCL3'
//*NET NETID=SEQ1,NHOLD=1
//STEP1 EXEC PGM=...
...
In this setup, JCL1 will only start after JCL2 has completed, and JCL3 will start after JCL1 has completed.
2. Using External Job Scheduling Tools
In many production environments, external job schedulers like IBM Tivoli Workload Scheduler (TWS), CA-
7, or Control-M are used. These tools provide robust capabilities to manage job dependencies, schedules,
and resources more efficiently than using JCL alone.
Example using a job scheduler:
You would define the jobs and their run sequence in the scheduler’s configuration. Each job is set to be
triggered after the successful completion of its predecessor.
Define Job JCL2
Define Job JCL1 with dependency on JCL2
Define Job JCL3 with dependency on JCL1
The scheduler handles the execution flow based on these dependencies.
Choosing a Method
• Complexity and Scale: If your environment involves complex job dependencies or large batches
of jobs, using a job scheduler is advisable. It offers more flexibility and control, along with
additional features like job monitoring and automatic retry on failure.
• Simplicity: For simpler, less frequent job sequences, JES2/JES3 dependencies might suffice.
Implementing Dependencies
When implementing dependencies, ensure that:
• Jobs are correctly named or identified so dependencies are clear and correctly configured.
• The environment (JES2, JES3, or external scheduler) is properly set up to handle these
dependencies.
• You have error handling or notification mechanisms in place in case a job fails, affecting
dependent jobs.
Using these methods, you can control the execution order of JCL1, JCL2, and JCL3 as required.
In JCL (Job Control Language) on IBM mainframes, the concept of setting a job's priority directly in the JCL
using a keyword like PRIORITY is not standard. Job priority in JCL is typically managed by the Job Entry
Subsystem (JES2 or JES3) and can be influenced by external job scheduling tools or system settings, but
not directly through a JCL keyword named PRIORITY.
However, you can influence the execution order or priority using a few different methods:
1. Class Parameter
The CLASS parameter in the JCL JOB statement can be used to categorize jobs into different workload
classes, which are often associated with different priorities. System administrators define these classes
and their associated priorities in the JES2 or JES3 configuration.
//JOBNAME JOB (ACCT),'DESCRIPTION',CLASS=A
//STEP1 EXEC PGM=PROGRAM1
Classes are predefined by systems personnel and can have different characteristics, including execution
priority, which can indirectly act to prioritize jobs.
2. Job Scheduling Tools
Advanced job scheduling tools like IBM Tivoli Workload Scheduler, CA-7, or Control-M provide
functionalities to set job priorities more granularly and manage complex dependencies. These tools are
capable of handling job priorities based on various conditions and can override the basic prioritization
provided by JES.
3. MSGCLASS
Though not directly related to execution priority, MSGCLASS controls the output class for sysout data, and
different MSGCLASS settings can sometimes be perceived as prioritizing jobs by controlling how and
where output is routed.
4. Using JES Commands
System operators can alter job priority dynamically using operator commands if the JES setup and
security policies permit such actions. These are typically executed on the JES console and can change job
execution order based on current system conditions and requirements.
5. JES2/JES3 Priority
In JES2 and JES3, you can also manage priorities through JES-specific parameters like job classes and
priority numbers in the job entry parameters. These settings usually require administrative access to
configure and manage.
//JOBNAME JOB (ACCT),'DESCRIPTION',PRTY=8
(Note: PRTY is valid in certain setups for setting priority but is not universally applicable and depends on
specific system configuration.)
Conclusion
While there is no direct PRIORITY keyword in standard JCL for setting job priorities, the combination of
job classes, system configurations, and external tools provides comprehensive control over job
prioritization and execution order. Always consult with your mainframe system administrator or check
your specific mainframe documentation, as implementations and available features can vary significantly
between installations.
*********************************************************************************************************
In the context of mainframe computing, a checkpoint is a mechanism used to save the state of a program
or a batch job at certain intervals. This is useful for recovering from failures without having to restart the
job from the beginning. Checkpointing helps in minimizing data loss and job execution time in case of
interruptions such as system failures, job abends, or other anomalies.
Checkpoint Constraints refer to the limitations or conditions under which checkpoints can be
implemented or are effective. These constraints might include:
1. Performance Overhead: Implementing checkpoints can introduce additional I/O operations as
data needs to be saved periodically. This can affect the overall performance of the system.
2. Storage Requirements: Storing checkpoint data requires extra disk space. This storage must be
managed efficiently to ensure it does not impact other system operations.
3. Complexity: Managing checkpoints, especially in large and complex systems, can add to the
complexity of the application. Developers need to carefully design the checkpointing logic to
ensure it does not interfere with the normal functioning of the application.
4. Recovery Time: While checkpoints reduce the recovery time in case of a failure, the time taken to
recover can still be significant depending on the frequency of checkpoints and the amount of data
to be processed.
Usage of Checkpoints: Checkpoints are used in scenarios where reliability and data integrity are critical.
Here are a few examples:
• Batch Processing Jobs: In large batch processing, checkpoints can be used to save progress at
intervals. If the job fails, it can resume from the last checkpoint rather than starting over.
• Database Systems: Databases use checkpoints to flush dirty pages from memory to disk,
ensuring that the database can recover to a consistent state in case of a system crash.
• Long-Running Computations: In long-running computations, such as large simulations or data
analysis tasks, checkpoints can be used to save the state periodically to prevent loss of significant
computational work.
Overall, checkpoints are a crucial part of designing resilient and reliable systems in the mainframe
environment, particularly when handling large volumes of data or critical processing tasks.
Here are examples for each of the scenarios mentioned where checkpoints might be utilized:
1. Batch Processing Jobs
Example: A financial institution processes millions of transactions daily. A batch job is scheduled to run
every night, aggregating these transactions into reports.
Checkpoint Use: The job is designed to checkpoint every 30 minutes. If the job fails after 2 hours due to a
system error, it can resume processing from the most recent checkpoint, losing at most 30 minutes of work
instead of starting over. This saves time and computational resources.
2. Database Systems
Example: A large e-commerce platform uses a database to manage orders and customer information.
Checkpoint Use: The database system is configured to perform a checkpoint every hour. This involves
writing all modified (dirty) pages from memory to disk. If there is a power failure, the database can be
restored to the state of the last checkpoint, ensuring data integrity and reducing recovery time.
3. Long-Running Computations
Example: A research institution runs a complex simulation that models climate change scenarios. The
simulation is expected to run continuously for several weeks.
Checkpoint Use: The system is set up to write a checkpoint once every day. Each checkpoint contains the
current state of the simulation, including variable values and intermediate results. If the simulation
process is interrupted due to hardware failure, the simulation can resume from the last saved state,
preventing the loss of weeks of computational effort.
In each of these cases, checkpoints help to manage the risk of data loss and reduce downtime in critical
systems. They are especially valuable in environments where jobs are long-running or deal with
significant amounts of data, typical of many mainframe applications.
Here are more specific examples involving JCL (Job Control Language) and COBOL programming on
mainframes, where checkpoints are used:
Example in a JCL Job
Scenario: A JCL job is running a nightly batch process to update customer records based on the day's
transactions.
Checkpoint Use: The JCL job might invoke a COBOL program that processes records and uses checkpoints
to handle failures. Here’s a simplified version of how it might look:
//JOBNAME JOB (ACCT),'UPDATE PROCESS',MSGCLASS=X
//STEP01 EXEC PGM=UPDATEREC,REGION=4M
//STEPLIB DD DSN=...load library...
//SYSUDUMP DD DSN=...dump dataset...,DISP=(NEW,DELETE)
//INFILE DD DSN=...input file...
//OUTFILE DD DSN=...output file...
//SYSCHKP DD DSN=...checkpoint dataset...,DISP=(NEW,DELETE,CATLG)
//SYSOUT DD SYSOUT=*
In this setup, SYSCHKP is the DD the COBOL program uses to write checkpoint data; DISP=(NEW,DELETE,CATLG) deletes the checkpoint dataset on normal completion but catalogs it after an abend so a restart can use it.
Example in a COBOL Program
Scenario: A COBOL program processes a large input file and updates a database, using checkpoints to
manage long-running operations and failures.
Checkpoint Use:
IDENTIFICATION DIVISION.
PROGRAM-ID. UPDATEREC.
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT INPUT-FILE ASSIGN TO INFILE.
SELECT OUTPUT-FILE ASSIGN TO OUTFILE.
SELECT CHECKPOINT-FILE ASSIGN TO SYSCHKP.
DATA DIVISION.
FILE SECTION.
FD INPUT-FILE.
01 INPUT-RECORD PIC X(100).
FD OUTPUT-FILE.
01 OUTPUT-RECORD PIC X(100).
FD CHECKPOINT-FILE.
01 CHKPT-REC.
   05 LAST-KEY-PROCESSED PIC X(10).
WORKING-STORAGE SECTION.
01 WS-LAST-KEY-PROCESSED PIC X(10) VALUE SPACES.
01 WS-EOF PIC X VALUE 'N'.
   88 END-OF-FILE VALUE 'Y'.
PROCEDURE DIVISION.
BEGIN.
    OPEN INPUT INPUT-FILE
         OUTPUT OUTPUT-FILE CHECKPOINT-FILE
    PERFORM UNTIL END-OF-FILE
        READ INPUT-FILE
            AT END
                SET END-OF-FILE TO TRUE
            NOT AT END
                PERFORM PROCESS-RECORD
                MOVE INPUT-RECORD(1:10) TO WS-LAST-KEY-PROCESSED
                WRITE CHKPT-REC FROM WS-LAST-KEY-PROCESSED
        END-READ
    END-PERFORM.
    CLOSE INPUT-FILE OUTPUT-FILE CHECKPOINT-FILE.
    STOP RUN.
PROCESS-RECORD.
*> Record-processing logic would go here; the key (assumed to be the
*> first 10 bytes of the record) is checkpointed after each record.
    CONTINUE.
In this COBOL example, the key of the last successfully processed record is written to the checkpoint file after each record, so a restart can skip records that were already handled.
*********************************************************************************************************
COBOL file status codes report the outcome of each file operation. Some common values:
• 00: Successful completion.
• 04: A READ completed, but the record length does not match the file's record description.
• 05: An OPEN of an OPTIONAL file succeeded, but the file was not present.
• 10: End of file reached. This occurs on a READ when there are no more records.
• 21: Sequence error. Records of an indexed file are not written in ascending key order.
• 22: An attempt to write a duplicate key to a file that is indexed with unique keys.
• 23: Record not found. This status code appears when attempting to access a record that does not
exist in the file.
• 30: Permanent I/O error. This generally indicates hardware or software issues.
• 34: Boundary violation. A record was written outside the boundaries of the file, for example
when no room is left on the device.
• 48: An attempt was made to write to a file that was opened in input mode.
• 91: Password or authorization failure.
• 92: Logical error, such as performing operations in an invalid sequence.
• 93: Resource not available. This might occur if there are too many files open concurrently.
Here is an example of using FILE STATUS to handle these conditions:
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT OPTIONAL INPUT-FILE ASSIGN TO 'EMPLOYEE.DAT'
ORGANIZATION IS LINE SEQUENTIAL
FILE STATUS IS WS-FILE-STATUS.
DATA DIVISION.
FILE SECTION.
FD INPUT-FILE.
01 EMPLOYEE-RECORD PIC X(100).
WORKING-STORAGE SECTION.
01 WS-FILE-STATUS PIC XX.
PROCEDURE DIVISION.
    OPEN INPUT INPUT-FILE.
    IF WS-FILE-STATUS NOT = '00'
        DISPLAY 'Error opening file, status: ' WS-FILE-STATUS
    ELSE
        PERFORM UNTIL WS-FILE-STATUS = '10'
            READ INPUT-FILE
                AT END
                    MOVE '10' TO WS-FILE-STATUS
                NOT AT END
                    DISPLAY EMPLOYEE-RECORD
            END-READ
        END-PERFORM
    END-IF.
    CLOSE INPUT-FILE.
In this example, WS-FILE-STATUS captures the status of file operations, allowing the program to handle
errors and end-of-file conditions gracefully.
*********************************************************************************************************
DB2 SQL error codes are essential for identifying and resolving issues encountered during SQL operations
in IBM's DB2 database management system. Below are some of the main and important DB2 SQL error
codes that database administrators and developers frequently encounter:
Common DB2 SQL Error Codes
• SQLCODE -104: SQL syntax error. This occurs due to incorrect SQL statements (e.g., missing
keywords, incorrect tokens).
• SQLCODE -204: Name not found. This error means the object (table, view, etc.) does not exist in
the database.
• SQLCODE -206: Column not found. This error is thrown when a specified column is not found in
any table of the SQL statement.
• SQLCODE -302: Input value too large. This occurs when the value of an input host variable is too
large (or otherwise invalid) for its target column.
• SQLCODE -305: Null indicator needed. This code is returned when an attempt is made to fetch a
NULL value into a variable that is not handled properly.
• SQLCODE -407: Null value not allowed. Occurs when trying to insert a NULL value into a column
that does not accept NULLs.
• SQLCODE -501: Cursor not open. This error is encountered when trying to perform an operation
that requires an open cursor, but the cursor is closed.
• SQLCODE -502: Cursor already open. This occurs when an attempt is made to open an already
open cursor.
• SQLCODE -508: Cursor not positioned. The cursor is not positioned on a row, so an UPDATE or
DELETE WHERE CURRENT OF cannot be executed.
• SQLCODE -551: Authorization failure. Occurs when a user does not have the necessary
permission to access or manipulate a database object.
• SQLCODE -803: Duplicate key error. This error happens when an insert or update statement
attempts to insert or update a row with a duplicate key in a unique or primary key constraint or
unique index.
• SQLCODE -805: Package not found. This typically occurs when a needed DB2 package is not
found.
• SQLCODE -811: More than one row retrieved in a SELECT INTO. This error occurs when a
SELECT INTO statement returns more than one row, but the context requires exactly one row.
• SQLCODE -818: Timestamp mismatch. This error is related to the consistency tokens of the
application and the database not matching, generally due to a binding issue.
• SQLCODE -904: Resource unavailable. A resource needed to execute a statement is not available,
possibly due to a lock on a resource.
• SQLCODE -911: Deadlock or timeout. Indicates a deadlock or timeout has occurred with SQL
statement execution.
• SQLCODE -922: Authorization failure during connection processing; access to the database is
denied.
• SQLCODE -1013: Connection not established. This error indicates that the connection to the
database cannot be established.
Example of Handling SQL Error Codes in DB2
EXEC SQL
    INSERT INTO employees (id, name) VALUES (10, 'John Doe')
END-EXEC.
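After each embedded SQL statement, the program should inspect SQLCODE from the SQLCA. A minimal COBOL sketch for the INSERT above (the handled codes and messages are illustrative):
EVALUATE TRUE
    WHEN SQLCODE = 0
        CONTINUE
    WHEN SQLCODE = -803
        DISPLAY 'Duplicate key - row already exists'
    WHEN OTHER
        DISPLAY 'INSERT failed, SQLCODE: ' SQLCODE
END-EVALUATE.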
*********************************************************************************************************
In the context of mainframe computing, particularly when dealing with programming languages like
COBOL or with database systems like DB2, the terms "qualified" and "unqualified" refer to the specificity
with which a data item or a database object is referenced.
Qualified Reference
A qualified reference specifies a data item or object with its full hierarchical context. This is important in
scenarios where there might be ambiguity due to the presence of similarly named items in different
scopes or hierarchies.
In COBOL:
A qualified data item name in COBOL includes the data item’s name and its higher-level group names, up
to the highest level of the data structure. This is used to uniquely identify data items that have the same
name but exist in different group variables.
Example:
01 CUSTOMER-RECORD.
   05 CUSTOMER-ID PIC X(10).
   05 ORDER-INFO.
      10 ORDER-ID PIC X(10).
      10 PRODUCT-ID PIC X(10).
01 VENDOR-RECORD.
   05 ORDER-INFO.
      10 ORDER-ID PIC X(10).
      10 PRODUCT-ID PIC X(10).
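With ORDER-ID present in both records (the group is named ORDER-INFO here because ORDER itself is a COBOL reserved word), an unqualified reference is ambiguous; qualification resolves it. A one-line sketch, where WS-ORDER-ID is an assumed working-storage field:
MOVE ORDER-ID OF CUSTOMER-RECORD TO WS-ORDER-ID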
In DB2:
A qualified object name includes its schema, for example: SELECT * FROM SCHEMA1.EMPLOYEES
This SQL statement specifically requests records from the EMPLOYEES table in SCHEMA1, distinguishing it
from an EMPLOYEES table in another schema.
Unqualified Reference
An unqualified reference specifies a data item or object without detailing its hierarchical context. This is
simpler and can be used when there is no ambiguity about the reference.
In COBOL:
An unqualified name refers directly to a data item without specifying its group or higher-level qualifiers. This works only when the name is unique, as CUSTOMER-ID is above (e.g., MOVE CUSTOMER-ID TO WS-CUSTOMER-ID, with WS-CUSTOMER-ID an assumed working-storage field). In DB2, an unqualified table name such as EMPLOYEES is resolved using the default schema, typically the authorization ID.
*********************************************************************************************************
In the context of mainframe programming, particularly with languages like COBOL, the INCLUDE and
COPY statements are used to insert code from external sources into a program. However, they are used in
slightly different contexts and have distinct purposes.
COPY Statement
The COPY statement is predominantly used in COBOL to include external source code or data descriptions
into a COBOL program. It functions at compile time, where specified portions of code or data structures
are literally copied into the place where the COPY statement appears within the COBOL source code. This
is particularly useful for reusing code, such as standard data declarations, procedural code, and SQL
statements.
Example of COPY in COBOL:
IDENTIFICATION DIVISION.
PROGRAM-ID. MainProgram.
DATA DIVISION.
WORKING-STORAGE SECTION.
COPY 'CUSTOMER.CPY'.
PROCEDURE DIVISION.
BEGIN.
DISPLAY CUSTOMER-NAME.
...
END PROGRAM MainProgram.
In this example, CUSTOMER.CPY might contain common data definitions related to a customer that are
used across multiple programs.
INCLUDE Statement
The INCLUDE statement, on the other hand, is more commonly associated with SQL and DB2 embedded
SQL in COBOL programs. It is used to include SQL statements or fragments stored in external files into an
SQL or COBOL program. The DB2 precompiler processes INCLUDE statements during the precompilation
of embedded SQL to include the contents of the specified file at that point in the SQL code.
Example of INCLUDE in SQL/DB2:
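A typical use is pulling in the SQL communication area and the host-variable declarations generated by DCLGEN (EMPREC is an illustrative member name):
EXEC SQL INCLUDE SQLCA END-EXEC.
EXEC SQL INCLUDE EMPREC END-EXEC.
The precompiler replaces each INCLUDE with the contents of the named member before the COBOL compiler runs, much as COPY does at compile time.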
*********************************************************************************************************
In the context of COBOL programming, STOP RUN, END, and TERMINATE are control statements used to
signify the end of processes or blocks of code, but they serve different purposes and contexts:
STOP RUN
The STOP RUN statement is used in COBOL to terminate the run unit. It marks the end of program
execution and returns control to the operating system or the environment that called the program. It is
typically coded in the main program; if executed in a called subprogram, it still terminates the entire run
unit, which is why GOBACK is usually preferred in subprograms.
Example:
PROCEDURE DIVISION.
DISPLAY 'Hello, World!'.
STOP RUN.
In this example, after displaying "Hello, World!", the program execution will stop, and control will be
returned to the operating system.
END
The END keyword is used in various contexts in COBOL to denote the termination of a block of code. It is
not a standalone statement to end program execution but is used to close structures like IF statements,
EVALUATE statements, or program and procedural blocks.
Example:
IF A = B
    DISPLAY 'A equals B'
END-IF.
Here, END-IF marks the end of the IF block. (Note there is no period before END-IF; a period would terminate the IF statement early.)
TERMINATE
The TERMINATE statement is used in COBOL in conjunction with report writer features, specifically to
terminate the processing of one or more report groups defined in the REPORT section. It cleans up and
finalizes the processing of reports before the program continues or stops.
Example:
TERMINATE PrintReport.
STOP RUN.
In this example, TERMINATE PrintReport is used to end the processing of the PrintReport report group
before the program itself is ended with STOP RUN.
Summary:
• STOP RUN: Ends the entire program and returns control to the system.
• END: Closes a syntactic block in COBOL, such as conditional, loop, or a program/procedure
division.
• TERMINATE: Specifically used to conclude the processing of report groups in COBOL's report
writer feature.
Understanding these differences helps ensure the correct management of program flow, block definitions,
and report processing in COBOL applications.
*********************************************************************************************************
What will happen to file size if I change a COMP-3 field to a Decimal (display) field?
The file size increases. COMP-3 is packed decimal, storing two digits per byte plus a half byte for the sign,
while a standard display (zoned) decimal field uses one byte per digit. For example, PIC 9(5) occupies 3
bytes as COMP-3 but 5 bytes as display decimal. There is no conversion code as such; this is a design
decision in how the data is defined in COBOL:
01 NUMERIC-FIELD-PACKED PIC 9(5) COMP-3. *> packed decimal: 3 bytes
01 NUMERIC-FIELD-ZONED PIC 9(5). *> zoned (display) decimal: 5 bytes
*********************************************************************************************************
What are AC error codes in an IMS DB call?
The AC status code on a DL/I call indicates a hierarchical error in the segment search arguments (SSAs): an
SSA names a segment that is not defined in the PCB, or the SSAs are not in the correct hierarchical
sequence. Check the SSA segment names and their order against the PSB/PCB definition.
How to rename a flat file in JCL?
JCL itself has no rename statement. You can copy the file to a new dataset with the desired name and
optionally delete the old dataset, as below, or rename it in place with IDCAMS ALTER (see the sketch after
this example).
//STEP01 EXEC PGM=IEBGENER
//SYSUT1 DD DSN=old.dataset.name,DISP=SHR
//SYSUT2 DD DSN=new.dataset.name,DISP=(NEW,CATLG,DELETE),
// SPACE=(TRK,(1,1)),UNIT=SYSDA
//SYSPRINT DD SYSOUT=*
//SYSIN DD DUMMY
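Alternatively, IDCAMS can rename the dataset in place without copying; a minimal sketch (dataset names are illustrative):
//RENAME EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
  ALTER old.dataset.name NEWNAME(new.dataset.name)
/*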
*********************************************************************************************************
Is STOP RUN mandatory in a COBOL program?
No, STOP RUN is not mandatory, but it is the standard way to end a main program cleanly (a called
subprogram normally uses GOBACK instead). Without an explicit termination statement, control falls off
the end of the program and returns to the calling environment implicitly, which might lead to
unpredictable results.
*********************************************************************************************************
What will happen if a CURSOR is not closed in a COBOL program?
It can lead to resource leakage and to database locks that might not be released, affecting other operations
and performance.
*********************************************************************************************************
How to test a PROD flow by providing a test data file in batch handling?
Redirect the file paths in the JCL to point to test data instead of production data, using conditional JCL
statements or by temporarily overriding the DD statements, as sketched below.
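For example, a hypothetical DD override pointing the input at a test dataset (DD and dataset names are illustrative):
//INFILE DD DSN=TEST.INPUT.DATA,DISP=SHR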
*********************************************************************************************************
If there is a job / module to generate a monthly report, and we notice that some of the records we
passed are not processed or are missing from the output file, how can we get those records back into
the already generated output file, and how can we handle this case in future?
Investigate why the records were missed (logic error, data quality), correct the issue, and reprocess the
missed records, for example by rerunning the job against only the missed input and merging or appending
the results into the existing output. For the future, implement data validation, input/output record-count
reconciliation, and error logging in the program so dropped records are detected and reported at run time.
*********************************************************************************************************
Can we build a JSON in COBOL and call an API from COBOL?
Yes. IBM Enterprise COBOL (V6 and later) provides JSON GENERATE and JSON PARSE statements for
building and parsing JSON directly from data structures. API calls can be made from COBOL using TCP/IP
sockets, CICS web support, z/OS Connect, or external libraries, depending on the system.
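A minimal sketch of JSON GENERATE, assuming WS-CUSTOMER is an existing group item and WS-JSON-TEXT / WS-JSON-LEN are illustrative working-storage fields:
JSON GENERATE WS-JSON-TEXT FROM WS-CUSTOMER
    COUNT IN WS-JSON-LEN
    ON EXCEPTION DISPLAY 'JSON generation failed'
END-JSON.
WS-JSON-TEXT receives the generated JSON string and WS-JSON-LEN the number of characters produced.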
*********************************************************************************************************
If a job abends with a partial update in a DB2 table, what will be the result and how to handle this
situation?
When the job abends, DB2 rolls back the in-flight unit of work, so all updates made since the last COMMIT
are backed out, maintaining data integrity. Take commits at logical points, implement error handling, and
if commits are frequent, add checkpoint/restart logic so a rerun does not reapply work that was already
committed.
*********************************************************************************************************
If a job fails at a certain point / record while processing, and I need to restart the job, but the on-call
person does not know at which point it failed or which record was last successfully updated, what is
the solution?
Implement checkpoint/restart logic in the job. Store progress markers (for example, the key of the last
successfully processed record) at intervals, so a restart can begin from the last known good state without
anyone having to know the failure point:
WRITE checkpoint-record FROM data-item.
*********************************************************************************************************
What is a qualified and an unqualified call in IMS DB?
A qualified call supplies segment search arguments (SSAs) that include qualification, a segment name plus
a field condition, so IMS retrieves a specific segment occurrence. An unqualified call supplies only the
segment name (or no SSA at all), so retrieval is relative to the current position in the database. Illustrative
calls (the function, PCB, I/O area, and SSA names are assumed):
CALL 'CBLTDLI' USING GU-FUNC, DB-PCB, IO-AREA, QUALIFIED-SSA.
CALL 'CBLTDLI' USING GU-FUNC, DB-PCB, IO-AREA, UNQUALIFIED-SSA.
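An SSA itself is a fixed-layout character string: an 8-byte segment name, then (for a qualified SSA) '(', an 8-byte field name, a 2-byte relational operator, the comparison value, and ')'. A sketch with illustrative segment and field names:
01 QUALIFIED-SSA.
   05 FILLER PIC X(8) VALUE 'CUSTOMER'.
   05 FILLER PIC X VALUE '('.
   05 FILLER PIC X(8) VALUE 'CUSTNO'.
   05 FILLER PIC X(2) VALUE ' ='.
   05 SSA-CUSTNO PIC X(6).
   05 FILLER PIC X VALUE ')'.
01 UNQUALIFIED-SSA.
   05 FILLER PIC X(8) VALUE 'CUSTOMER'.
   05 FILLER PIC X VALUE SPACE.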
*********************************************************************************************************
I have an input file with records of employees who work in different departments. I need to build an
output file with a header naming the department, followed by that particular department's entries,
then the next department in the same structure. How to achieve this in COBOL as well as in JCL?
In JCL, use the SORT utility to sort the file by department first. In COBOL, read the sorted input and apply
control-break logic: whenever the department changes, write a new header before the detail records.
READ file
IF department NOT = previous-department
WRITE header
END-IF
*********************************************************************************************************
I have a file of customer records in which age is one field. I need to split this file into multiple files
based on the age, for example age 45 in one file, age 50 in another file, and age greater than 50 in a
third file. How to achieve this?
In COBOL, read the input file, evaluate the age field, and write each record to the appropriate output file:
IF age = 45
WRITE file-45 FROM record
ELSE IF age = 50
WRITE file-50 FROM record
ELSE IF age > 50
WRITE file-over-50 FROM record
END-IF
You might set up a JCL job to split an input file into three different output files based on the age field.
Assume the age is in positions 21-22 of each record:
//SPLTAGE JOB (ACCT),'SPLIT BY AGE',CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//SPLIT EXEC PGM=SORT
//SYSOUT DD SYSOUT=*
//SORTIN DD DSN=your.input.dataset,DISP=SHR
//AGE45 DD DSN=your.output.dataset.age45,DISP=(NEW,CATLG,DELETE),
// SPACE=(CYL,(1,1)),UNIT=SYSDA
//AGE50 DD DSN=your.output.dataset.age50,DISP=(NEW,CATLG,DELETE),
// SPACE=(CYL,(1,1)),UNIT=SYSDA
//OVER50 DD DSN=your.output.dataset.over50,DISP=(NEW,CATLG,DELETE),
// SPACE=(CYL,(1,1)),UNIT=SYSDA
//SYSIN DD *
 OPTION COPY
 OUTFIL FNAMES=AGE45,INCLUDE=(21,2,CH,EQ,C'45')
 OUTFIL FNAMES=AGE50,INCLUDE=(21,2,CH,EQ,C'50')
 OUTFIL FNAMES=OVER50,INCLUDE=(21,2,CH,GT,C'50')
/*
*********************************************************************************************************
How to get unique / distinct records in a JCL output file?
Use SORT with SUM FIELDS=NONE; the sort key fields define what counts as a duplicate. Here the key is
assumed to be in positions 1-10 (adjust to your record layout):
//DEDUP EXEC PGM=SORT
//SYSOUT DD SYSOUT=*
//SORTIN DD DSN=input.file,DISP=SHR
//SORTOUT DD DSN=output.file,DISP=(NEW,CATLG,DELETE),
// SPACE=(CYL,(1,1)),UNIT=SYSDA
//SYSIN DD *
 SORT FIELDS=(1,10,CH,A)
 SUM FIELDS=NONE
/*
*********************************************************************************************************
How to select unique / distinct records into output from a flat file in COBOL?
Sort the file by key first (or ensure it is already sorted), then read through it and write a record only when
its key differs from the previous record's key:
IF current-key NOT = previous-key
WRITE record TO output-file
MOVE current-key TO previous-key
END-IF
*********************************************************************************************************
In a DB2 database the CUSTOMER number is unique but the customer name is not. I need to get the
list based on the customer's name in COBOL. How to achieve this?
Because the name is not unique, a SELECT INTO (which requires exactly one row) would fail with
SQLCODE -811. Declare a cursor instead and FETCH in a loop until SQLCODE = +100 (table and column
names here are illustrative):
EXEC SQL
    DECLARE CUST-CUR CURSOR FOR
    SELECT CUSTOMER_NO, CUSTOMER_NAME
    FROM CUSTOMERS
    WHERE CUSTOMER_NAME = :INPUT-NAME
    ORDER BY CUSTOMER_NO
END-EXEC.
*********************************************************************************************************
What is the size of V9(9)V3 and 9(9).9?
V9(9)V3 is not valid PICTURE notation, since V (the implied decimal point) can appear only once. If
9(9)V9(3) was intended, that is 12 digits in total (9 before and 3 after the implied decimal): 12 bytes as a
display field, or 7 bytes as COMP-3. 9(9).9 is likely a typo for 9(9)V9(9), which would be 18 digits (9 on
each side of the decimal): 18 bytes as display, or 10 bytes as COMP-3.
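A sketch of the size arithmetic (a COMP-3 field needs (digits + 1) / 2 bytes, rounded up, with the extra half byte holding the sign):
01 WS-A PIC 9(9)V9(3) COMP-3. *> 12 digits: (12+1)/2 rounded up = 7 bytes
01 WS-B PIC 9(9)V9(9). *> 18 digits: 18 bytes as zoned (display) decimal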
*********************************************************************************************************
I have a GDG base, and using it I generate one new generation file every day. One day we need to
include one extra field in the output file. In this case, where do we change the length of the file?
Modify the record layout (FD/copybook) in the COBOL program that writes the output to include the extra
field, and adjust the record length (LRECL) wherever it is declared, typically in the JCL DD statement or in
the GDG's model DCB attributes. Generations created after the change will carry the new length; existing
generations keep the old one.
*********************************************************************************************************
Can we have two JCL cards?
Yes. A job can contain multiple DD statements, whether pointing to different datasets or the same one, and
a single member can even hold more than one JOB statement; when such a member is submitted, each JOB
statement starts a separate job.
*********************************************************************************************************
I want to refer to a lower-region (test) load library while running a PROD JCL to test newly made
changes. Without changing the PROD JCL, how do I point the JCL to the TEST load library?
Use a symbolic parameter or an override in the run-time JCL that points to the test load libraries without
changing the PROD JCL itself; if the job invokes a cataloged procedure, code the override as
//stepname.STEPLIB:
//STEPLIB DD DSN=your.test.loadlib,DISP=SHR
*********************************************************************************************************
Sample COBOL program structure for the age-split requirement above (the SORT-based JCL solution was shown earlier):
IDENTIFICATION DIVISION.
PROGRAM-ID. AgeSplit.
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT INPUT-FILE ASSIGN TO 'INPUT'.
SELECT AGE-45-FILE ASSIGN TO 'AGE45'.
SELECT AGE-50-FILE ASSIGN TO 'AGE50'.
SELECT OVER-50-FILE ASSIGN TO 'OVER50'.
DATA DIVISION.
FILE SECTION.
FD INPUT-FILE.
01 INPUT-RECORD PIC X(100).
FD AGE-45-FILE.
01 AGE-45-RECORD PIC X(100).
FD AGE-50-FILE.
01 AGE-50-RECORD PIC X(100).
FD OVER-50-FILE.
01 OVER-50-RECORD PIC X(100).
WORKING-STORAGE SECTION.
01 AGE PIC 99.
01 WS-EOF PIC X VALUE 'N'.
PROCEDURE DIVISION.
MAIN-PARA.
    OPEN INPUT INPUT-FILE
         OUTPUT AGE-45-FILE
                AGE-50-FILE
                OVER-50-FILE
    PERFORM UNTIL WS-EOF = 'Y'
        READ INPUT-FILE
            AT END
                MOVE 'Y' TO WS-EOF
            NOT AT END
                PERFORM PROCESS-RECORD
        END-READ
    END-PERFORM
    CLOSE INPUT-FILE
          AGE-45-FILE
          AGE-50-FILE
          OVER-50-FILE
    STOP RUN.
PROCESS-RECORD.
*> Extract the age from the record (assumed here in positions 21-22)
    MOVE INPUT-RECORD(21:2) TO AGE
    EVALUATE TRUE
        WHEN AGE = 45
            WRITE AGE-45-RECORD FROM INPUT-RECORD
        WHEN AGE = 50
            WRITE AGE-50-RECORD FROM INPUT-RECORD
        WHEN AGE > 50
            WRITE OVER-50-RECORD FROM INPUT-RECORD
    END-EVALUATE.
*********************************************************************************************************