With z/OS® V1R7, DFSMShsm™ has been enhanced to support the new large format sequential data sets. This Technote provides an overview of this function.
DFSMShsm support of large format data sets
DFSMShsm has been enhanced to support the new large format sequential data sets. Large format data sets can have more than 65,535 tracks per volume. Before z/OS V1R7, most sequential data sets were limited to 65,535 tracks on each volume, even though most hardware storage devices supported far more tracks per volume. To exploit this hardware capability, z/OS V1R7 enables users to create new large format data sets, which are physical sequential data sets that can grow beyond the previous size limit. A large format data set is not required to have more than 65,535 tracks at initial allocation, but once allocated in large format it can expand beyond that limit. z/OS V1R7 also introduces the concept of a basic format data set to designate a sequential data set that is neither extended format nor large format; a basic format data set therefore cannot have more than 65,535 tracks per volume.
Large format data sets are supported by DFSMShsm in the same manner as basic format data sets in typical operations involving:
- Migration and recall
- Backup and recovery
- ABACKUP and ARECOVER
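As background, a large format data set is created by specifying DSNTYPE=LARGE on the allocation request. The following JCL is a minimal sketch; the data set name, unit, space values, and DCB attributes are illustrative only and should be adjusted to your installation's standards:

```jcl
//ALLOCLRG JOB ...
//* Allocate a large format sequential data set (DSNTYPE=LARGE).
//* The data set name, unit, and sizes below are examples only.
//STEP1    EXEC PGM=IEFBR14
//NEWDS    DD DSN=HLQ.LARGE.SEQDS,
//            DISP=(NEW,CATLG),
//            DSNTYPE=LARGE,
//            UNIT=SYSDA,
//            SPACE=(CYL,(4000,400)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
```

Without DSNTYPE=LARGE, the same DD statement would produce a basic format data set subject to the 65,535-track limit.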
Large format journal data set
DFSMShsm also supports the use of a large format data set for its journal data set. A larger journal data set can allow more DFSMShsm activity to take place between journal backups and helps to minimize occurrences of journal full conditions. After all systems in the HSMplex are at z/OS V1R7, the following procedure can be followed to migrate the journal data set to a large format data set:
- Shut down all but one of the DFSMShsm subsystems in the HSMplex.
- Either hold all DFSMShsm functions or set DFSMShsm to emergency mode.
- Back up the control data sets using the BACKVOL CDS command. This creates a backup of the journal and then nulls the journal.
- Stop the remaining DFSMShsm subsystem.
- Rename the current journal data set.
- Allocate the new large format journal data set.
- Ensure that the JOURNAL DD statement in the DFSMShsm startup procedure points to the new journal data set.
- Restart the DFSMShsm hosts.
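The allocation step above might look like the following sketch. The data set name, volume, and size are hypothetical; as with a basic format journal, the journal should be allocated as a single extent on a single volume, so contiguous space is requested:

```jcl
//ALLOCJRN JOB ...
//* Allocate a new large format journal data set.
//* Name, volume, and size are examples only; CONTIG requests the
//* single contiguous extent the journal requires.
//STEP1    EXEC PGM=IEFBR14
//JRNL     DD DSN=HSMPLEX.JRNL,
//            DISP=(NEW,CATLG),
//            DSNTYPE=LARGE,
//            UNIT=3390,VOL=SER=HSM001,
//            SPACE=(CYL,(5000),,CONTIG)
```

The JOURNAL DD statement in the DFSMShsm startup procedure would then reference the new data set, for example `//JOURNAL DD DSN=HSMPLEX.JRNL,DISP=OLD`.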
Should you need to fall back to using a basic format journal data set, the same procedure as above can be used to replace the current large format data set with a new basic format data set.
Note that if the used space in the journal grows as a consequence of the data set being allocated with a larger size, journal backups will take longer to complete because DFSMShsm must process a larger data set. This increases the length of time that DFSMShsm functions are held during CDS backup. In addition, increasing the size of the journal might cause the journal backup to fail because the space allocated for its backup copies might no longer be sufficient. To prevent this from happening, rename your current journal backup data sets and create new backup data sets of sufficient size. The backup data sets may be defined as large format data sets.
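Renaming the existing journal backup copies might be done with IDCAMS ALTER, as in the following sketch; the backup data set names shown are hypothetical and should match your installation's journal backup naming convention:

```jcl
//RENAME   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* Rename the current journal backup copy out of the way */
  /* so a larger one can be allocated; names are examples. */
  ALTER HSMPLEX.JRNL.BACKUP.V0000001 -
        NEWNAME(HSMPLEX.JRNL.BACKUP.OLD1)
/*
```

A new, larger backup data set can then be allocated under the original name before the next CDS backup runs.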
A coexistence PTF is required so that lower-level releases of DFSMShsm will fail operations involving large format data sets. Because large format data sets cannot be processed on hosts prior to z/OS V1R7, care must be taken when migrating data sets in an HSMplex where not all hosts are at this level. Any attempts to recall these data sets from a lower-level system will fail, which is especially problematic for implicit recalls. Similarly, attempts to recover large format data sets will fail on pre-z/OS V1R7 hosts, and large format data sets included in an ABARS aggregate can only be restored using ARECOVER if the recovery system is at z/OS V1R7.
In order to use a large format journal data set, all systems in the HSMplex must be at z/OS V1R7. The toleration PTF ensures that when a lower-level release of DFSMShsm attempts to open a large format journal data set, message ARC0509E is issued and DFSMShsm is stopped. Refer to APAR OA08865 for the correct toleration PTF.
This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.