Managing output is essential to controlling the number of jobs on the system. IBM works with customers on issues caused by large numbers of jobs, and has found that most systems have a relatively small number of active jobs but a very large number of jobs with spooled output. In fact, the number of jobs that have ended with spooled output is often 10 to 100 times the number of active jobs.
IBM has no plans to increase the maximum number of jobs on the system. For environments where the number of jobs is near the maximum that is allowed, it is essential to change how output is managed, which means using the capability to detach spooled files. This function has existed on the IBM i operating system since V5R2. In addition, V5R4 included enhancements to make it easier to find and manage detached spooled files with the WRKSPLF and WRKJOBLOG commands.
In this technote, we describe some best practices for managing output on IBM i.
Dawn May, Angela Newton, Mike Russell, Dan Tarara, and Kevin Vette
IBM Rochester Development Team
- QSPLFACN: Spooled file action
Using the QSPLFACN system value and job attribute can be beneficial in keeping down the number of job table entries on the system. If this value is set to *DETACH, the job is removed from the system when it ends, and the job table entry is freed. If the value is set to *KEEP (the default), every job is kept until all of its spooled files are deleted. If running out of job table entries is an issue for you, it is recommended that you set the QSPLFACN system value (or the SPLFACN job attribute) to *DETACH. For jobs that already exist with spooled files, you can use the Change Job (CHGJOB) command to remove the job without deleting spooled files that you need to keep.
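As a sketch, both changes can be made with standard CL commands; the job name 123456/QUSER/PAYROLL below is illustrative:

```cl
/* Detach spooled files from all new jobs when they end */
CHGSYSVAL SYSVAL(QSPLFACN) VALUE('*DETACH')

/* Remove one existing job from the job table, keeping its output */
CHGJOB JOB(123456/QUSER/PAYROLL) SPLFACN(*DETACH)
```

After the job is detached, its job table entry is freed, and its output remains available through the spooled file interfaces described below.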
One of the consequences of using *DETACH is that the job log (and other spooled files that the job creates) cannot be found or accessed using the job commands (WRKJOB, CHGJOB, HLDJOB, and so forth). However, with V5R4, there are two methods to find spooled files using the job name:
The WRKJOBLOG command allows users to find both spooled and pending job logs using the job name (including generic names). In addition, WRKJOBLOG accepts a date and time range to further subset the list of available job logs. In V6R1, you can use the F13=Delete all key to delete the entire list of job logs that are displayed.
The WRKSPLF command with the JOB option allows the user to show just the spooled files for a specific job, or for a set of jobs with the same or similar names, because generic names are supported here as well.
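For example, assuming a set of jobs with the generic name PAY* (an illustrative name; verify the exact parameters from the command prompts), either command locates the detached output:

```cl
/* List spooled and pending job logs for all jobs named PAY*  */
WRKJOBLOG JOB(PAY*)

/* List all spooled files for all jobs named PAY*             */
WRKSPLF JOB(PAY*)
```

With WRKJOBLOG, the date and time range can be added to narrow the list further.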
Another possible issue with using *DETACH arises if you reuse the same job names: run enough jobs, and keep spooled files around long enough, and you can end up with spooled files that have the exact same job name (including job user and job number), spooled file name, and spooled file number. Older applications that use only these parameters to locate a spooled file can have problems with these duplicates. Additional fields added to all spool interfaces allow users to compensate for this situation by also specifying the create date and time to identify the exact spooled file desired. In addition, all interfaces for finding and listing spooled files return these values so that they can be used programmatically. Older applications might need some updates to handle this situation.
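As a hedged illustration, the create date and time can be used to select one of two otherwise identical spooled files; the file name, job, and timestamp below are made up, and the exact CRTDATE syntax should be confirmed from the command prompt:

```cl
/* Display exactly one spooled file among duplicates by its create date/time */
DSPSPLF FILE(INVOICE) JOB(123456/QUSER/QPRTJOB) SPLNBR(7)
        CRTDATE(('06/01/2008' '14:30:02'))
```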
- QMAXSPLF: Maximum number of spooled files
The QMAXSPLF system value sets the maximum number of spooled files that can be spooled under a single job. The default value is 9999, but you can increase it up to 999999. The default can be useful in preventing a runaway job from producing too many spooled files, but certain jobs might need a higher maximum. This situation is especially true if many spooled files on the system are created by server jobs or jobs that run under the authority of a swapped-to user. In these cases, the spooled files are placed under a job with the job name QPRTJOB and the user name of the swapped-to user. If these jobs generate many spooled files, the system must create a new QPRTJOB each time the maximum set by the QMAXSPLF system value is reached, and each new QPRTJOB adds another entry in the job table. Increasing the QMAXSPLF value allows more spooled files in each of these jobs and can keep the system from creating as many QPRTJOBs.
If you increase QMAXSPLF beyond the 9999 default, any applications that use the spooled file number to find or work with a spooled file must have a field large enough to hold the larger number (more than four characters, or a sufficiently large binary value). Otherwise, they can fail.
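If you decide to raise the limit, a single system value change is enough; 999999 is the stated maximum:

```cl
/* Allow up to 999999 spooled files per job */
CHGSYSVAL SYSVAL(QMAXSPLF) VALUE(999999)
```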
- QRCLSPLSTG: Reclaim spool storage
The QRCLSPLSTG system value controls how long unused database members, which store spooled file data, are allowed to remain on the system after the spooled files that use them are deleted. The allowed values are *NONE, *NOMAX, and a number of days from 1 to 366. This value can make a significant difference in performance when creating large numbers of spooled files on a system. The value also limits the amount of space the system uses to keep empty database members on hand, which can be reused later to store new spooled files.
If this value is set to *NONE, every time a spooled file is deleted, the database member that stores the spooled file is deleted, and when another spooled file is created, a new database member must be created to store it. While this method minimizes the storage that the system uses to store spooled files, creating the database member can cause performance problems due to the extra time that creating the member takes. It also can cause locking problems on the database files in QSPL.
Do not set the value to *NONE unless you are directed to do so by an IBM support person. If you set the value to *NOMAX, spooled database members are never deleted, even when empty. This setting ensures that, most of the time, a database member is available whenever a spooled file is created, which speeds up spooled file creation and avoids locking on the database file in which the member exists. After a spooled file is deleted, its data is cleared, dropping the member size to about 16 KB. However, with large numbers of database members, system storage usage can still be a problem. A value from 1 to 366 allows the system to delete excess members, such as those created by a runaway job, while still keeping a member available most of the time.
The best value for this system value is the number of days between normal peak usage of spooled files, plus one. For example, if you run weekly reports that generate many spooled files, eight days (7 + 1) is the correct value for QRCLSPLSTG. This value keeps the weekly peak number of database members on the system long enough to be available for the next weekly run. However, if even more spooled files are occasionally generated, the database members created to hold them are deleted after eight days, returning the number of database members to a normal value.
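Following the weekly-report example above, the eight-day retention can be set as:

```cl
/* Keep empty spool database members for 8 days before deleting them */
CHGSYSVAL SYSVAL(QRCLSPLSTG) VALUE(8)
```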
For more information about this system value and the RCLSPLSTG command, consult the IBM i Information Center.
- QJOBSPLA: Spooling control block initial size
The QJOBSPLA system value controls the initial size of the spool control block that is allocated for each job on the system. In most cases, the default value of 3516 bytes is fine unless the majority of the jobs have more than five inline spooled files, which are the files that are created for inline data in submitted (reader) jobs. This value does not affect the use of spooled output files, and you should not increase it based on the spooled output files that are produced.
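Because the default is almost always appropriate, in most cases it is enough to verify the current setting:

```cl
/* Display the current spooling control block initial size */
DSPSYSVAL SYSVAL(QJOBSPLA)
```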
The QSPLFACN, QMAXSPLF, QRCLSPLSTG, and QJOBSPLA system values described above are the primary controls for managing spooled output.
This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.