IBM Tivoli Storage Manager Client Node Proxy Support and GPFS
Published 20 May 2005
Author: Roland Tretau
This Technote describes IBM Tivoli Storage Manager client node proxy support with GPFS. The GPFS team has written a routine that performs the following functions on a GPFS file system: it scans the entire file system for inode changes, creates a list of files that have changed, and parcels out the list to IBM Tivoli Storage Manager backup-archive client nodes to move the data.
Backups of multiple nodes that share storage can be consolidated to a common target nodename on the Tivoli Storage Manager server. This configuration is useful when the machine that performs the backup can change over time, such as with a cluster. The asnodename option also allows data to be restored from a different system than the one that performed the backup.
- An agent node is a client node that has been granted authority to perform client operations on behalf of a target node.
- A target node is a client node that grants authority to one or more agent nodes to perform client operations on its behalf.
Figure 1: Backing up a GPFS cluster
Scheduling example for backing up a GPFS file system
Each client node authenticates with the server under the same node name, for example, node_gpfs. This is done by giving each machine a dsm.sys file whose stanza specifies that shared node name.
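For illustration, such a stanza might look like the following; the server name and communication options are placeholders, not taken from the original Technote:

SErvername tsmserver
   COMMMethod        TCPip
   TCPPort           1500
   TCPServeraddress  tsmserver.example.com
   NODename          node_gpfs
   PASSWORDAccess    generate

Because every machine in the cluster uses NODename node_gpfs, they all share one password entry on the server, which leads to the expiration problem described next.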
The issue with this solution is that password expiration cannot be managed automatically. If there are three nodes in the GPFS cluster, each node knows the password for node_gpfs. If the server expires the password, one node resets it, and the other two nodes are no longer able to authenticate. The only solutions are either to turn node authentication off at the Tivoli Storage Manager server or to reset the password manually and then update all three nodes with the new password by hand.
The Tivoli Storage Manager scheduler is not used in this solution, but a single schedule for node_gpfs could execute the file system scan and workload creation from one client machine via a macro. That schedule would be associated with only one of the three nodes, for example, node_1.
A better solution is to use multi-node support. Using the example of three nodes in the GPFS cluster that would participate in the backup, you would:
1. Define four nodes on the Tivoli Storage Manager server: node_1, node_2, node_3, and node_gpfs (as shown in the following example). In this example, node_1, node_2, and node_3 are used only for authentication. All file spaces are stored with node_gpfs.
REGISTER NODE node_1 mysecretpw
REGISTER NODE node_2 mysecretpw
REGISTER NODE node_3 mysecretpw
REGISTER NODE node_gpfs mysecretpw
2. Define a proxynode relationship between the nodes:
GRANT PROXYNODE TARGET=node_gpfs AGENT=node_1,node_2,node_3
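The resulting relationship can be checked from the administrative command line with the QUERY PROXYNODE command, which lists each target node and its agents:

QUERY PROXYNODE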
3. Define the nodename and asnodename options for each of the machines in their respective dsm.sys files:
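For example, the client system options file on node_1 might contain the following illustrative stanza; the server name and communication options are placeholders:

SErvername tsmserver
   COMMMethod        TCPip
   TCPServeraddress  tsmserver.example.com
   NODename          node_1
   ASNODename        node_gpfs
   PASSWORDAccess    generate

node_2 and node_3 would use the same stanza with their own nodename values. Each machine then authenticates with its own node name and password, while all backed-up data is stored under the node_gpfs file spaces.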
4. Optionally, define a schedule for only node_1 to do the work:
DEFINE SCHEDULE STANDARD GPFS_SCHEDULE ACTION=MACRO OBJECTS="gpfs_script"
DEFINE ASSOCIATION STANDARD GPFS_SCHEDULE node_gpfs
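The contents of the gpfs_script macro are not shown in this Technote. As a hypothetical sketch, a minimal client macro could simply start an incremental backup of the GPFS file system (the mount point /gpfs/fs1 is an assumed example, not from the original text):

incremental /gpfs/fs1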
5. On node node_gpfs, execute the schedule:
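The command itself is not reproduced in the Technote; typically this means starting the client scheduler on the machine that performs the backup, for example:

dsmc schedule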
Note: You can exploit this multiple node support only in a UNIX environment, not on Windows and NetWare systems. The asnodename option is available on Windows systems, but its benefit is lessened because of the file space naming limitations on Windows systems.
This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.