The GroupWise Target Service Agent for File Systems (TSAFSGW) is a GroupWise-specific API that works with compatible backup software to provide reliable backups of a running GroupWise system on NetWare 6.5. TSAFSGW can be used in a clustered GroupWise system with appropriate preparation and an understanding of how the TSAs work. For background information about the GroupWise TSAs, see the GroupWise 8 Administration Guide.
In a clustering environment, TSAFSGW must be installed and loaded on each node from which your backup software backs up any portion of your GroupWise system. To accommodate the variable locations of data to back up from a clustered GroupWise system, the .ncf file for TSAFSGW on each node should be edited to include a /home startup switch for every domain and post office on every shared volume that might ever be mounted on that node.
If you are using TSAFSGW, the /vserver switch enables you to specify the name of a virtual server in your NetWare cluster. Then you can use the /home switch to specify shared volumes and paths rather than physical volumes and paths. For example:
/vserver-clustername_volumename_SERVER /home-volumename:\domain_directory /home-volumename:\post_office_directory
For example, if you have a domain named NewYork and a post office named Engineering, with libraries named Design and Training, the switches for TSAFSGW would be:
/vserver-CORPORATE_GROUPWISE_SERVER /home-gw:\gwsystem\newyork /home-gw:\gwsystem\engineering
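Put together in the startup file, the load line might look like the following sketch. The file name gwtsa.ncf and the comment are illustrative; use whatever .ncf file loads TSAFSGW on your servers, and include a /home switch for every shared volume that might ever be mounted on the node:

```
# gwtsa.ncf -- illustrative example; volume and directory names vary by system
# Include a /home switch for every domain and post office on every shared
# volume that could be mounted on this node.
load tsafsgw /vserver-CORPORATE_GROUPWISE_SERVER /home-gw:\gwsystem\newyork /home-gw:\gwsystem\engineering
```

Because the switches name the virtual server and shared volumes rather than physical volumes, the same file works unchanged on every node in the cluster.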
You can use this same configuration file on every node in the cluster. TSAFSGW can identify the libraries based on information provided by the post office. Your backup software would then list the following locations to back up, based on the previous example:
GroupWise Cluster System
[DOM] newyork
    [PO] engineering
        [DMS] lib0001
            [BLB] design_store
        [DMS] lib0002
            [BLB] training_store
From the list provided in your backup software, you can select all the listed objects for backup, or you can select individual objects.
When TSAFSGW runs, it backs up the shared volumes that are currently accessible and skips shared volumes that are not currently accessible. If a shared volume migrates, you must restart TSAFSGW so that it can re-determine what shared volumes are currently available for backup.
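Restarting TSAFSGW after a migration can be sketched as the following console sequence. The switches shown are carried over from the earlier example; substitute the switches from your own .ncf file:

```
# At the server console of the affected node (illustrative switch values):
unload tsafsgw
load tsafsgw /vserver-CORPORATE_GROUPWISE_SERVER /home-gw:\gwsystem\newyork /home-gw:\gwsystem\engineering
```

On reload, TSAFSGW rescans the /home locations and again skips any shared volumes that are not currently mounted on that node.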
TSAFSGW on each node connects to the virtual server rather than to a specific physical server. If the node where TSAFSGW is performing a backup goes down, the shared volume is dismounted from the failed node and mounted on the next node in the resource's failover path. TSAFSGW on the next node detects the change and notifies your backup software. Your backup software acknowledges the disruption and attempts to reconnect. When the next node is fully up and responding, the backup resumes, starting with the resource that was being backed up when the disruption occurred.
To restore data in a clustering environment, you must run your backup/restore software on the node where the location to restore is currently mounted.