8.2 Using TSAFSGW in a NetWare 6.x Cluster

In a clustering environment, TSAFSGW must be installed and loaded on each node from which your backup software backs up any portion of your GroupWise system. To accommodate the variable locations of data to back up from a clustered GroupWise system, the .ncf file for TSAFSGW on each node should be edited to include a /home startup switch for every domain and post office on every cluster-enabled volume that might ever be mounted on that node.
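For example, the startup file on a node might contain something like the following. This is a minimal sketch, assuming the file is named tsafsgw.ncf and that the switches are passed on the load command line; the cluster, volume, and directory names are placeholders that the rest of this section explains.

# tsafsgw.ncf (hypothetical): load TSAFSGW with one /home switch for every
# domain and post office on every cluster-enabled volume that might ever be
# mounted on this node
load tsafsgw /vserver-clustername_volumename_SERVER /home-volumename:\domaindirectory /home-volumename:\postofficedirectory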

The /vserver switch enables you to specify the name of a virtual server in your NetWare cluster. You can then use the /home switch to specify shared volumes and paths rather than physical volumes and paths, using the following syntax:

/vserver-clustername_volumename_SERVER
/home-volumename:\domaindirectory
/home-volumename:\postofficedirectory

For example, if you have a domain named NewYork and a post office named Engineering, with libraries named Design and Training, the switches for TSAFSGW would be:

/vserver-CORPORATE_GROUPWISE_SERVER
/home-gw:\gwsystem\newyork
/home-gw:\gwsystem\engineering

You can use this same configuration file on every node in the cluster. TSAFSGW can identify the libraries based on information provided by the post office. Your backup software would then list the following locations to back up, based on the previous example:

GroupWise Cluster System
[DOM] newyork
[PO]  engineering
[DMS] lib0001
[BLB] design_store
[DMS] lib0002
[BLB] training_store

From the list provided in your backup software, you can select GroupWise Cluster System to back up all the objects listed, or you can select individual objects for backup.

Because TSAFSGW must be installed and loaded on every node in the cluster, it can connect to the virtual server from any node. If the node where TSAFSGW is performing a backup goes down, the cluster-enabled volume is dismounted from that node and remounted on the next node in the virtual server's failover path. However, because TSAFSGW builds its list of resources when it loads, it is not aware of the newly mounted volume and must be unloaded and reloaded in order to notify your backup software of the change. Your backup software acknowledges the disruption and attempts to reconnect to the next node. When the next node is fully up and responding, the backup resumes, starting with the resource that was being backed up when the disruption occurred.
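For example, at the server console you might enter commands such as the following, using the switch values from the earlier example. This is a sketch; unload and load are the standard NetWare console commands for unloading and loading an NLM, and you could equally rerun the node's TSAFSGW .ncf file.

unload tsafsgw
load tsafsgw /vserver-CORPORATE_GROUPWISE_SERVER /home-gw:\gwsystem\newyork /home-gw:\gwsystem\engineering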

To restore data in a clustering environment, you must run your backup/restore software on the node where the volume containing the data to restore is currently mounted.
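For example, before starting the restore, you can verify at a node's server console that the relevant volume is mounted there. This is a sketch assuming the gw: volume from the earlier example; volumes is the standard NetWare console command that lists the volumes currently mounted on that server.

volumes

If the volume does not appear in the list, run the restore from the node where the volume is currently mounted instead.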