In addition to the requirements in Section 5.0, Planning for DST Shadow Volumes and Policies, your setup must meet the requirements in this section when you use DST in a Novell Cluster Services cluster.
The primary and secondary volumes must be able to fail over or cluster migrate together to other nodes in the cluster. Thus, a single DST pool cluster resource is used to manage the pair. Its resource scripts include commands that manage the two devices, pools, and volumes.
The devices and pools that contain the primary volume and secondary volume in a clustered DST volume pair must be marked as shareable for clustering. The primary pool must be cluster-enabled for Novell Cluster Services. The secondary pool must be shared. You can cluster-enable the pool that contains the secondary volume, but its individual pool resource IP address and Cluster objects are not used in the load and unload scripts for the DST pool cluster resource.
In a cluster, the DST volume pair is defined in the ncpcon mount command of the load script for the DST pool cluster resource. When the resource is brought online, the shadow volume pair is mounted and an entry is added to the /etc/NCPVolumes file. When the resource is taken offline, the volumes are dismounted as the pools are deactivated, and the entry is removed.
For example:
<VOLUME>
  <NAME>VOL1</NAME>
  <PRIMARY_ROOT>/media/nss/VOL1</PRIMARY_ROOT>
  <SHADOW_ROOT>/media/nss/ARCVOL1</SHADOW_ROOT>
</VOLUME>
In a cluster, the DST volume pair is defined with the ncpcon mount command in the load script for the DST pool cluster resource. You do not create a clustered DST volume by using the Dynamic Storage Technology Options page in Novell Remote Manager. When you bring the resource online on a node for the first time, a SHADOW_VOLUME line is automatically added to the /etc/opt/novell/ncpserv.conf file:
SHADOW_VOLUME primary_volumename secondary_volume_path
For example:
SHADOW_VOLUME VOL1 /media/nss/ARCVOL1
When the resource fails over or is cluster migrated to another node, the shadow volume definition remains in the /etc/opt/novell/ncpserv.conf file on that server.
If you remove the shadow relationship from the cluster load script, the SHADOW_VOLUME entry is no longer needed in the /etc/opt/novell/ncpserv.conf file. To permanently unlink the two volumes, you must manually remove the line from the /etc/opt/novell/ncpserv.conf file and restart ndsd on each node. To disable clustering but keep the DST shadow volume pair on a specified node, you must manually remove the line from the configuration file and restart ndsd on all nodes except that one.
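For example, you might run commands similar to the following on each affected node; the volume name VOL1 is a placeholder, you can instead edit the file by hand, and the rcndsd command name can vary by OES version:
sed -i '/^SHADOW_VOLUME VOL1 /d' /etc/opt/novell/ncpserv.conf
rcndsd restart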
The EXCLUDE_VOLUME line in the /etc/opt/novell/ncp2nss.conf file prevents the secondary NSS volume from being mounted in NCP. This allows the secondary volume to be mounted for NSS and Linux, but not in NCP. Users access the files on the secondary volume via the merged view of the DST volume pair, not directly.
In a cluster, the DST volume pair is defined with the ncpcon mount command in the load script for the DST pool cluster resource. When you bring the resource online on a node for the first time, an EXCLUDE_VOLUME line is automatically added to the /etc/opt/novell/ncp2nss.conf file as well as the temporary exclusion table in cache on that node.
EXCLUDE_VOLUME secondary_volumename
For example:
EXCLUDE_VOLUME ARCVOL1
If you remove the shadow relationship from the cluster load script, the EXCLUDE_VOLUME entry is no longer needed in the /etc/opt/novell/ncp2nss.conf file. To permanently unlink the two volumes, you must manually remove the line from the /etc/opt/novell/ncp2nss.conf file and restart ncp2nss on each node. To disable clustering but keep the DST shadow volume pair on a specified node, you must manually remove the line from the configuration file and restart ncp2nss on all nodes except that one.
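For example, you might run commands similar to the following on each affected node; the volume name ARCVOL1 is a placeholder, you can instead edit the file by hand, and the init script path for ncp2nss is an assumption that can vary by OES version:
sed -i '/^EXCLUDE_VOLUME ARCVOL1$/d' /etc/opt/novell/ncp2nss.conf
/etc/init.d/ncp2nss restart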
The DST shadow volume pair is defined by the following ncpcon mount command in the DST pool cluster resource’s load script. The volume pair is available only while the resource is online.
exit_on_error ncpcon mount primary_volumename=volID,SHADOWVOLUME=secondary_volumename
Both NSS volumes must already exist. The mount location is /media/nss/<primary_volume_name>.
Replace volID with a volume ID that is unique across all servers in the cluster. Valid values are 0 to 254. By convention, the IDs are assigned from 254 and downward for clustered volumes.
While the primary volume is in a shadow relationship, the volume ID that you assign as its NCP volume ID represents the DST shadow volume pair of volumes. The secondary volume does not have a separate volume ID while it is in the shadow relationship.
For example, the following command mounts the primary NSS volume named VOL1 with a volume ID of 254. The primary volume is mounted for NSS and NCP at /media/nss/VOL1. The secondary volume is an existing NSS volume named ARCVOL1. It is mounted for NSS at /media/nss/ARCVOL1.
exit_on_error ncpcon mount VOL1=254,SHADOWVOLUME=ARCVOL1
The secondary pool must be activated before the primary pool in the load script. This helps to ensure that the secondary pool and its volume are available when the DST volume pair is mounted.
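For example, the load script for a DST pool cluster resource might contain commands similar to the following; the pool names (POOL1, ARCPOOL1), volume names, volume ID, IP address, and NCP server name are placeholders for your own values:
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs

# Activate the secondary pool first, then the primary pool.
exit_on_error nss /poolact=ARCPOOL1
exit_on_error nss /poolact=POOL1

# Mount the DST shadow volume pair.
exit_on_error ncpcon mount VOL1=254,SHADOWVOLUME=ARCVOL1

# Bind the resource IP address and NCP virtual server name.
exit_on_error add_secondary_ipaddress 10.10.10.41
exit_on_error ncpcon bind --ncpservername=CLUSTER-POOL1-SERVER --ipaddress=10.10.10.41

exit 0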
IMPORTANT: If the secondary volume is not available when the shadow volume pair is mounted, the cluster load script does not fail and does not provide a warning. The DST shadow volume is created and appears to be working when viewed from Novell Remote Manager. However, until the secondary volume is mounted, the files on it are not available to users and appear to be missing in the merged file tree view. After the secondary volume has successfully mounted, the files automatically appear in the merged file tree view.
If you observe that the pools are slow to mount, you can add a wait time to the load script before the mount command for the shadow volume pair.
For example, you can add a sleep command with a delay of a few seconds, such as:
sleep 10
You can increase the sleep time value until it allows sufficient time for the pools to be activated and the volumes to be mounted in NSS before continuing.
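For example, the wait is placed just before the mount command for the shadow volume pair; the names, volume ID, and 10-second delay shown here are illustrative:
sleep 10
exit_on_error ncpcon mount VOL1=254,SHADOWVOLUME=ARCVOL1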
IMPORTANT: If wait times are added to the load script or unload script, ensure that you increase the script timeout settings accordingly. Otherwise, the script might time out during the wait.
The primary pool must be deactivated before the secondary pool. This allows the DST volume pair to be dismounted before the secondary pool is deactivated.
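For example, the unload script for a DST pool cluster resource might contain commands similar to the following; the pool names, IP address, and NCP server name are placeholders for your own values:
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs

# Unbind the NCP virtual server name and remove the resource IP address.
ignore_error ncpcon unbind --ncpservername=CLUSTER-POOL1-SERVER --ipaddress=10.10.10.41
ignore_error del_secondary_ipaddress 10.10.10.41

# Deactivate the primary pool first, then the secondary pool.
ignore_error nss /pooldeact=POOL1
ignore_error nss /pooldeact=ARCPOOL1

exit 0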
The monitor script for the DST pool cluster resource has monitoring commands for the primary pool, the secondary pool, the primary volume, and the advertising protocols for the primary volume.
You should not monitor the secondary volume in the monitor script. The EXCLUDE_VOLUME line in the /etc/opt/novell/ncp2nss.conf file makes it unavailable to NCP. Thus, the ncpcon volume command that is used to check its status is not able to see the secondary volume.
Ensure that you remove or comment out the status check for the secondary volume in the resource monitor script, as in the following example:
exit_on_error ncpcon volume primary_volume_name
#exit_on_error ncpcon volume secondary_volume_name
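For example, the monitor script might contain commands similar to the following; the pool names, volume names, IP address, device path, and mount point format are assumptions based on typical Novell Cluster Services monitor scripts and can differ on your system:
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs

# Check the status of the primary and secondary pools.
exit_on_error status_fs /dev/pool/POOL1 /opt/novell/nss/mnt/.pools/POOL1 nsspool
exit_on_error status_fs /dev/pool/ARCPOOL1 /opt/novell/nss/mnt/.pools/ARCPOOL1 nsspool

# Check the resource IP address and the primary volume only.
exit_on_error status_secondary_ipaddress 10.10.10.41
exit_on_error ncpcon volume VOL1
#exit_on_error ncpcon volume ARCVOL1

exit 0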
If you add a volume to the primary pool for a clustered DST volume pair, the mount command is added twice in the primary pool’s cluster load script, once after the primary pool’s activation command and once after the secondary pool’s activation command. You must manually delete the instance that occurs after the secondary pool’s activation, then offline and online the primary pool cluster resource to apply the modified load script.
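For example, if you add a volume named VOL2 with volume ID 253 to the primary pool (the names and ID are placeholders), the updated load script might contain lines similar to the following, and you would delete the mount instance that follows the secondary pool’s activation:
exit_on_error nss /poolact=ARCPOOL1
# Delete this duplicate mount command:
exit_on_error ncpcon mount VOL2=253
exit_on_error nss /poolact=POOL1
exit_on_error ncpcon mount VOL2=253
exit_on_error ncpcon mount VOL1=254,SHADOWVOLUME=ARCVOL1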
For information, see Adding a Volume to a Clustered Pool in the OES 2 SP3: Novell Cluster Services 1.8.8 Administration Guide for Linux.
In a cluster, the DST policies must be available on every node where a clustered DST pool cluster resource is brought online. As a best practice, you should create policies at the volume level for each clustered DST volume pair so that the volume’s policies fail over with it when its DST pool cluster resource fails over or is cluster migrated to a different node.
Global policies are NCP Settings for DST that you set at the server level. They govern how DST behaves for all DST volume pairs mounted on the server. Global policies are not cluster aware.
Ensure that the same global DST policies are configured on each node where you want to fail over the DST pool cluster resources. To manage a global DST policy, open Novell Remote Manager for Linux by using the IP address of the node. For information, see Section 8.0, Configuring DST Global Policies.
IMPORTANT: Whenever you modify global policies on a given node in the cluster, you must make those same changes on the other nodes.
An all-shadow-volumes policy applies to any DST volume that is mounted on that server when the policy runs. All-shadow-volumes policies are not cluster aware.
If you select all shadow volumes when you create a policy, the policy information is stored in the /usr/novell/sys/._NETWARE/shadow_policy.xml file. Ensure that the same all-shadow-volumes policies are configured on each node where you want to fail over the DST pool cluster resources. You can create the same all-shadow-volumes policies on each node in the cluster, or you can create them on one node and copy the shadow_policy.xml file to all nodes where you plan to bring the DST pool cluster resource online. To manage an all-shadow-volumes policy, open Novell Remote Manager for Linux by using the IP address of the node.
IMPORTANT: Whenever you modify all-shadow-volumes policies on a given node in the cluster, you must make those same changes on the other nodes.
Volume policies apply only to the specified DST volume pair. Volume policies are not cluster aware. They are stored with the volume and are available automatically on any node where the DST pool cluster resource is brought online. When you set up volume policies, the DST pool cluster resource must be online and the DST volume pair must be mounted.
If a policy applies to a specific volume, the policy information is stored in the /media/nss/<primary_volumename>/._NETWARE/shadow_policy.xml file. This file is stored on the volume itself and thereby automatically follows the volume as its DST pool cluster resource is failed over or cluster migrated to a different node. To manage a volume policy, open Novell Remote Manager for Linux by using the IP address of the resource or by using the IP address of the node where the resource is currently active.