15.10 Removing the Shadow Relationship for a Clustered DST Volume Pair

Removing a clustered DST volume pair removes the shadow relationship between the primary and secondary storage areas. You also remove the commands that you added to the primary pool’s cluster scripts to manage the secondary pool and volume. Removing the shadow relationship does not remove the underlying volumes themselves. Files remain on whichever storage area they reside on at the time you remove the shadow relationship.

15.10.1 Planning to Remove the Shadow Relationship for a Clustered DST Volume Pair

As you plan to remove the shadow relationship for a clustered DST volume pair, consider the short service outages involved: the DST pool cluster resource must be taken offline while you remove the shadow definition and modify the resource scripts, and removing the shadow definition and NCP/NSS bindings exclusion requires restarting the eDirectory daemon (ndsd) and the NCP/NSS IPC daemon (ncp2nss) on each node, which briefly interrupts service for all NSS volumes on that node.

15.10.2 Preparing to Remove a Shadow Relationship

Removing a shadow relationship does not automatically move files in either direction between the two volumes. The files remain undisturbed. The volumes function independently after the shadow relationship is successfully removed. Ensure that the files are distributed as desired before you remove the shadow relationship.

To move files between the two volumes and achieve the desired distribution:

  1. In Novell Remote Manager for Linux, log in as the root user.

  2. Select View File System > Dynamic Storage Technology Options, locate the volume in the list, then click the Inventory link next to it.

    View the volume inventory for the shadow volume to determine the space in use and the available space for both the primary and the secondary areas of the shadow volume. Ensure that there is sufficient free space available in either location for the data that you plan to move to that location.

  3. Use any combination of the following techniques to move data between the two areas:

    • Shadow Volume Policies: Run an existing shadow volume policy by using the Execute Now option in the Frequency area of the policy. You can also create a new shadow volume policy that moves specific data, and run the policy by using the One Time and Execute Now options in the Frequency area of the policy.

      For information about configuring policies to move data between the primary and secondary areas, see Section 11.0, Creating and Managing Policies for Shadow Volumes.

    • Inventories: Use the detailed inventory reports or customized inventories to move specific files to either area.

      For information about using the volume customized inventory options to move data between the primary and secondary areas, see Section 14.7, Generating a Custom Inventory Report.

  4. (Optional) While the DST pool cluster resource is online, you can delete volume-specific policies as described in Section 11.8, Deleting a Shadow Volume Policy.

    Shadow volume policies that you configured for the DST volume do not run after you remove the shadow volume relationship. The policy information is stored in the /media/nss/<primary_volumename>/._NETWARE/shadow_policy.xml file. Policy information is not automatically removed from the file when you remove the shadow relationship. If you later define a new shadow volume relationship for the primary volume, the policies apply to it.

  5. Continue with Section 15.10.3, Removing the Shadow Definition and NCP/NSS Bindings Exclusion on All Nodes.

15.10.3 Removing the Shadow Definition and NCP/NSS Bindings Exclusion on All Nodes

You must remove the shadow definition for the DST shadow volume pair and the NCP/NSS bindings exclusion for the secondary volume on each node in turn. This requires restarting the eDirectory daemon (ndsd) and the NCP/NSS IPC daemon (ncp2nss) on each node, which creates a short service outage for all NSS volumes on that node. You can minimize the impact by cluster migrating the pool cluster resources for the other NSS volumes to other nodes while you modify the configuration files on a given node.
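
For example, if other pool cluster resources are online on the node that you are about to modify, you can migrate them to another node before you begin. The cluster migrate and cluster status commands are standard Novell Cluster Services commands; the resource and node names below are hypothetical placeholders:

  cluster status
  cluster migrate POOL2_SERVER node2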

  1. Log in as the root user to the node where the primary pool cluster resource is online, then open a terminal console.

  2. Offline the DST pool cluster resource that is managing the clustered shadow volume.

    cluster offline resource_name
    

    This unloads the cluster resource and deactivates the cluster pools and their volumes so that the cluster is not controlling them. Do not bring the primary resource or secondary resource online, and do not locally mount the volumes on any node at this time.

  3. Remove the shadow volume and NCP/NSS bindings exclusion information from each node in the cluster:

    1. Log in to the node as the root user.

    2. In a text editor, open the /etc/opt/novell/ncp2nss.conf file, remove the EXCLUDE_VOLUME line for the secondary volume from the file, then save the file.

      EXCLUDE_VOLUME secondary_volumename
      

      For example:

      EXCLUDE_VOLUME ARCVOL1
      
    3. In a text editor, open the /etc/opt/novell/ncpserv.conf file, remove the SHADOW_VOLUME line for the shadow volume from the file, then save the file.

      SHADOW_VOLUME primary_volumename secondary_volume_path
      

      For example:

      SHADOW_VOLUME VOL1 /media/nss/ARCVOL1
      
    4. Restart the eDirectory daemon by entering the following commands:

      rcndsd stop
      
      rcndsd start
      
    5. Restart the NCP/NSS IPC daemon to synchronize the changes you made to the /etc/opt/novell/ncp2nss.conf file. At the terminal console prompt, enter

      /etc/init.d/ncp2nss restart
      

      If the NCP/NSS IPC daemon restarts successfully, the following messages are displayed in the terminal console:

      Shutting down Novell NCP/NSS IPC daemon...
      Exited
      Starting the Novell NCP/NSS IPC daemon.
      
    6. Repeat these steps for each node in the cluster.

  4. After the shadow and bindings information has been removed from all nodes, continue with Section 15.10.4, Preparing the Primary Pool Cluster Resource for Independent Use.
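
    Optionally, you can confirm on each node that both entries were removed. This is a minimal check using the configuration file paths from the steps above:

      grep SHADOW_VOLUME /etc/opt/novell/ncpserv.conf
      grep EXCLUDE_VOLUME /etc/opt/novell/ncp2nss.conf

    Neither command should return a line for the volumes whose shadow relationship you are removing.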

15.10.4 Preparing the Primary Pool Cluster Resource for Independent Use

In the primary pool cluster resource scripts, remove (or comment out) the lines for the management of the secondary pool, secondary volume, and shadowfs. This allows the pool cluster resource to function independently.
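
For reference, after the edits in the following steps, the load script typically contains only the commands for the primary pool and volume. The following is a minimal sketch; the primary pool name (POOL1), IP address, and NCP virtual server name are hypothetical placeholders, and your script might contain additional commands (for example, CIFS or AFP bindings):

  #!/bin/bash
  . /opt/novell/ncs/lib/ncsfuncs
  exit_on_error nss /poolact=POOL1
  exit_on_error ncpcon mount VOL1=254
  exit_on_error add_secondary_ipaddress 10.10.10.41
  exit_on_error ncpcon bind --ncpservername=CLUSTER-POOL1-SERVER --ipaddress=10.10.10.41
  exit_status_success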

  1. In iManager, select Clusters, then select My Clusters.

  2. Select the name link of the cluster you want to manage.

  3. On the Cluster Manager page, click the name link of the primary cluster resource to view its Cluster Pool Properties page, then click the Scripts tab.

  4. On the Scripts > Load Script page, modify the load script of the primary pool cluster resource:

    1. Remove or comment out the activation command for the secondary pool and the sleep command you added for the pool activation:

      #exit_on_error nss /poolact=ARCPOOL1
      #sleep 10
      
    2. Remove or comment out the ncpcon mount command for the shadow volume:

      #exit_on_error ncpcon mount VOL1=254,shadowvolume=ARCVOL1
      
    3. Add (or uncomment) a command to mount the NSS volume:

      exit_on_error ncpcon mount <volume_name>=<volume_id>
      

      Replace volume_name with the primary NSS volume name, such as VOL1.

      Replace volume_id with a number that is unique across all nodes in the cluster, such as 254.

      For example:

      exit_on_error ncpcon mount VOL1=254
      
    4. If shadowfs was used, remove or comment out the wait time for shadowfs to start.

      # If shadowfs is used, wait for shadowfs to start 
      #for (( c=1; c<=10; c++ )) do 
      # if [ ! -d /media/shadowfs/VOLUME/._NETWARE ]; then sleep 5; fi 
      #done 
      
    5. Click Apply to save your changes.

      The changes do not take effect until the cluster resource is brought online.

  5. On the Scripts > Unload Script page, modify the unload script of the primary pool cluster resource:

    1. Remove or comment out the deactivation command for the secondary pool:

      #ignore_error nss /pooldeact=ARCPOOL1
      
    2. If shadowfs was used, remove or comment out the fusermount -u command.

      # If shadowfs is used, unload the volume in FUSE 
      #ignore_error fusermount -u /media/shadowfs/VOL1
      
    3. Click Apply to save your changes.

      The changes do not take effect until the cluster resource is brought online.

  6. On the Scripts > Monitor Script page, modify the monitor script of the primary pool cluster resource:

    1. Remove or comment out the status command for the secondary pool:

      # Check the status of the secondary pool
      #exit_on_error status_fs /dev/pool/ARCPOOL1 /opt/novell/nss/mnt/.pools/ARCPOOL1 nsspool
      
    2. Click Apply to save your changes.

      The changes do not take effect until the cluster resource is brought online.

  7. Click OK to return to the Cluster Manager page.

  8. Online the revised pool cluster resource. On the Cluster Manager page, select the check box next to the pool cluster resource, then click Online.

    The resource comes online as an independent pool and volume on a node in the resource’s preferred nodes list.

    If the resource goes comatose instead of coming online, take the resource offline, check the scripts, then try again.

  9. Continue with Section 15.10.5, Preparing the Secondary Pool and Volume for Independent Use.

15.10.5 Preparing the Secondary Pool and Volume for Independent Use

When you defined the clustered DST shadow volume pair, you might have used a clustered pool or a shared-but-not-cluster-enabled pool.

Apply one of the following methods to use the pool and volume independently:

Modifying the Secondary Pool Cluster Resource

If you used a clustered secondary pool cluster resource, ensure that the volume ID is unique across all nodes in the cluster before you bring the pool resource online as an independent pool.

IMPORTANT: If you deleted the secondary pool cluster resource after you merged its commands into the primary pool cluster resource scripts, the secondary resource no longer exists. You can cluster-enable the shared-but-not-cluster-enabled pool as described in Cluster-Enabling a Shared Secondary Pool, or you can unshare the pool as described in Unsharing the Secondary Pool to Use It Locally on the Node.
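
Before you begin, you can check which volume IDs are already assigned to other resources. For example, assuming a standard Novell Cluster Services setup where local copies of the resource load scripts are kept under /var/opt/novell/ncs, the following command lists the ncpcon mount lines (and therefore the volume IDs) for the resources known to the node:

  grep -i "ncpcon mount" /var/opt/novell/ncs/*.load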

  1. In iManager, select Clusters, then select My Clusters.

  2. Select the name link of the cluster you want to manage.

  3. Go to the secondary resource’s load script and verify that the volume ID is unique for the secondary volume.

    If you have assigned the volume ID to another clustered volume while the secondary resource was unused, the duplicate volume ID will cause the secondary resource to go comatose when you try to bring it online.

    1. On the Cluster Manager page, click the name link of the secondary cluster resource to view its Cluster Pool Properties page, then click the Scripts tab.

    2. On the Scripts > Load Script page, check the volume ID to ensure that it is unique:

      exit_on_error ncpcon mount ARCVOL1=253
      
    3. Click OK to save your changes and return to the Cluster Manager page.

      The changes do not take effect until the cluster resource is brought online.

  4. Online the secondary pool cluster resource. On the Cluster Manager page, select the check box next to the secondary pool cluster resource, then click Online.

    The resource comes online as an independent pool and volume on a node in the resource’s preferred nodes list.

    If the resource goes comatose instead of coming online, take the resource offline, check the scripts, then try again.

    If the resource goes online successfully, you are finished.
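
If you prefer the command line, you can also bring the resource online from a cluster node instead of using iManager. The resource name below is a hypothetical example:

  cluster online ARCPOOL1_SERVER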

Cluster-Enabling a Shared Secondary Pool

You can cluster-enable the shared pool and volume as an independent pool cluster resource under the following conditions:

  • If you used a shared-but-not-clustered pool as the secondary pool.

    In this case, the Pool object name and Volume object name contain the name of the node where they were originally created.

  • If you used a cluster-enabled pool as the secondary and deleted the secondary pool cluster resource after you copied its commands into the DST pool cluster resource scripts.

    In this case, the Pool object name and Volume object name contain the cluster name, because the objects were recreated when you cluster-enabled them.

Before you attempt to cluster-enable the shared pool, you must update the Pool object and Volume object in eDirectory to use the hostname of the server where you took the primary pool cluster resource offline.

  1. In NSSMU, update the eDirectory objects for the shared pool and volume.

    You can alternatively use the Storage plug-in for iManager to update the eDirectory objects. Select the server where you took the clustered DST pool resource offline. Ensure that you dismount the volume and deactivate the pool after you have updated their objects.

    1. Log in as the root user on the node where you took the primary pool cluster resource offline, then open a terminal console.

    2. Launch NSSMU. At the command prompt, enter

      nssmu
      
    3. Activate the pool and update its eDirectory object to create a Pool object that is named based on the hostname of the current node.

      1. In the NSSMU menu, select Pools, then press Enter.

      2. Select the secondary pool (ARCPOOL1), then press F7 to activate it.

      3. Select the secondary pool, press F4 (Update NDS), then press y (Yes) to confirm that you want to delete the old Pool object and add a new Pool object.

      4. Press Esc to return to the NSSMU menu.

    4. Mount the volume, update its eDirectory object to create a Volume object that is named based on the hostname of the current node, then dismount the volume.

      1. In the NSSMU menu, select Volumes, then press Enter.

      2. Select the secondary volume (ARCVOL1), then press F7 to mount it.

      3. Select the secondary volume, press F4 (Update NDS), then press y (Yes) to confirm that you want to delete the old Volume object and add a new Volume object.

      4. Select the secondary volume, then press F7 to dismount it.

      5. Press Esc to return to the NSSMU menu.

    5. Deactivate the pool.

      1. In the NSSMU menu, select Pools, then press Enter.

      2. Select the secondary pool (ARCPOOL1), then press F7 to deactivate it.

    6. Press Esc twice to exit NSSMU.

  2. In iManager, select Clusters > My Clusters.

  3. Select the name link of the cluster you want to manage.

  4. Cluster-enable the shared pool.

    For detailed instructions, see Cluster-Enabling an Existing NSS Pool and Its Volumes in the OES 11 SP3: Novell Cluster Services for Linux Administration Guide.

    1. Click the Cluster Options tab, then click New.

    2. On the Resource Type page, select Pool, then click Next.

    3. On the Cluster Pool Information page:

      1. Browse to select the secondary pool, such as <hostname>_ARCPOOL1_POOL.

      2. Specify a unique IP address.

      3. Select the NCP, AFP, or CIFS check boxes for the advertising protocols that you want to enable for the volume.

        NCP is selected by default and is required to support authenticated access to data via the Novell Trustee Model. If Novell CIFS or Novell AFP is not installed, selecting its check box has no effect.

      4. If you enable CIFS, verify the default name in the CIFS Server Name field.

        You can modify this name. The name must be unique and can be up to 15 characters, which is a restriction of the CIFS protocol.

      5. Online Resource After Creation is disabled by default. This allows you to review the settings and scripts before you bring the resource online for the first time.

      6. Define Additional Properties is enabled by default. This allows you to set resource policies and preferred nodes before the resource is brought online.

      7. Click Next.

    4. On the Resource Policies page, configure the policies for the start, failover, and failback mode, then click Next.

    5. On the Resource Preferred Nodes page, assign and rank order the preferred nodes to use for the resource, then click Finish.

  5. (Optional) Enable monitoring for the pool cluster resource.

    1. On the Cluster Options page, select the name link for the resource to open its Properties page.

    2. Click the Monitoring tab.

    3. Select Enable Resource Monitoring, set the Polling Interval, Failure Rate, and Failure Action, then click Apply.

    4. Click the Scripts tab, then click Monitor Script.

    5. View the script settings and verify that they are as desired.

    6. If you modify the script, click Apply.

    7. Click OK.

  6. Bring the pool cluster resource online. Click the Cluster Manager tab, select the resource check box, then click Online.

    If the resource goes online successfully, you are finished.

Unsharing the Secondary Pool to Use It Locally on the Node

You can unshare the shared pool and volume and use them locally on the node under the following conditions:

  • If you used a shared-but-not-clustered pool as the secondary pool.

    In this case, the Pool object name and Volume object name contain the name of the node where they were originally created.

  • If you used a cluster-enabled pool as the secondary and deleted the secondary pool cluster resource after you copied its commands into the DST pool cluster resource scripts.

    In this case, the Pool object name and Volume object name contain the cluster name, because the objects were recreated when you cluster-enabled them.

Before you attempt to use the pool and volume locally, you must update the Pool object and Volume object in eDirectory to use the hostname of the server where you took the primary pool cluster resource offline.

To mount the secondary volume as an independent local volume:

  1. Log in as the root user on the node where you took the primary pool cluster resource offline, then open a terminal console.

  2. Launch NSSMU. At the command prompt, enter

    nssmu
    
  3. In NSSMU, update the eDirectory objects for the shared pool and volume.

    You can alternatively use the Storage plug-in for iManager to update the eDirectory objects. Select the server where you took the clustered DST pool resource offline.

    1. Activate the pool and update its eDirectory object to create a Pool object that is named based on the hostname of the current node.

      1. In the NSSMU menu, select Pools, then press Enter.

      2. Select the secondary pool (ARCPOOL1), then press F7 to activate it.

      3. Select the secondary pool, press F4 (Update NDS), then press y (Yes) to confirm that you want to delete the old Pool object and add a new Pool object.

      4. Press Esc to return to the NSSMU menu.

    2. Mount the volume, then update its eDirectory object to create a Volume object that is named based on the hostname of the current node.

      1. In the NSSMU menu, select Volumes, then press Enter.

      2. Select the secondary volume (ARCVOL1), then press F7 to mount it.

      3. Select the secondary volume, press F4 (Update NDS), then press y (Yes) to confirm that you want to delete the old Volume object and add a new Volume object.

      4. Press Esc to return to the NSSMU menu.

  4. In the NSSMU menu, select Devices, then press Enter.

  5. Disable sharing for the device. Select the device, press F6 to unshare the device, then press y (Yes) to confirm.

    Before you unshare the device, ensure that the device contains only the pool that you are changing to local use. It should not contain other shared pools or SBD partitions.

    If NSSMU does not allow you to unshare the device, you can use the SAN management software to ensure that the device is allocated only to the current server, and then try again.

  6. In the NSSMU menu, select Pools, then press Enter.

  7. Select the pool and verify that it is unshared.

  8. Press Esc twice to exit NSSMU.
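
After the pool is unshared, you can activate it and mount its volume locally whenever needed, either in NSSMU or from the command line. The following is a minimal sketch as the root user, using the example pool and volume names from this section:

  nss /poolact=ARCPOOL1
  ncpcon mount ARCVOL1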