13.12 Deleting a Clustered LVM Volume (Created in NSSMU or NLVM)

We strongly recommend that you delete a cluster-enabled LVM volume only from the master node in the cluster. This allows the cluster information to be automatically updated.

The procedures in this section assume that you created the clustered LVM volume in NSSMU or NLVM. There are default naming conventions applied by these tools that might not apply to LVM volume groups that you created and cluster-enabled by using native LVM tools and the Generic File System template.

WARNING: Deleting an LVM volume destroys all data on it.

NSSMU and the nlvm delete linux volume <volume_name> command delete the cluster-enabled LVM volume as well as the related objects in the file system and in eDirectory:

  • Linux LVM volume group and logical volume from the file system

  • Cluster Resource object for the LVM resource

  • If the LVM volume is NCP-enabled:

    • Volume object for the LVM volume

    • Virtual server for the LVM resource (NCS:NCP Server object)
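
For example, deleting a hypothetical cluster-enabled LVM volume named vol44 (the name is used here only for illustration) would use the following command:

  nlvm delete linux volume vol44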

When the LVM volume resides on the master node, the cluster information is automatically updated.

When the LVM volume resides on a non-master node, additional steps are required to update the cluster information. A cluster restart might be needed to force the information to be updated.

Use the following procedures to delete a cluster-enabled LVM volume:

13.12.1 Deleting a Cluster-Enabled LVM Volume on the Master Node

  1. If the LVM resource is on a non-master node in the cluster, migrate it to the master node. As the root user, open a terminal console, then enter

    cluster migrate <resource_name> <master_node_name>
    

    To migrate the resource, the master node must be in the resource’s preferred nodes list.
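
    For example, assuming a resource named vol44_resource and a master node named clusternode1 (both names are hypothetical):

    cluster migrate vol44_resource clusternode1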

  2. Use the cluster status command to check the resource status. If the resource is online or comatose, take it offline.

    As the root user, enter

    cluster offline <resource_name>
    

    Use the cluster status command to verify that the resource has a status of Offline before you continue.
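
    For example, for a hypothetical resource named vol44_resource, the check, offline, and verify sequence is:

    cluster status
    cluster offline vol44_resource
    cluster status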

  3. Delete the LVM volume on the master node by using NSSMU.

    You can alternatively use the nlvm delete linux volume <lx_volume_name> command.

    1. In NSSMU, select Linux Volumes, then press Enter.

    2. Select the unmounted LVM volume, then press Delete.

    3. Select OK to confirm, then press Enter.

  4. In the Tree View in iManager, browse the objects to verify that the following objects were deleted:

    • LVM resource object (from the Cluster container)

    • If the LVM volume was NCP-enabled:

      • Volume object for the LVM volume

      • Virtual server for the LVM resource (NCS:NCP Server object)

  5. Re-initialize the device that contained the LVM volume.

    When NLVM or NSSMU removes the Linux LVM volume group, it leaves the device in an uninitialized state.
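
    You can alternatively initialize the device from the command line with the nlvm init command. The following is a sketch only; the device name sdb is hypothetical, and you should verify the format= option syntax for your OES release:

    nlvm init sdb format=gpt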

    1. In NSSMU, select Devices, then press Enter.

    2. Select the device, then press F3 (Initialize).

    3. Press y (Yes) to confirm.

    4. Select DOS or GPT as the partitioning scheme, then press Enter.

  6. (Optional) Use a third-party SAN management tool to assign the device to only the desired server.

13.12.2 Deleting a Cluster-Enabled LVM Volume on a Non-Master Node

  1. Log in as the root user to the non-master node where the cluster resource currently resides, then open a terminal console.

  2. Use the cluster status command to check the resource status. If the resource is online or comatose, take it offline.

    As the root user, enter

    cluster offline <resource_name>
    

    Use the cluster status command to verify that the resource has a status of Offline before you continue.

  3. At the command prompt on the non-master node, enter

    /opt/novell/ncs/bin/ncs-configd.py -init
    
  4. Look at the file /var/opt/novell/ncs/resource-priority.conf to verify that it has the same information (REVISION and NUMRESOURCES) as the file on the master node.
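
    For example, you could display the values on each node and compare them (this assumes that REVISION and NUMRESOURCES appear as literal keywords in the file):

    grep -E "REVISION|NUMRESOURCES" /var/opt/novell/ncs/resource-priority.conf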

  5. Delete the LVM volume on the non-master node by using NSSMU.

    You can alternatively use the nlvm delete linux volume <lx_volume_name> command.

    1. In NSSMU, select Linux Volumes, then press Enter.

    2. Select the unmounted LVM volume, then press Delete.

    3. Select OK to confirm, then press Enter.

  6. In the Tree View in iManager, browse the objects to verify that the following objects were deleted:

    • LVM resource object (from the Cluster container)

    • If the LVM volume was NCP-enabled:

      • Volume object for the LVM volume

      • Virtual server for the LVM resource (NCS:NCP Server object)

  7. Re-initialize the device that contained the LVM volume.

    When NLVM or NSSMU removes the Linux LVM volume group, it leaves the device in an uninitialized state.

    1. In NSSMU, select Devices, then press Enter.

    2. Select the device, then press F3 (Initialize).

    3. Press y (Yes) to confirm.

    4. Select DOS or GPT as the partitioning scheme, then press Enter.

  8. On the master node, log in as the root user, open a terminal console, then enter

    /opt/novell/ncs/bin/ncs-configd.py -init
    
  9. Look at the file /var/opt/novell/ncs/resource-priority.conf to verify that it has the same information (REVISION and NUMRESOURCES) as that of the non-master node where you deleted the cluster resource.

  10. In iManager, select Clusters > My Clusters, select the cluster, then select the Cluster Options tab.

  11. Click Properties, select the Priorities tab, then click Apply on the Priorities page.

  12. At a command prompt, enter

    cluster view
    

    The cluster view should be consistent.

  13. Look at the file /var/opt/novell/ncs/resource-priority.conf on the master node to verify that the revision number increased.

    If the revision number increased, skip Step 14.

    If the deleted resource was the only resource in the cluster, the priority change does not force the update, and a phantom resource might appear in the interface. You must restart Cluster Services to force the update, which also removes the phantom resource.

  14. If the revision number did not automatically update in the previous steps, restart Novell Cluster Services by entering the following on one node in the cluster:

    cluster restart [seconds]
    

    For seconds, specify a value of 60 or more.

    For example:

    cluster restart 120
    
  15. (Optional) Use a third-party SAN management tool to assign the devices to only the desired server.