5.11 Adding a Single Node That Was Previously in a Cluster

Use the procedure in this section to completely rebuild a cluster node that has been destroyed, or to reinstall a cluster node with a newer release of OES. The name and IP address of the node must be the same as those of the node it replaces.

IMPORTANT:

  • While the node is being rebuilt, it must not have access to the shared storage. Reinstalling a server that has access to the shared storage can result in the complete loss of shared resources.

  • While a node is temporarily removed from the cluster for rebuilding, do not extend the cluster with an additional node. Otherwise, the new node might be assigned the node number of the temporarily removed node.

Prerequisites

  • Before you begin replacing the cluster node, it is recommended that you create a cluster configuration report and note which resources are configured to use the node being replaced. For information on creating a cluster configuration report, see Section 9.9, Generating a Cluster Configuration Report.

  • You also need to note whether any eDirectory replicas are stored on this node.

    In iManager > View Objects > Browse, browse to the NCP server context representing the cluster node. Select the NCP Server object and perform the task Replica View. For information on viewing a partition’s replica, see Viewing a Partition’s Replicas in the NetIQ eDirectory Administration Guide.

  • In preparation for the reinstallation, all eDirectory objects representing the node being replaced and its services must be removed from eDirectory.

    In iManager > View Objects > Search, enter <HostName> as the search pattern. This lists all eDirectory objects representing the node. First, delete only the NCP Server object. This can take a couple of minutes.

    After the NCP Server object has been deleted, click the Multiple Select link and repeat the search. Delete all remaining objects related to the node being reinstalled, including the object of class Unknown that represents the SYS volume, and the cluster node object.

    If the deleted node is still known to the active cluster nodes, force the cluster to recognize that the node has been permanently removed. See Step 5 of Section 5.10, Removing a Node from a Cluster.

  • Verify that the deletions have been processed and that the deleted node is no longer known to any of the active cluster nodes by running the cluster view command on each active node.
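The final verification above can be scripted on any remaining active node. This is a minimal sketch, assuming the node being rebuilt is named node3 (a hypothetical name; substitute your own):

```shell
#!/bin/bash
# Hypothetical name of the node being rebuilt; substitute your own.
NODE_NAME="node3"

# 'cluster view' lists the nodes currently known to this cluster node.
# The removed node must no longer appear in the output.
if cluster view 2>/dev/null | grep -qw "$NODE_NAME"; then
    echo "WARNING: $NODE_NAME is still known to the cluster"
    RESULT="stale"
else
    echo "OK: $NODE_NAME is no longer known to the cluster"
    RESULT="removed"
fi
```

If the node is still listed, force the cluster to recognize that it has been permanently removed, as described in Step 5 of Section 5.10, Removing a Node from a Cluster, before proceeding.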

Replacing the Cluster Node

  1. Install OES with the same patterns that are installed on the other cluster nodes, including OES Cluster Services, and apply the latest patches. Ensure that you use the same node name and IP address as the node that was previously in the cluster.

  2. Add any eDirectory replicas that were stored on the server before it was removed from the tree.

  3. Deploy the same multipath.conf and bindings file (if any) as used by the other cluster nodes. Restore access to the shared storage and reboot the node being rebuilt. When the node is up, use nlvm list devices to verify that it has access to the same devices as the active cluster nodes.

  4. Verify that the reinstalled server has access to the SBD of the cluster it will be added to by using sbdutil -f -s -n <ClusterName>.

  5. Start YaST oes-install. On the Micro Focus Open Enterprise Server Configuration page under OES Cluster Services, enable the configuration, then click the OES Cluster Services link to open the OES Cluster Services Configuration wizard.

    For information, see Section 5.5.4, Accessing the OES Cluster Services Configuration Page in YaST.

  6. Configure the replacement node for the existing cluster. Enter or browse to the cluster name. Leave the other settings at their default values, then click Next. If your cluster node has multiple NICs, be sure to select the correct IP address on the third page of the cluster configuration.

    See Section 5.5.6, Adding a Node to an Existing Cluster.

    After you restart OES Cluster Services, the replacement server joins the cluster with its old identity.
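The storage checks in Step 3 and Step 4, and the final membership check, can be sketched as a single verification script run on the rebuilt node. The node names node3 and node1 and the cluster name mycluster are hypothetical assumptions; the command flags are those shown in the steps above. The script also assumes password-free SSH access to the active peer node.

```shell
#!/bin/bash
# Post-rebuild verification sketch. All names are hypothetical; substitute
# your own node, peer, and cluster names.
PEER="node1"              # an active cluster node
CLUSTER_NAME="mycluster"  # the cluster being joined
NODE_NAME="node3"         # this (rebuilt) node

# Step 3: compare shared-device visibility with an active cluster node.
nlvm list devices > /tmp/devices.local 2>/dev/null
ssh -o BatchMode=yes -o ConnectTimeout=5 root@"$PEER" \
    'nlvm list devices' > /tmp/devices.peer 2>/dev/null
if diff -q /tmp/devices.local /tmp/devices.peer >/dev/null; then
    echo "Device lists match"
else
    echo "Device lists differ - check multipath.conf and the bindings file"
fi

# Step 4: confirm the SBD partition of the target cluster is visible.
if sbdutil -f -s -n "$CLUSTER_NAME" 2>/dev/null; then
    echo "SBD partition for $CLUSTER_NAME found"
else
    echo "SBD partition not found - do not join the cluster yet"
fi

# After Step 6: confirm the node rejoined under its old identity.
if cluster view 2>/dev/null | grep -qw "$NODE_NAME"; then
    echo "$NODE_NAME has rejoined the cluster"
else
    echo "$NODE_NAME is not yet visible in the cluster"
fi
```

Run the first two checks after the reboot in Step 3 and before starting the YaST configuration in Step 5; run the last check only after OES Cluster Services has been restarted.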