Novell Business Continuity Clustering 2.0 for OES 11 SP1 was released in July 2013. For purchasing information, see the Novell Business Continuity Clustering product page.
Updates were made to the following sections. The changes are explained below.
- Section E.2.1, Clusters Plug-In Changes for Novell iManager 2.7.5
- Section E.2.2, Comparing Novell Cluster Services for Linux and NetWare
- Section E.2.3, Comparing Resources Support for Linux and NetWare
- Section E.2.5, Configuring and Managing Cluster Resources for OES Services
- Section E.2.6, Configuring and Managing Cluster Resources for Shared LVM Volume Groups
- Section E.2.7, Configuring and Managing Cluster Resources for Shared NSS Pools and Volumes
- Section E.2.8, Configuring Cluster Policies, Protocols, and Properties
- Section E.2.10, Installing, Configuring, and Repairing Novell Cluster Services
- Section E.2.15, Upgrading Clusters from OES 2 SP3 to OES 11x
- Section E.2.17, What’s New or Changed in Novell Cluster Services

Section D.0, Clusters Plug-In Changes for Novell iManager 2.7.5, is new. It describes the changes to the Clusters plug-in for Novell iManager 2.7.5 and compares how to perform common tasks in the old and new versions of the plug-in.
This information was relocated to the OES 11 SP3: Novell Cluster Services NetWare to Linux Conversion Guide.
This information was relocated to the OES 11 SP3: Novell Cluster Services NetWare to Linux Conversion Guide.
| Location | Change |
| --- | --- |
| Section 10.2, Setting Up a Personalized List of Resources to Manage | Your personalized My Resources list and display settings are saved on the iManager server. If scrolling is needed to view the complete list, you can use the Refresh option to set a longer refresh rate for the page in order to allow enough time to view the status for all items. |
| Section 10.2.5, Updating the My Resources List after Renaming or Deleting a Resource | This section is new. |
| Section 10.7.3, Monitoring Services that Are Critical to a Resource | Beginning in OES 11 SP2, NCS provides the ability to monitor the status of the eDirectory daemon (ndsd) at the NCS level. It is disabled by default. The monitoring can be set independently on each node. It runs whenever NCS is running on the node. See Section 8.8, Configuring NCS to Monitor the eDirectory Daemon (ndsd).<br>Added links to sample monitor scripts for clustered LVM volume groups that were created in NSSMU or NLVM. |
| Section 10.5, Configuring a Load Script for a Cluster Resource<br>Section 10.6, Configuring an Unload Script for a Cluster Resource<br>Section 10.7, Enabling Monitoring and Configuring the Monitor Script | Apply the updated scripts by taking the resource offline and then bringing it online on the same node (see the first example after this table). Alternatively, after the updated scripts have been synchronized from eDirectory to the source and destination nodes, the updated scripts are used automatically on system failover or cluster migration. For more information, see Section 10.8, Applying Updated Resource Scripts by Offline/Online, Failover, and Migration. |
| Section 10.8, Applying Updated Resource Scripts by Offline/Online, Failover, and Migration | This section is new. |
| Section 10.1.2, Using Parameter Values with Spaces in a Cluster Script | In cluster scripts, if a parameter value contains spaces, you must enclose the value with both double quotes (") and single quotes ('). For example, "'quoted text'". See the second example after this table. |
| Section 10.1.3, Using Double Quotation Marks in a Cluster Script | This section is new. |
| Section 10.10, Configuring Preferred Nodes and Node Failover Order for a Resource | When you change the failover order of nodes in a resource’s Preferred Nodes list, the change takes effect immediately when you click OK or Apply if the resource is running. Otherwise, it takes effect the next time the resource is brought online. IMPORTANT: Ensure that you prepare the node for the services in the resource before you migrate or fail over the resource to it. |
| Step 8 in Section 10.10, Configuring Preferred Nodes and Node Failover Order for a Resource | If you remove nodes from the Assigned list in the Edit view, the removed nodes are automatically added to the Unassigned list when you save. |
| Section 8.7, Configuring Cascade Failover Prevention<br>Section 8.9, Configuring the Cluster Node Reboot Behavior<br>Section 8.10, Configuring STONITH<br>Section 8.11, Viewing or Modifying the NCS Proxy User Assignment in the NCS Management Group | The sections noted were moved from this chapter to Section 8.0, Configuring Cluster Policies, Protocols, and Properties. |
| | This section was revised for clarity. NSS tools have been modified so that it is not necessary to stop NCS in order to disable the Shareable for Clustering setting. The device should not contain partitions that are used by shared pools, clustered pools, or SBDs. |

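As a sketch of the offline/online step referenced above, using the cluster console commands (POOL1_SERVER is a hypothetical resource name):

```bash
# Take the resource offline, then bring it online on the same node so
# that the updated load, unload, and monitor scripts take effect.
# POOL1_SERVER is a hypothetical resource name.
cluster offline POOL1_SERVER
cluster online POOL1_SERVER
```
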
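As an illustration of the quoting rule for parameter values with spaces, a hypothetical load script fragment follows; SOME_COMMAND and its argument are placeholders, not NCS commands:

```bash
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs

# A value that contains spaces is wrapped in single quotes inside
# double quotes, as in "'quoted text'". SOME_COMMAND and the
# description value are illustrative placeholders.
exit_on_error SOME_COMMAND --description="'my clustered volume'"
```
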
| Location | Change |
| --- | --- |
| | This table provides links to information about creating resources for different file system solutions. |
| Table 11-3, Clustering Linux Services with Novell Cluster Services | This table provides links to information about creating resources for some Linux services. |

| Location | Change |
| --- | --- |
| Section 13.3, Configuring an LVM Volume Group Cluster Resource with NSS Management Tools | If you enable NCP for the volume, this name must comply with the Naming Conventions for NCP Volumes. Lowercase letters are automatically changed to uppercase for the NCP volume name. |
| Section 13.3, Configuring an LVM Volume Group Cluster Resource with NSS Management Tools | Ensure that you log in to the master node to create clustered LVM volume groups with NSSMU and NLVM commands. Typically, the volume creation takes less than 10 seconds. However, if you have a large tree or if the server does not hold an eDirectory replica, the create time can take up to 3 minutes. When an LVM volume group is clustered, CLVM manages the share state, not the device. NSSMU reports the device as Not Shareable for Clustering, and reports the clustered volume as Shareable for Clustering. |
| Section 13.4.2, Creating a Generic File System Cluster Resource for an LVM Volume Group | Ensure that the SAN device is attached to all of the nodes in the cluster. Ensure that the existing LVM volume you want to cluster-enable is deactivated on all nodes in the cluster (see the example after this table). Be mindful that Linux path names are case sensitive. The mount path must already exist. If the resource goes comatose, typical issues are that the target LVM volume is still active locally on a node, the path names were not entered as case sensitive, the IP address is not unique in the network, or the mount path does not exist. |
| Section 13.11, Deleting a Clustered LVM Volume Group and Logical Volume | This section is new. |

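As a minimal sketch of the deactivation check in the Section 13.4.2 row, using standard LVM2 commands (vg1 is a hypothetical volume group name):

```bash
# Run on each node in the cluster. vg1 is a hypothetical volume group.
# Deactivate every logical volume in the volume group on this node.
vgchange -a n vg1

# Confirm that no logical volume in vg1 is still marked ACTIVE.
lvscan | grep vg1
```
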
| Location | Change |
| --- | --- |
| Section 12.0, Configuring and Managing Cluster Resources for Shared NSS Pools and Volumes | All sample load scripts and unload scripts for NSS pool cluster resources were updated to reflect the new default sequence of commands for OES 11 SP1. Newly created pool cluster resources use the new sequence of commands in a load script; the order is reversed in an unload script. A representative script is sketched in the first example after this table. |
| | If the objects are in different contexts, you might need to cluster migrate the pool cluster resource back to the context where the pool was created in order to modify the pool or volume, or to perform other tasks like setting up Distributed File Services junctions or home directories. You receive an eDirectory error if the operation cannot find the information that it needs in the same context. |
| | WARNING: In EMC PowerPath environments, do not use the rescan-scsi-bus.sh utility provided with the operating system or the HBA vendor scripts for scanning the SCSI buses. To avoid potential file system corruption, EMC requires that you follow the procedure provided in the vendor documentation for EMC PowerPath for Linux. |
| Section 12.4.1, Creating a Cluster-Enabled Pool and Volume with iManager<br>Section 12.4.2, Creating a Cluster-Enabled Pool and Volume with NSSMU | Ensure that the SAN device is attached to all of the nodes in the cluster. Ensure that you log in to the master node to create clustered pools and volumes. Typically, the pool creation takes less than a minute. The volume creation takes less than 10 seconds. However, if you have a large tree or if the server does not hold an eDirectory replica, the create time can take up to 3 minutes.<br>In the ncpcon mount command, place the /opt switch before the volume information: `ncpcon mount /opt=ns=long <VOLUMENAME>=<VOLUMEID>` |
| Section 12.10, Adding NFS Export for a Clustered Pool Resource | Before you use the exportfs(8) command in resource scripts, ensure that you install the nfs-kernel-server package on all of the cluster nodes in the resource’s preferred nodes list. This package contains the support utilities for the kernel nfsd daemon. It is not installed by default. You can use the YaST > Software > Software Management tool to find and install the package.<br>Add the following commands in the load script before the exportfs command (see the second example after this table):<br>`# Start the NFS service`<br>`exit_on_error rcnfsserver start` |
| | This section was updated for clarity. |
| Section 12.12.3, Renaming a Shared Pool and Its Resource Objects | This section is new. |
| Section 12.12.4, Renaming a Shared Pool in a DST Cluster Resource | This section is new. |
| | This section is new. |
| | This section is new. |
| | This section is new. |
| | This section is new. |
| | This section was updated for clarity and moved to the NSS chapter. NSS tools have been modified so that it is not necessary to stop NCS in order to disable the Shareable for Clustering setting. The device should not contain partitions that are used by shared pools, clustered pools, or SBDs. |

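For reference, a representative OES 11 SP1-style load script and its reversed unload script are sketched below. The pool name, volume name and ID, IP address, and NCP server name are placeholders, and the scripts that NCS generates for your resource are the authoritative versions:

```bash
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs

# Illustrative load script for a pool cluster resource. POOL1, VOL1,
# the volume ID 254, the IP address, and the NCP server name are
# placeholders.
exit_on_error nss /poolact=POOL1
# Place the /opt switch before the volume information.
exit_on_error ncpcon mount /opt=ns=long VOL1=254
exit_on_error add_secondary_ipaddress 10.10.10.44
exit_on_error ncpcon bind --ncpservername=CLUS1-POOL1-SERVER --ipaddress=10.10.10.44

exit 0
```

```bash
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs

# Illustrative unload script: the same commands in reverse order.
ignore_error ncpcon unbind --ncpservername=CLUS1-POOL1-SERVER --ipaddress=10.10.10.44
ignore_error del_secondary_ipaddress 10.10.10.44
ignore_error nss /pooldeact=POOL1

exit 0
```
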
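And a sketch of the NFS export addition from the Section 12.10 row; the export path, client wildcard, and export options are illustrative assumptions, while starting the NFS service before exportfs comes from the documentation:

```bash
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs

# ... pool activation, volume mount, and IP/NCP bind commands as
# generated for the resource ...

# Start the NFS service (requires the nfs-kernel-server package on all
# preferred nodes for this resource).
exit_on_error rcnfsserver start

# Export the clustered volume. Path, client, and options are
# illustrative placeholders.
exit_on_error exportfs -o rw,sync,no_root_squash *:/media/nss/VOL1

exit 0
```
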
| Location | Change |
| --- | --- |
| | This section is new. |
| | This section is new. |
| Section 8.2, Setting Up a Personalized List of Clusters to Manage | Your personalized My Clusters list and display settings are saved on the iManager server. If scrolling is needed to view the complete list, you can use the Refresh option to set a longer refresh rate for the page in order to allow enough time to view the status for all items. |
| | If the master IP address is having a problem, the My Clusters page of the Clusters plug-in allows you to use the IP address of a specified node (preferably the master node) to manage the cluster. |
| | Cascade failover prevention is supported in OES 2 SP3 and later. You can create a configuration file in the /etc/modprobe.d directory that contains the flag setting to enable and disable the Cascade Failover Prevention function for the server. |
| Section 8.7.1, Understanding the Cascade Failover Prevention Quarantine | A resource might be quarantined if it is likely responsible for several consecutive server failures, unrelated to interference from failures of other resources. |
| Section 8.8, Configuring NCS to Monitor the eDirectory Daemon (ndsd) | This section is new. |
| Section 8.12, Viewing or Modifying the Cluster Master IP Address or Port | If the master IP address is having a problem, the My Clusters page of the Clusters plug-in allows you to use the IP address of a specified node (preferably the master node) to manage the cluster while you fix the master IP address. |
| Section 8.12, Viewing or Modifying the Cluster Master IP Address or Port | A cluster restart is required before iManager can continue to manage the cluster with its new master IP address. |
| | This section was relocated here from the “Managing a Cluster” section. |
| | This section was modified to also address Linux volume cluster resources. |
| | This section is new. |

| Location | Change |
| --- | --- |
| | You can use the Linux rescan-scsi-bus.sh utility to scan for new devices. WARNING: In EMC PowerPath environments, do not use the rescan-scsi-bus.sh utility provided with the operating system or the HBA vendor scripts for scanning the SCSI buses. To avoid potential file system corruption, EMC requires that you follow the procedure provided in the vendor documentation for EMC PowerPath for Linux. |
| | This section is new. |
| | This section is new. |
| Section A.10, ncs_resource_scripts.pl Script (Modifying Resource Scripts Outside of iManager) | This section is new. |

| Location | Change |
| --- | --- |
| Section 5.7.2, Installing the Clusters Plug-In<br>Step 5 in Section 5.7.3, Uninstalling and Reinstalling the Clusters Plug-In | Verify that the /var/opt/novell/iManager/nps/WEB-INF/lib directory contains the class loader file commons-lang-2.6.jar (or later version), then remove all earlier versions of the file. You must remove the earlier versions (such as commons-lang.jar or commons-lang-2.4.jar) to ensure that iManager loads the 2.6 or later version of the class loader file. See the example after this table. |
| Section 5.9, Removing the NCS Configuration Information from a Node | This section is new. |

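A minimal sketch of the class loader cleanup from the Clusters plug-in row above; the exact set of older files present on your server may differ:

```bash
# Verify that the 2.6 (or later) class loader file is present.
ls /var/opt/novell/iManager/nps/WEB-INF/lib/commons-lang*.jar

# Remove earlier versions so that iManager loads the 2.6 version.
# Remove only the older copies that actually exist on your server.
rm /var/opt/novell/iManager/nps/WEB-INF/lib/commons-lang.jar
rm /var/opt/novell/iManager/nps/WEB-INF/lib/commons-lang-2.4.jar
```
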
| Location | Change |
| --- | --- |
| | Novell Cluster Services is a kernel-space application. As with any kernel-space application, unloading it is a best effort and is not guaranteed to stop the modules. If Novell Cluster Services is busy providing services from the kernel when you attempt to stop it, the process might not stop, and a reboot might be required. |
| | If the master IP address is having a problem, the My Clusters page of the Clusters plug-in allows you to use the IP address of a specified node (preferably the master node) to manage the cluster. |
| | If scrolling is needed to view the complete list, you can use the Refresh option to set a longer refresh rate for the page in order to allow enough time to view the status for all items. |
| | This section is new. The Time Interval setting replaces the Start Date and Time field on the Event Log Filter page. |
| Section 9.7, Viewing the Cluster Event Log at the Command Line | This section is new. |
| Section 9.8, Viewing Summaries of Failed or Incomplete Events from the *.out Log Files | This section is new. |
| Section 9.14, Joining a Node to the Cluster (Rejoining a Cluster) | While the node is not active in the cluster, any changes in storage objects (such as creating, deleting, or expanding clustered pools) are not updated automatically for the node. After you rejoin the node to the cluster, use the nlvm rescan command to recognize the storage changes (see the example after this table). |

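A minimal sketch of the rejoin-and-rescan sequence from the Section 9.14 row, run on the node that was out of the cluster:

```bash
# Rejoin the node to the cluster.
cluster join

# Recognize storage changes (such as clustered pools that were created,
# deleted, or expanded while the node was out of the cluster).
nlvm rescan
```
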
| Location | Change |
| --- | --- |
| | In addition to Ethernet NICs, Novell Cluster Services supports VLAN on NIC bonding in OES 11 SP2 and later. No modifications to scripts are required. You can use ethx or vlanx interfaces in any combination for nodes in a cluster. |
| | The SLP refresh interval is now based on the eDirectory advertise-life-time setting. |
| | Novell Business Continuity Clustering 2.0 for OES 11 SP1 is supported in the Clusters plug-in for OES 11 SP1 with the latest patches applied. |
| | All devices that contribute space to a clustered pool must be able to fail over with the pool cluster resource. You must use the device exclusively for the clustered pool; do not use space on it for other pools or for Linux volumes. A device must be marked as Shareable for Clustering before you can use it to create or expand a clustered pool. |
| | NLVM and other NSS management tools need the SBD to detect if a node is a member of the cluster and to get exclusive locks on physically shared storage. Because an SBD partition ends on a cylinder boundary, the partition size might be slightly smaller than the size you specify. When you use an entire device for the SBD partition, you can use the ‘max’ option as the size and let the software determine the size of the partition. |
| | Any NSS software RAID0/5 device that is used for a clustered pool must contribute space exclusively to that pool; it cannot be used for other pools. This allows the device to fail over between nodes with the pool cluster resource. Ensure that its component devices are marked as Shareable for Clustering before you use a RAID0/5 device to create or expand a clustered pool. |

| Location | Change |
| --- | --- |
| | Added links to procedures to view the Event Log. Added information about the DotOutParser utility. |

| Location | Change |
| --- | --- |
| | This section was removed. Cascade failover prevention is supported in OES 11 and later. See Section 8.7, Configuring Cascade Failover Prevention. |
| | This section was removed. It is not an issue in OES 11 and later. |
| | This section is new. |
| | This section is new. |
| Section 16.10, iManager: Problems with Clusters Plug-In After Update for Novell iManager 2.7.5 | This section is new. |
| Section 16.18, Error NFS Stale Handle Reported on Host when Cluster Failover Occurs | This section is new. |
| Section 16.19, Error 20897 - This node is not a cluster member | This section is new. |
| | This section is new. |

| Location | Change |
| --- | --- |
| Section 7.0, Upgrading Clusters from OES 2 SP3 to OES 11x | IMPORTANT: You can upgrade a cluster from OES 2 SP3 to OES 11 or from OES 2 SP3 to OES 11 SP1. The same release version of OES 11 must be installed and running on each new or upgraded node in the cluster. The procedures in this section refer to OES 11x. A specific version of OES 11 is mentioned only if the procedure applies only to that version of the platform. |
| Section 7.4, Adding OES 11x Nodes to an OES 2 SP3 Cluster (Rolling Cluster Upgrade) | During a rolling cluster upgrade, one OES 11x server is added at a time, either by adding a new node or by upgrading an existing OES 2 SP3 node. |
| Section 7.6, Upgrading a Cluster with DST Resources from OES 2 SP3 to OES 11x | If you replace an OES 2 SP3 node with an OES 11x node instead of using an in-place upgrade to OES 11x, you must manually configure the global policies and move the All Shadowed Volumes policies data to the OES 11x node. |

| Location | Change |
| --- | --- |
| Section 6.2, Upgrading Nodes in the Cluster (Rolling Cluster Upgrade) | If autostart is configured for the node, to prevent NCS from automatically loading during the upgrade, run `chkconfig -s novell-ncs off` after NCS has been stopped (before the upgrade), and `chkconfig -s novell-ncs on` after NCS has been started (after the upgrade). See the example after this table. |
| | This section is new. Section 6.3.1, Updating the Clusters Plug-in and Role-Based Services in iManager, was moved to this section. |
| Section 6.3.2, DHCP Cluster Resource: Path Change for dhcpd.pid | This issue is new. |

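A sketch of the autostart handling described in the Section 6.2 row; rcnovell-ncs is assumed to be the NCS init script on the node:

```bash
# Before the upgrade: stop NCS, then disable autostart so that NCS
# does not load automatically during the upgrade.
rcnovell-ncs stop
chkconfig -s novell-ncs off

# ... upgrade the node to OES 11 SP1 ...

# After the upgrade: start NCS, then re-enable autostart.
rcnovell-ncs start
chkconfig -s novell-ncs on
```
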
| Location | Change |
| --- | --- |
| | This section is new. |