2.1 eDirectory Cluster Objects

The NCS installation process creates a set of eDirectory objects to represent all cluster configurations and resources. These eDirectory objects store information about each cluster, including its properties, configuration attributes, policies, node addresses, and enabled resources. As shown in Figure 2-1, the eDirectory objects created for NCS include the following:

Figure 2-1 Relationship Between Cluster Services NLMs and Cluster Schema

2.1.1 Cluster Container

An eDirectory cluster container object represents each cluster of nodes on the network. The cluster container holds the cluster's configuration parameters, which are cluster-wide settings that typically need to be set only once. The main configurable settings for the cluster container object are as follows:

  • Quorum Trigger: Specifies how many nodes must be active within the cluster before any cluster resources start to load. This number is also known as the quorum membership. Generally, this number is set greater than one so that all cluster resources do not automatically load on the first server that is brought up in the cluster. A configurable timeout setting is associated with the quorum trigger. If the timeout occurs before full quorum membership is reached, cluster resources begin to load on the nodes that have already joined the cluster, even though full quorum membership has not yet been reached.

  • Heartbeat Settings: Each node in the cluster must transmit a signal to the master node to let the master know that the node is still active. This signal is called the heartbeat. The heartbeat setting determines how often each node must transmit this signal. The tolerance setting determines how long the master waits for a heartbeat before assuming that the node has gone down and initiating the group membership protocols to verify whether that node is actually down. Similarly, the master watchdog and slave watchdog settings specify how often the master node transmits its alive status to all the other nodes and how long those nodes wait for that signal before initiating the group membership protocols to verify that the master has stopped running.

  • Cluster Management Port: Enables network administrators to specify which TCP/IP port number the management tools use to connect to the cluster. When the cluster is created, the port number is automatically assigned. The cluster port number does not need to be changed unless another resource creates a conflict by using the same port number. If necessary, you can change the cluster port number using the ConsoleOne® properties page or by writing a program that changes the eDirectory attribute of the cluster container object that contains the port number.

2.1.2 Cluster Node

The Cluster Node object stores the node number and the physical TCP/IP address of each node in the cluster. This eDirectory object is used by NCS to create an alias for the NCP™ server that it represents.

2.1.3 Cluster Resource

A cluster resource must be created for every resource or application that runs on a node within a cluster. Cluster resources can include Web servers, e-mail servers, databases, and any other server-based applications or services that always need to be available to users.

The cluster resource object holds all configuration information for these applications and services. It also identifies the preferred node on which each individual resource should run, as well as the nodes to which the resource can move if a failover occurs. Here are some of the main configurable properties of a cluster resource:

  • Load Scripts: Specify the commands needed to activate a cluster resource, such as the commands to load an application or to mount a volume on a server. The load scripts follow the same format as a Novell Command File (identified by the .NCF extension) that executes on a NetWare® server console. Load scripts are required for every cluster resource (see the example following this list).

  • Unload Scripts: Specify how a cluster resource should terminate. Although not all resources or applications require unload scripts, these scripts can ensure that a resource unloads properly before loading onto another node during failback or manual migration.

  • Failover and Failback Modes: Failover and failback of cluster resources can be configured to occur manually or automatically. When a node in the cluster fails, all cluster resources whose failover mode is set to automatic migrate to surviving nodes in the cluster. Setting a cluster resource's failback mode to automatic ensures that the resource moves back to its preferred node once the preferred node rejoins the cluster. For both failover and failback, a manual setting requires human intervention, which allows a network administrator to control exactly how resources are migrated.
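
For illustration only, the load and unload scripts for a hypothetical application resource might look similar to the following sketch. The volume name APPVOL, the secondary IP address, and the module name APPSRV are placeholders, not values that NCS creates for you:

 # Hypothetical load script: make the shared volume and address
 # available, then start the application on the node loading the resource
 mount APPVOL
 add secondary ipaddress 10.1.1.175
 load appsrv

 # Hypothetical unload script: stop the application, then release the
 # address and volume so another node can take them over
 unload appsrv
 del secondary ipaddress 10.1.1.175
 dismount APPVOL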

2.1.4 Cluster Template

To eliminate the tedious process of repeatedly configuring identical properties, NCS allows you to create resource templates and use them when creating similar cluster resources.

2.1.5 Volume Resource

In a shared disk environment under Novell Cluster Services, NSS volumes can be either cluster-enabled volumes or regular volumes. Both volume types can fail over to another server node when a node failure occurs, but the method for doing so differs between the two volume types:

Cluster-Enabled Shared Disk Volumes

To enable users to retain drive mappings to a volume in the event of node failure, that volume must be cluster-enabled to facilitate automatic, transparent NetWare client reconnect. Cluster-enabled volumes must also be used for server applications that rely on a specific server name in their operations, such as applications that reference data locations using the server name as part of the directory path.

When a volume is cluster-enabled, three eDirectory objects are created:

  • Cluster Volume—Represents the cluster-enabled volume.

  • Cluster Volume Resource—Maintains the load and unload scripts for the cluster-enabled volume. It is similar to cluster resource objects that represent server applications.

  • Cluster Virtual Server—Represents the virtual NCP server for the cluster-enabled volume.

Cluster-enabled volumes have properties similar to other cluster resources. Network administrators can specify the nodes eligible to mount a cluster-enabled volume, as well as its preferred node. The cluster-enabled volume also must be configured with its own load and unload scripts. Additionally, network administrators must set the failover and failback modes for the volume’s resources.

Cluster-enabled volumes are not permanently bound to (and have no hard ties to) any one physical server. Instead, they are bound to a virtual NCP server, which acts as a proxy for the hosting server. When a node failure occurs, the cluster-enabled volume automatically remounts on a new node in the cluster, and the virtual NCP server then acts as proxy for that new server node.

Because users and applications see the relationship only between the cluster-enabled volume and the virtual NCP server (and not a physical server), they are shielded from detecting node failures. When node failures occur, cluster-enabled volumes can provide uninterrupted service to users and applications.

Each cluster-enabled volume has a dedicated virtual NCP server and secondary IP address. Rather than connecting to a physical NCP server through an IP address, clients connect to the virtual NCP server through its secondary IP address. This allows users to maintain their connection to the virtual NCP server and its cluster-enabled volume as it moves from server node to server node in response to server failures.
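
For illustration, the load and unload scripts for a cluster-enabled volume might look similar to the following sketch, which binds and unbinds the secondary IP address and the virtual NCP server name around the volume mount. The names VOL1 and CLUSTER2_VOL1_SERVER match the examples used later in this section; the address 10.1.1.180 is a placeholder, and the exact commands can vary between releases:

 # Load script sketch: mount the volume, then bind the virtual NCP
 # server name and the secondary IP address on this node
 mount VOL1
 nudp add CLUSTER2_VOL1_SERVER 10.1.1.180
 add secondary ipaddress 10.1.1.180

 # Unload script sketch: release the address and name, then dismount
 del secondary ipaddress 10.1.1.180
 nudp del CLUSTER2_VOL1_SERVER 10.1.1.180
 dismount VOL1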

With cluster-enabled volumes, users can map drives using the name of the cluster-enabled volume and its virtual NCP server. When volumes fail over to different nodes in the cluster, these drive mappings remain valid, which is critical for server applications that require clients to maintain a constant connection to a specified network volume and server. This ability also provides location transparency for client connections to the cluster and its shared volumes.

Naming Conventions

Because a cluster-enabled volume does not have any lasting ties to any specific physical server node in the cluster, the volume naming references are also not associated with a physical server. Instead, the volume references the name of the cluster where it resides.

The name of a cluster-enabled volume is made up of the cluster name, an underscore, and the volume's physical name. For example, a volume with a physical name of VOL1 in a cluster named CLUSTER2 becomes CLUSTER2_VOL1. Examples of other cluster-enabled volume names are CLUSTER2_VOL2 and CLUSTER2_VOL3.

To ensure that no naming conflicts arise, each volume in the cluster must have a unique physical volume name. For example, if physical Server-A hosts a cluster-enabled volume with the physical name of VOL1, no other volume can use that same physical name (even if it is initially hosted on a different server). This type of naming convention links the cluster-enabled volume and its name to the cluster, rather than a specific network server.

Each cluster-enabled volume is tied to a virtual NCP server whose name is derived by adding the constant name SERVER to the end of the name of the cluster-enabled volume. For example, the name of the virtual NCP server for the cluster-enabled volume called CLUSTER2_VOL1 is CLUSTER2_VOL1_SERVER. A resulting drive mapping might look similar to the following:

 F:=CLUSTER2_VOL1_SERVER/VOL1:
 

Shared Disk Volumes

If transparent NetWare (NCP) client reconnect is not required and if the applications on a volume server do not need a fixed server name for their operations, cluster-enabled shared disk volumes are not needed. When a node failure occurs with shared disk volumes, the Cluster Resource Manager (cluster module) remounts the volume on another server and executes whatever load script commands are necessary to continue functioning on the new node. The only difference between cluster-enabled and non-cluster-enabled shared disk volumes is that cluster-enabled volumes have a dedicated host virtual NCP server that enables NetWare client reconnect.
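
By way of contrast, a load script for a shared disk volume that is not cluster-enabled might contain little more than the command to mount the volume on the new node. VOL2 below is a placeholder name:

 # Hypothetical load script for a non-cluster-enabled shared volume:
 # no secondary IP address or virtual NCP server name is bound, so
 # clients must reconnect to the new host server on their own
 mount VOL2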