1.6 Terminology

Before you start working with a Novell Cluster Services cluster, you should be familiar with the terms described in this section:

1.6.1 The Cluster

A cluster is a group of 2 to 32 servers configured with Novell Cluster Services so that data storage locations and applications can transfer from one server to another to provide high availability to users.

Cluster IP Address

The unique static IP address for the cluster.

Server IP Address

Each server in the cluster has its own unique static IP address.

Master Node

The first server that comes up in a cluster is assigned the cluster IP address and becomes the master node. The master node monitors the health of the cluster nodes. It also synchronizes updates about the cluster to eDirectory. If the master node fails, Cluster Services migrates the cluster IP address to another server in the cluster, and that server becomes the master node. For information about how a new master is determined, see Section C.0, Electing a Master Node.

Slave Node

Any member node in the cluster that is not currently acting as the master node.

Split-Brain Detector (SBD)

A small shared storage device where data is stored to help detect and prevent a split-brain situation from occurring in the cluster. If you use shared storage in the cluster, you must create an SBD for the cluster.

A split brain is a situation where the links between the nodes fail, but the nodes are still running. Without an SBD, each node thinks that the other nodes are dead, and that it should take over the resources in the cluster. Each node independently attempts to load the applications and access the data, because it does not know the other nodes are doing the same thing. Data corruption can occur. An SBD’s job is to detect the split-brain situation, and allow only one node to take over the cluster operations.

Shared Storage

Disks or LUNs attached to nodes in the cluster via SCSI, Fibre Channel, or iSCSI fabric. Only devices that are marked as shareable for clustering can be cluster-enabled.

1.6.2 Cluster Resources

A cluster resource is a single, logical unit of related storage, application, or service elements that can be failed over together between nodes in the cluster. The resource can be brought online or taken offline on one node at a time.

Resource IP Address

Each cluster resource in the cluster has its own unique static IP address.

NCS Virtual Server

An abstraction of a cluster resource that provides location-independent access for users to the service or data. The user is not aware of which node is actually hosting the resource. Each cluster resource has a virtual server identity based on its resource IP address. A name for the virtual server can be bound to the resource IP address.
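In a resource load script, the virtual server name is bound to the resource IP address with an ncpcon bind command. A minimal sketch (the server name and IP address shown are hypothetical placeholders):

```shell
# Bind the virtual NCP server name to the resource IP address.
# Names and addresses below are illustrative placeholders.
exit_on_error ncpcon bind --ncpservername=CLUS1-POOL1-SERVER --ipaddress=10.10.10.44
```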

Resource Templates

A resource template contains the default load, unload, and monitor scripts and default settings for service or file system cluster resources. Resource templates are available for the following OES services and file systems:

  • Novell Archive and Version Services (removed in OES 11 SP2)

  • Novell DHCP

  • Novell DNS

  • Generic file system (for LVM-based Linux POSIX volumes)

  • NSS file system (for NSS pool resources)

  • Generic IP service

  • Novell iFolder 3.x

  • Novell iPrint

  • MySQL

  • Novell Samba

Personalized templates can also be created. See Section 10.3, Using Cluster Resource Templates.

Service Cluster Resource

An application or OES service that has been cluster-enabled. The application or service is installed on all nodes in the cluster where the resource can be failed over. The cluster resource includes scripts for loading, unloading, and monitoring. The resource can also contain the configuration information for the application or service.

Pool Cluster Resource

A cluster-enabled Novell Storage Services pool. Typically, the shared pool contains only one NSS volume. The file system must be installed on all nodes in the cluster where the resource can be failed over. The NSS volume is bound to an NCS Virtual Server object (NCS:NCP Server) and to the resource IP address. This provides location-independent access to data on the volume for NCP, Novell AFP, and Novell CIFS clients.

Linux POSIX Volume Cluster Resource

A cluster-enabled Linux POSIX volume. The volume is bound to the resource IP address. This provides location-independent access to data on the volume via native Linux protocols such as Samba or FTP. You can optionally create an NCS Virtual Server object (NCS:NCP Server) for the resource as described in Section 13.5, Creating a Virtual Server Object for an LVM Volume Group Cluster Resource.

NCP Volume Cluster Resource

An NCP volume (or share) that has been created on top of a cluster-enabled Linux POSIX volume. The NCP volume is re-created by a command in the resource load script whenever the resource is brought online. The NCP volume is bound to an NCS Virtual Server object (NCS:NCP Server) and to the resource IP address. This provides location-independent access to the data on the volume for NCP clients in addition to the native Linux protocols such as Samba or FTP. You must create an NCS Virtual Server object (NCS:NCP Server) for the resource as described in Section 13.5, Creating a Virtual Server Object for an LVM Volume Group Cluster Resource.

DST Volume Cluster Resource

A cluster-enabled Novell Dynamic Storage Technology volume made up of two shared NSS volumes. Both shared volumes are managed in the same cluster resource. The primary volume is bound to an NCS Virtual Server object (NCS:NCP Server) and to the resource IP address. This provides location-independent access to data on the DST volume for NCP and Novell CIFS clients. (Novell AFP does not support DST volumes.)

If Novell Samba is used instead of Novell CIFS, the cluster resource also manages FUSE and ShadowFS. You point the Samba configuration file to /media/shadowfs/dst_primary_volume_name to give users a merged view of the data.
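For example, an smb.conf share definition for the merged view might look like the following sketch (the share name and volume name are hypothetical):

```ini
[vol1]
   comment = Merged view of the DST shadow volume pair
   path = /media/shadowfs/VOL1
   browseable = yes
   read only = no
```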

Cluster Resource Scripts

Each cluster resource has a set of scripts that are run to load, unload, and monitor a cluster resource. The scripts can be personalized by using the Clusters plug-in for iManager.
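As an illustration, a load script for an NSS pool resource typically resembles the following sketch. The pool, volume, and server names and the IP address are placeholders; actual scripts are generated from the resource template:

```shell
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs   # NCS helper functions (exit_on_error, etc.)

# Load script: activate the pool, mount the volume, bring up the resource identity.
exit_on_error nss /poolact=POOL1
exit_on_error ncpcon mount VOL1=254
exit_on_error add_secondary_ipaddress 10.10.10.44
exit_on_error ncpcon bind --ncpservername=CLUS1-POOL1-SERVER --ipaddress=10.10.10.44
exit 0
```

The matching unload script reverses these steps, using ignore_error so that cleanup continues even if a step fails:

```shell
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs

# Unload script: tear down the identity, then deactivate the pool.
ignore_error ncpcon unbind --ncpservername=CLUS1-POOL1-SERVER --ipaddress=10.10.10.44
ignore_error del_secondary_ipaddress 10.10.10.44
ignore_error nss /pooldeact=POOL1
exit 0
```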

1.6.3 Failover Planning

Heartbeat

A signal sent between a slave node and the master node to indicate that the slave node is alive. This helps to detect a node failure.

Quorum

The administrator-specified number of nodes that must be up and running in the cluster before cluster resources can begin loading.

Preferred Nodes

One or more administrator-specified nodes in the cluster that can be used for a resource. The order of nodes in the Preferred Nodes list indicates the failover preference. Any applications that are required for a cluster resource must be installed and configured on the assigned nodes.

Resource Priority

The administrator-specified priority order in which resources should be loaded on a node.

Resource Mutual Exclusion Groups

Administrator-specified groups of resources that should not be allowed to run on the same node at the same time. This Clusters plug-in feature is available only for clusters running OES 2 SP3 and later.

Failover

The process of automatically moving cluster resources from a failed node to an assigned functional node so that availability to users is minimally interrupted. Each resource can be failed over to the same or different nodes.

Fan-Out Failover

A configuration of the preferred nodes assigned for cluster resources so that each resource running on a node can fail over to a different secondary node.

Failback

The process of returning cluster resources to their preferred primary node after the situation that caused the failover has been resolved.

Cluster Migrate

Manually triggering a move of a cluster resource from one node to another node, for example, to perform maintenance on the old node or to temporarily lighten the load on it.
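At the command line, a migration is typically triggered with the cluster utility on a cluster node; for example (the resource and node names are placeholders):

```shell
# Move the resource POOL1_SERVER to the node named node2
cluster migrate POOL1_SERVER node2
```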

Leave a Cluster

A node leaves the cluster temporarily for maintenance. The resources on the node are cluster migrated to other nodes in their preferred nodes list.

Join a Cluster

A node that has previously left the cluster rejoins the cluster.
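Leaving and rejoining are typically done from the node's command line with the cluster utility:

```shell
cluster leave   # remove this node from the cluster; its resources migrate to other nodes
cluster join    # rejoin this node to the cluster after maintenance
```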