12.0 Introduction to GroupWise 8 and Novell Cluster Services on Linux

Before implementing GroupWise 8 with Novell Cluster Services on Linux, make sure you have a solid understanding of Novell Cluster Services by reviewing the OES Linux clustering documentation. As you review this information, you will find that clustering employs very specialized terminology. The following brief glossary provides basic definitions of clustering terms and relates them to your GroupWise system:

cluster: A group of 2 to 32 servers configured with Novell Cluster Services so that data storage locations and applications can transfer from one server to another without interrupting their availability to users.

NOTE: Although a cluster can include both Linux and NetWare servers, GroupWise components on Linux servers can fail over only to other Linux servers.

node: A clustered server; in other words, a single server that is part of a cluster.

shared disk system: The hardware housing the physical disks that are shared among the cluster nodes.

shared partition: A disk partition in a shared disk system that can be accessed from any cluster node that needs the data stored on it. On Linux, Novell Cluster Services supports shared partitions (Linux traditional file system disk partitions), shared NSS volumes (Novell Storage Services volumes), and shared pools (virtual servers).

NOTE: For simplicity, this section uses the term “shared partition” to represent any of these three storage configuration alternatives. For more information, see the OES 11 Novell Cluster Services 2 for Linux Administration Guide.

cluster-enabled shared partition: A shared partition for which a Cluster Resource object has been created in Novell eDirectory. The properties of the Cluster Resource object provide load and unload scripts for applications and services installed on the partition, failover/failback/migration policies for the applications and services, and the failover list for the partition.
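As an illustration, a load script for a cluster-enabled partition holding a post office might look like the following sketch. The pool, volume, resource, and post office names, the mount point ID, and the IP address are all hypothetical placeholders; the `ncsfuncs` helper functions (such as `exit_on_error`) are supplied by Novell Cluster Services on OES Linux:

```shell
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs

# Activate the shared NSS pool and mount the volume that holds the
# post office (GWPOOL, GWVOL, and the mount point ID are examples only).
exit_on_error nss /poolact=GWPOOL
exit_on_error ncpcon mount GWVOL=254

# Bind the secondary IP address that follows this cluster resource
# from node to node.
exit_on_error add_secondary_ipaddress 10.10.10.101

# Start the POA for the post office stored on the shared volume
# (po.dom is a placeholder post office.domain name).
exit_on_error /etc/init.d/grpwise start po.dom

exit 0
```

The corresponding unload script would perform the same steps in reverse order: stop the POA, release the secondary IP address, dismount the volume, and deactivate the pool.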

IMPORTANT: Cluster-enabling is required for GroupWise. For more information, see the OES 11 Novell Cluster Services 2 for Linux Administration Guide.

GroupWise partition: As used in this section, a cluster-enabled shared partition that is used for GroupWise, such as for housing a domain, a post office, or a software distribution directory.

Messenger partition: As used in this section, a cluster-enabled shared partition that is used for Messenger, such as for storing conversation files, log files, temporary files, queue directories, etc.

cluster resource: A shared partition, secondary IP address, application, service, Web server, etc., that can function successfully anywhere in the cluster. Cluster resources include the GroupWise agents and the Messenger agents.

failover: The process of moving cluster resources from a failed node to a functional node so that availability to users is uninterrupted. For example, if the node where the POA is running goes down, the POA and its post office fail over to a secondary node so that users can continue to use GroupWise. When setting up cluster resources, you must consider what components need to fail over together in order to continue functioning.

fan-out failover: The configuration where cluster resources from a single failed node fail over to several different nodes in order to distribute the load from the failed node across multiple nodes. For example, if a node runs a cluster resource consisting of a domain and its MTA, another cluster resource consisting of a post office and its POA, and a third cluster resource for the Internet Agent, each cluster resource could be configured to fail over separately to different secondary nodes.

failback: The process of returning cluster resources to their preferred node after the situation causing the failover has been resolved. For example, if a POA and its post office fail over to a secondary node, that cluster resource can be configured to fail back to its preferred node when the problem is resolved.

migration: The process of manually moving a cluster resource from its preferred node to a secondary node for the purpose of performing maintenance on the preferred node, temporarily lightening the load on the preferred node, etc.
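For example, a cluster resource can be migrated from the command line of any cluster node using the Novell Cluster Services `cluster` utility. The resource name GW-PO and node name node2 below are hypothetical:

```shell
# Show the state and current location of each cluster resource.
cluster status

# Manually move the GW-PO resource to node2, for example before
# taking its preferred node down for maintenance.
cluster migrate GW-PO node2
```

After maintenance is complete, the resource can be moved back the same way, or allowed to fail back automatically if its failback policy is set to Auto.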