Install Novell Cluster Services on OES Linux by following the instructions provided in the Novell Cluster Services Administration Guide for Linux for your version of OES Linux.
The cluster installation process includes:
Meeting hardware and software requirements
Setting up a shared disk system
Creating a new Cluster object to represent the cluster in Novell eDirectory
Adding nodes to the cluster
Installing the Novell Cluster Services software on all nodes in the cluster
Creating shared partitions, shared NSS volumes, or shared pools as needed for your cluster, as described in the Novell Cluster Services Administration Guide for Linux for your version of OES Linux.
NOTE: For simplicity in this section, the term “shared partition” is intended to include any of these shared storage alternatives.
Cluster-enabling any of these shared storage alternatives, as described in the Novell Cluster Services Administration Guide for Linux for your version of OES Linux.
IMPORTANT: Cluster-enabling is required for GroupWise.
Mounting the shared partitions where you want to set up GroupWise domains and post offices.
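After mounting a shared partition, you can confirm the device name, file system type, and mount point that you will later record on the System Clustering Worksheet. The following is only an illustrative sketch, not part of the product procedure; `/` stands in for an actual shared partition mount point such as a hypothetical /mnt/gwdom:

```shell
# Placeholder mount point used for illustration; substitute the
# directory where a shared partition is mounted (e.g. /mnt/gwdom).
MOUNT_POINT=/

# findmnt (util-linux) prints the device name, file system type,
# and mount point -- the details recorded on the worksheet.
findmnt -n -o SOURCE,FSTYPE,TARGET "$MOUNT_POINT"
```

The same details can also be read from the output of the mount command or from /etc/mtab.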
As you install Novell Cluster Services on Linux, record key information about the cluster on the System Clustering Worksheet:
SYSTEM CLUSTERING WORKSHEET
Under Item 1: eDirectory Tree for Cluster, record the name of the eDirectory tree where the new Cluster object has been created.
Under Item 2: Cluster Name, record the name of the Cluster object that you created for your GroupWise system.
Under Item 3: Cluster Context, record the full context of the Cluster object.
Under Item 4: Nodes in Cluster, list the nodes that you have added to the cluster.
Under Item 5: Shared Partitions, list the shared partitions that are available for use in your GroupWise system, along with their volume names and volume IDs where applicable. Include the file system information about each partition, including the file system type (nss, reiserfs, ext3, and so on), device name (sda2, hda1, and so on), and mount point directory (/media/nss, /mnt, /mail, and so on). You need this information when you set up the load and unload scripts for the GroupWise cluster resources.
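As a point of reference, Novell Cluster Services load scripts for a cluster resource generally follow the shape below. This is only an illustrative sketch with placeholder names (GWPOOL, GWVOL, and the IP address are hypothetical); the actual scripts for your resources are generated when you cluster-enable the shared storage and are edited through the cluster management interface, as described in the Novell Cluster Services Administration Guide for Linux.

```
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs

# Load script sketch (all names are placeholders):
# activate the shared pool, mount its volume, and bring up
# the secondary IP address associated with the resource.
exit_on_error nss /poolact=GWPOOL
exit_on_error ncpcon mount GWVOL=254
exit_on_error add_secondary_ipaddress 10.10.10.41
exit 0
```

The corresponding unload script typically reverses these steps (dismount the volume, remove the secondary IP address, deactivate the pool), using ignore_error so that unloading continues even if an individual step fails.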
The number of nodes and shared partitions that are available in the cluster strongly influences where you can place GroupWise domains and post offices. You have several alternatives:
Your whole GroupWise system can run in a single cluster.
Parts of your GroupWise system can run in one cluster while other parts of it run in one or more other clusters.
Parts of your GroupWise system can run in a cluster while other parts run outside of the cluster, on non-clustered servers.
If you do not have the system resources to run your entire GroupWise system in a clustering environment, you must decide which parts have the most urgent need for the high availability that clustering provides. Here are some suggestions:
Post offices and their POAs must be available in order for users to access their GroupWise mailboxes. Therefore, post offices and their POAs are excellent candidates for the high availability provided by clustering.
Domains and their MTAs are less noticeable to users when they are unavailable (unless users in different post offices happen to be actively engaged in an email discussion when the MTA goes down). On the other hand, domains and their MTAs are critical to GroupWise administrators, although administrators might be more tolerant of a down server than end users are. Critical domains in your system are the primary domain and, if you have one, a hub or routing domain. These domains should be in the cluster, even if other domains are not.
The GWIA might or might not require high availability in your GroupWise system, depending on the importance of immediate messaging across the Internet and the use of POP3 or IMAP4 clients by GroupWise users.
The Monitor Agent is a vital partner of the GroupWise High Availability service, described in the GroupWise 2012 Installation Guide. The GroupWise High Availability service automatically restarts agents that go down, as long as the server itself stays up. If you want this protection for your GroupWise agents, you can run the Monitor Agent in your cluster.
There is no right or wrong way to implement GroupWise in a clustering environment. It all depends on the specific needs of your particular GroupWise system and its users.