30.6 Considering How to Install and Configure the Linux Agents in a Cluster

There are several cluster-specific issues to consider as you plan to install the Linux MTA and POA in your clustered GroupWise system:

30.6.1 Recording Secondary IP Addresses for the Agents

By default, the GroupWise agents listen on all IP addresses that are bound to the server. Consequently, whenever two agents of the same type could start on the same node, it is important that each agent use the secondary IP address associated with its cluster resource group, not the physical IP address of the node. Secondary IP addresses are created by setting up IP address resources, as described in Section 30.1.4, Planning Secondary IP Addresses. The IP address resource moves with each agent when it fails over, so that, in the case of the POA, GroupWise clients do not lose their connections to the POA. When you use the Configure GroupWise for Clustering option, the GroupWise Installation program sets the --ip switch in each agent startup file to the agent's unique secondary IP address.
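For example, if the secondary IP address provided by a domain's IP address resource were 10.1.1.101 (a hypothetical address, shown here with an equally hypothetical --home path), the MTA startup file for that domain would include lines such as the following:

    --home /mnt/gwsystem/dom1
    --ip 10.1.1.101

Substitute the mount point of the domain's cluster resource group and the secondary IP address that you record on the GroupWise Clustering Worksheet.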

If you are going to set up a GroupWise name server to help GroupWise clients locate their post offices, make sure that the default POA port number of 1677 is used somewhere in the cluster. For more information, see Simplifying Client/Server Access with a GroupWise Name Server in Post Office Agent in the GroupWise 7 Administration Guide.
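For example, a hypothetical excerpt from the startup file of the POA that GroupWise clients contact first would set the client/server port explicitly (the address shown is a placeholder):

    --ip 10.1.1.102
    --port 1677

Because 1677 is the default client/server port, the --port switch simply makes the setting explicit for the one POA in the cluster that must retain it.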

GROUPWISE CLUSTERING WORKSHEET

Under Item 3: MTA Network Information, select an IP address resource from the Heartbeat Clustering Worksheet (item 5) and record the secondary IP address associated with the IP address resource on the GroupWise Clustering Worksheet.

Under Item 5: POA Network Information, select an IP address resource from the Heartbeat Clustering Worksheet (item 5) and record the secondary IP address associated with the IP address resource on the GroupWise Clustering Worksheet.

30.6.2 Determining Appropriate Heartbeat Constraints for GroupWise Cluster Resource Groups

By default, a cluster resource group has all nodes in the cluster available for failover. Only one node at a time can have a particular cluster resource group mounted and active. If a cluster resource group’s initial node fails, the resource group fails over to another node in the cluster.

You should customize the Heartbeat constraints for each GroupWise cluster resource group based on the fan-out-failover principle: when a node fails, its cluster resource groups should not all fail over together to the same node. Instead, the resource groups should be distributed across multiple nodes in the cluster. This prevents any one node from shouldering the entire processing load typically carried by another node. In addition, some cluster resource groups should never be mounted on the same node, even in a failover situation. For example, a post office and POA that service a large number of very active GroupWise client users should never fail over to a node where another very large post office and heavily loaded POA reside. If they did, users on both post offices would notice a decrease in the responsiveness of the GroupWise client.

When you create a Heartbeat constraint, you give it a unique ID, associate it with one or more cluster resource groups, and provide information specific to the constraint type. Heartbeat offers three types of constraints that control the failover behavior of cluster resource groups in the cluster. A cluster resource group can use multiple types of constraints to ensure the desired failover behavior.

Place Constraint

The place constraint controls the nodes on which a cluster resource group can run and to which it can fail over. A place constraint includes a score. A score of 100 (the default) means that the resource group to which the constraint is applied should run on the node assigned to the place constraint whenever possible. A score of 50 means that the resource group can fail over to the assigned node when the initial node fails. A score of 0 means that the resource group never runs on that node, perhaps because that node is used for some other program. A score of INFINITY means that the resource group can run only on that node.
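As a sketch of how these scores are expressed, the following hypothetical Heartbeat 2.x CIB fragment (the resource group name gwdom-group and the node names node1 and node2 are placeholders) defines a place constraint for a domain resource group. In the CIB, a place constraint is an rsc_location element containing one rule per node:

    <rsc_location id="place-gwdom" rsc="gwdom-group">
      <!-- Run the domain resource group on node1 whenever possible -->
      <rule id="place-gwdom-rule1" score="100">
        <expression id="place-gwdom-expr1" attribute="#uname"
                    operation="eq" value="node1"/>
      </rule>
      <!-- Allow failover to node2 when node1 is unavailable -->
      <rule id="place-gwdom-rule2" score="50">
        <expression id="place-gwdom-expr2" attribute="#uname"
                    operation="eq" value="node2"/>
      </rule>
    </rsc_location>

You can load such a fragment with the cibadmin command-line tool or create the equivalent constraint in the Heartbeat GUI.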

HEARTBEAT CLUSTERING WORKSHEET

Under Item 9: Cluster Resource Group Constraints for Domain, list the nodes that might need to mount the domain. The MTA might need to run on any node that the domain constraints allow. Therefore, you will install the agent software on all of the nodes where the domain could possibly fail over. List the score for each node.

If you are planning the post office in a different cluster resource group from where the domain is located, under Item 10: Cluster Resource Group Constraints for Post Office, list the nodes that might need to mount the post office. The POA might need to run on any node that the post office constraints allow. Therefore, you will install the agent software on all of the nodes where the post office could possibly fail over. List the score for each node.

Order Constraint

The order constraint lets you specify that one cluster resource group must start either before or after another resource group. For example, if you have the domain and post office in separate resource groups and you want the domain to start before the post office, you would use an order constraint.
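As a sketch, the following hypothetical Heartbeat 2.x CIB fragment (the group names are placeholders) expresses that ordering, assuming the rsc_order syntax in which from names the dependent resource group and to names the resource group it waits for:

    <!-- Start the post office resource group only after the domain resource group -->
    <rsc_order id="po-after-dom" from="gwpo-group" type="after" to="gwdom-group"/>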

HEARTBEAT CLUSTERING WORKSHEET

Under Item 9: Cluster Resource Group Constraints for Domain, list any order constraints regarding the domain.

If you are planning the post office in a different cluster resource group from where the domain is located, under Item 10: Cluster Resource Group Constraints for Post Office, list any order constraints regarding the post office.

Colocation Constraint

The colocation constraint lets you designate resource groups that must run together on the same node. For example, you might want the MTA and the Internet Agent for a domain to always run together whenever the domain fails over. Or you might want the MTA and the WebAccess Agent to always run together on the same node. To guarantee that two resource groups always run together, give them a score of INFINITY in the colocation constraint.
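As a sketch, a hypothetical Heartbeat 2.x CIB fragment (the group names are placeholders) that keeps the Internet Agent's resource group with the domain's resource group might look like the following:

    <!-- Always run the Internet Agent resource group on the node where the domain runs -->
    <rsc_colocation id="gwia-with-dom" from="gwia-group" to="gwdom-group" score="INFINITY"/>

With a score of INFINITY, Heartbeat never runs gwia-group on a node where gwdom-group is not running, so the two resource groups always fail over together.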

HEARTBEAT CLUSTERING WORKSHEET

Under Item 9: Cluster Resource Group Constraints for Domain, list any resource groups that you want to run along with the domain.

If you are planning the post office in a different cluster resource group from where the domain is located, under Item 10: Cluster Resource Group Constraints for Post Office, list any resource groups that you want to run along with the post office.

30.6.3 Planning the Linux Agent Installation

Aside from the cluster-specific issues discussed in the preceding sections, the considerations involved in planning to install the GroupWise Linux agents are the same in a clustering environment as for any other environment. Review Planning the GroupWise Agents, then print and fill out the GroupWise Agent Installation Worksheet in Installing GroupWise Agents in the GroupWise 7 Installation Guide for each domain and post office for which you will install the Linux MTA or POA.

IMPORTANT: Do not install the Linux agent software until you are instructed to do so in Section 32.0, Setting Up a Domain and a Post Office in a Heartbeat Cluster.

Skip to Section 32.0, Setting Up a Domain and a Post Office in a Heartbeat Cluster.