13.2 Security Configuration

This section provides a summary of security-related configuration settings for Business Continuity Clustering 1.2.

13.2.1 BCC Configuration Settings

Table 13-2 lists the BCC configuration settings that are security-related or impact the security of BCC:

Table 13-2 BCC Security Configuration Settings

Inter-cluster communications scheme
  Possible values: HTTP (port 5988) or HTTPS (port 5989)
  Default value: HTTPS
  Recommended value for best security: HTTPS

Identity Manager communications
  Possible values: Secure or Non-secure. This is determined by the certificate in the Identity Manager driver setup. If you create the driver with an SSL certificate, communications are secure; if not, they are sent in the clear.
  Default value: Secure
  Recommended value for best security: Use SSL certificates. They are mandatory for User Synchronization drivers that synchronize between trees.

BCC Administrator user
  Possible values: Any LUM-enabled eDirectory User. This is the user you specify when you set the BCC credentials.
  Recommended value for best security: A unique BCC Administrator user (not the Admin user and not the Cluster Admin user).
  Note: The BCC Administrator user is not automatically assigned the rights necessary to manage all aspects of each peer cluster. When managing individual clusters, you must log in as the Cluster Administrator user. If you want the BCC Administrator user to have all rights, you can manually assign it the Cluster Administrator rights for each of the peer clusters.

BCC Administrator group
  Possible values: Any LUM-enabled eDirectory group
  Default value: bccgroup
  Recommended value for best security: A unique group used for BCC administration. See Section 4.3, Configuring a BCC Administrator User and Group.

Peer cluster CIMOM URL (same as the inter-cluster communications scheme)
  Possible values: http://cluster_ip_address, or cluster_ip_address with https:// assumed
  Default value: cluster_ip_address (https:// is assumed)
  Recommended value for best security: The default value

13.2.2 Changing the NCS:BCC Settings Attributes in the BCC XML Configuration

WARNING: You should not change the configuration settings for the NCS:BCC Settings attribute unless instructed to do so by Novell Support. Doing so can have adverse effects on your cluster nodes and BCC.

The following XML for the NCS:BCC Settings attribute is saved on the local Cluster object in eDirectory. The BCC must be restarted for changes to these settings to take effect. These are advanced settings that are intentionally not exposed in the BCC plug-in for iManager.

<bccSettings>
        <adminGroupName>bccgroup</adminGroupName>
        <authorizationCacheTTL>300</authorizationCacheTTL>
        <cimConnectTimeout>15</cimConnectTimeout>
        <cimReceiveTimeout>30</cimReceiveTimeout>
        <cimSendTimeout>30</cimSendTimeout>
        <idlePriorityThreshold>3</idlePriorityThreshold>
        <initialNormalThreads>3</initialNormalThreads>
        <initialPriorityThreads>2</initialPriorityThreads>
        <ipcResponseTimeout>15</ipcResponseTimeout>
        <maximumPriorityThreads>20</maximumPriorityThreads>
        <minimumPriorityThreads>2</minimumPriorityThreads>
        <resourceOfflineTimeout>300</resourceOfflineTimeout>
        <resourceOnlineTimeout>300</resourceOnlineTimeout>
        <scanForNewDevicesDelay>5</scanForNewDevicesDelay>
</bccSettings>

On Linux, the above XML is written to the /etc/opt/novell/bcc/bccsettings.xml file.

IMPORTANT: This file might be overwritten by Business Continuity Clustering at any time. Therefore, any changes made directly to this file on Linux are ignored and lost. All changes should be made in eDirectory.
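
For example, to give resources more time to come online during a BCC migration, you could raise the <resourceOnlineTimeout> value. The following is only an illustrative sketch; the 600-second value is an arbitrary example, and the change must be made to the NCS:BCC Settings attribute on the Cluster object in eDirectory (not to the local bccsettings.xml file), followed by a restart of BCC on the cluster nodes:

<bccSettings>
        <!-- other settings unchanged from the defaults shown above -->
        <resourceOnlineTimeout>600</resourceOnlineTimeout>
</bccSettings>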

Table 13-3 provides additional information on each setting:

Table 13-3 BCC XML Settings

<adminGroupName>
  Description: The name of the LUM-enabled group that BCC uses on Linux.
  Default value: bccgroup

<authorizationCacheTTL>
  Description: The number of seconds that authorization rights are cached in the BCC OpenWBEM provider.
  Default value: 300 seconds

<cimConnectTimeout>
  Description: The BCC CIM client connect timeout, in seconds.
  Default value: 15 seconds

<cimReceiveTimeout>
  Description: The BCC CIM client receive timeout, in seconds.
  Default value: 30 seconds

<cimSendTimeout>
  Description: The BCC CIM client send timeout, in seconds.
  Default value: 30 seconds

<idlePriorityThreshold>
  Description: The number of idle high-priority threads allowed before BCC starts killing high-priority threads.
  Default value: 3

<initialNormalThreads>
  Description: The number of normal threads created by BCC.
  Default value: 3

<initialPriorityThreads>
  Description: The number of high-priority threads created by BCC at startup.
  Default value: 2

<ipcResponseTimeout>
  Description: The number of seconds BCC waits for an IPC response.
  Default value: 15 seconds

<maximumPriorityThreads>
  Description: The maximum number of high-priority threads BCC creates.
  Default value: 20

<minimumPriorityThreads>
  Description: The minimum number of high-priority threads BCC keeps after killing idle high-priority threads.
  Default value: 2

<resourceOfflineTimeout>
  Description: The number of seconds BCC waits for a resource to go offline during a BCC migrate.
  Default value: 300 seconds

<resourceOnlineTimeout>
  Description: The number of seconds BCC waits for a resource to go online during a BCC migrate.
  Default value: 300 seconds

<scanForNewDevicesDelay>
  Description: The number of seconds BCC sleeps after scanning for new devices during a BCC migration of a resource.
  Default value: 5 seconds

13.2.3 Disabling SSL for Inter-Cluster Communication

Disabling SSL for inter-cluster communication should be done only for debugging purposes. Do not disable it in a production environment or leave it disabled for an extended period of time.

To turn off SSL for inter-cluster communication, or to specify a different communication port, you need to modify the Novell Cluster Services Cluster object that is stored in eDirectory by using an eDirectory management tool such as iManager or ConsoleOne. See the Novell iManager 2.7x Administration Guide for information on using iManager.

Disabling SSL communication to a specific peer cluster requires changing the BCC management address for that peer cluster. The address is contained in the NCS:BCC Peers attribute that is stored on the NCS Cluster object.

For example, a default NCS:BCC Peers attribute value might appear similar to the following, where https:// is assumed because it is never specified explicitly:

<peer>
  <cluster>chicago_cluster</cluster>
  <tree>DIGITALAIRLINES_TREE</tree>
  <address>10.1.1.10</address>
</peer>

To disable SSL for inter-cluster communication, you would change the <address> attribute to specify http:// with the IP address, as shown in the following example:

<peer>
  <cluster>chicago_cluster</cluster>
  <tree>DIGITALAIRLINES_TREE</tree>
  <address>http://10.1.1.10</address>
</peer>

The BCC management address of chicago_cluster now specifies non-secure HTTP communication.

The BCC management port can also be changed by modifying the NCS:BCC Peers attribute values. The default ports for secure and non-secure inter-cluster communication are 5989 and 5988 respectively.

For example, if you want to change the secure port on which OpenWBEM listens from port 5989 to port 1234, you would change the <address> attribute value in the above examples to:

<peer>
  <cluster>chicago_cluster</cluster>
  <tree>DIGITALAIRLINES_TREE</tree>
  <address>10.1.1.10:1234</address>
</peer>

The attribute now specifies that inter-cluster communication uses HTTPS over port number 1234.

The NCS:BCC Peers attribute has a value for each peer cluster in the BCC. Attribute values are synchronized among peer clusters by the BCC-specific Identity Manager driver, so a change to an attribute value on one cluster causes that attribute value to be synchronized to each peer cluster in the BCC.

The changes do not take effect until you either reboot each cluster node or restart the Business Continuity Clustering software on each cluster node.
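
For example, on a Linux cluster node, restarting the BCC software might look like the following. The init script name is assumed from the standard OES Linux packaging of BCC; verify the name on your systems before relying on it:

# Restart the Business Continuity Clustering daemon on this node so that
# the modified NCS:BCC Peers values are re-read
# (assumed init script name; verify on your installation)
rcnovell-bcc restart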

Table 13-4 provides an example of possible combinations of scheme and port specifier for the <address> tag for values of the NCS:BCC Peers attribute:

Table 13-4 Example of Scheme and Port Specifier Values for the NCS:BCC Peers Attribute

Value                      Protocol Used    Port Used

10.1.1.10                  HTTPS            5989
10.1.1.10:1234             HTTPS            1234
http://10.1.1.10           HTTP             5988
http://10.1.1.10:1234      HTTP             1234

13.2.4 Restricting the Network Address for Administration

You can restrict the network address to the loopback address (127.0.0.1) to increase the security for the BCC Administrator user (bccadmin).

BCC makes a secure connection to OpenWBEM over port 5989 on both the local and remote servers. This port can be changed.

The cluster connection command reports the status of the OpenWBEM connection as of the last time a status update was performed. Typically, a status update occurs every 30 seconds on the cluster’s master node and every hour on its slave nodes. Running the following command forces a status update:

cluster refresh -p

OpenWBEM then makes an NCP connection to check the rights of the user who authenticated to OpenWBEM. The NCP connection itself goes to the loopback address.
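
To verify which addresses and ports the OpenWBEM CIMOM (owcimomd) is actually listening on, you can list the listening sockets on the node. This is only a verification sketch and assumes the default secure port 5989; adjust the port if you have changed it:

# List listening TCP sockets and filter for the OpenWBEM secure port (5989 by default).
# An entry bound to 127.0.0.1 indicates the CIMOM accepts connections
# only on the loopback address.
netstat -tlnp | grep 5989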