9.1 Configuring eDirectory and Identity Manager for Use with Shared Storage on Linux and UNIX

This section provides steps for configuring eDirectory and Identity Manager for failover in a high availability cluster using shared storage. The information in this section is generalized for shared storage high availability clusters on any Linux or UNIX platform; the information is not specific to a particular cluster manager.

The basic concept is that state data for eDirectory and Identity Manager must be located on the shared storage so that it is available to whichever cluster node is currently running the services. In practice, this means that the eDirectory datastore, typically located in /var/nds/dib, must be relocated to the cluster shared storage; Identity Manager keeps its state data in the same location. Each eDirectory instance on the cluster nodes must be configured to use the datastore on the shared storage, and other eDirectory configuration data must also reside there.

In addition to the eDirectory datastore, the NICI (Novell International Cryptographic Infrastructure) data must also be shared so that server-specific keys are replicated among the cluster nodes. Rather than moving the NICI data to shared storage, it is generally better to copy it to local storage on each cluster node, so that client NICI functionality remains available on a node even when that node is in a secondary state and is not hosting the shared storage.

Sharing eDirectory and NICI data is discussed in the following sections, which are based on these assumptions:

  • The cluster consists of a primary node and a secondary node.

  • The cluster shared storage is mounted at /shared on whichever node is hosting the services.

9.1.1 Installing eDirectory

NOTE: NICI is installed as part of the eDirectory installation process.

  1. Install eDirectory on the primary cluster node.

  2. Configure eDirectory on the primary cluster node. Either create a new tree on the primary cluster node or install the server into an existing tree. For the eDirectory server name, do not use the name of the UNIX server; use a name that is common to the cluster rather than specific to one of the cluster nodes.

  3. Install the same version of eDirectory on the secondary cluster node. Do not configure eDirectory on the secondary cluster node.

    The secondary node does not have a separate tree.

9.1.2 Installing Identity Manager

  1. Install Identity Manager on the primary cluster node using the Metadirectory Server option.

    The installation process installs the Identity Manager files and configures the eDirectory tree for use with Identity Manager.

  2. Install the same version of Identity Manager on the secondary cluster node using the secondary cluster switch, by entering

    dirxml_platform.bin -DCLUSTER_INSTALL="true"
    

    During the install, choose the Metadirectory Server option.

    Using the secondary cluster switch installs the Identity Manager files but does not attempt to perform any additional eDirectory configuration. No configuration is necessary, because the secondary node does not have a separate tree.

9.1.3 Sharing NICI Data

NICI provides cryptographic services used by eDirectory, Identity Manager, and Novell client applications. When used with eDirectory, NICI provides server-specific keys. These server-specific keys must be the same on all cluster nodes where eDirectory runs as a cluster service.

There are two possible ways of sharing the NICI data:

  • Placing the NICI data on the cluster shared storage.

    The disadvantage of this method is that applications that depend on NICI fail on a cluster node whenever that node is not hosting the shared storage.

  • Copying the NICI data from the primary server to the secondary server's local storage.

To copy the NICI data:

  1. Rename /var/novell/nici on the secondary cluster node to something else (such as /var/novell/nici.sav).

  2. Copy the /var/novell/nici directory from the primary cluster node to the secondary cluster node.

    This can be done using scp or by creating a tar file of the /var/novell/nici directory on the primary node, transferring it to the secondary node, and untarring the directory on the secondary node.
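
    For example, the tar approach might look like the following (a sketch; the host name secondary is a placeholder for your secondary node, and running the commands as root preserves file ownership within the NICI directory):

    # On the primary node: archive and transfer the NICI directory
    cd /var/novell
    tar cf /tmp/nici.tar nici
    scp /tmp/nici.tar root@secondary:/tmp/

    # On the secondary node (after renaming the original directory):
    cd /var/novell
    tar xf /tmp/nici.tar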

9.1.4 Sharing eDirectory and Identity Manager Data

By default, eDirectory stores its datastore in /var/nds/dib. Other items of configuration and state are also stored in /var/nds and its subdirectories. The default configuration directory for eDirectory is /etc. The following steps are necessary to configure eDirectory and Identity Manager for use with the shared storage in a high availability cluster. These steps assume that the shared storage is mounted at /shared.

On the Primary Node

  1. Copy the /var/nds directory subtree to /shared/var/nds.

  2. Rename the /var/nds directory (for example, to /var/nds.sav).

    This is not required, but creating a backup at this stage gives you the ability to start over, if necessary, without reinstalling eDirectory.

  3. Create a symbolic link from /var/nds to /shared/var/nds (for example, ln -s /shared/var/nds /var/nds).
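
    Steps 1 through 3 together might look like the following (a sketch; run the commands as root with ndsd stopped):

    # Copy the eDirectory data to the shared storage, preserving
    # permissions and ownership
    mkdir -p /shared/var
    cp -Rp /var/nds /shared/var/nds
    # Keep the original as a backup, then link the old location
    # to the shared copy
    mv /var/nds /var/nds.sav
    ln -s /shared/var/nds /var/nds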

  4. Create the following symbolic links:

    Link from                             Link to
    /shared/var/nds/class16.conf          /etc/class16.conf
    /shared/var/nds/class32.conf          /etc/class32.conf
    /shared/var/nds/help.conf             /etc/help.conf
    /shared/var/nds/ndsimonhealth.conf    /etc/ndsimonhealth.conf
    /shared/var/nds/miscicon.conf         /etc/miscicon.conf
    /shared/var/nds/ndsimon.conf          /etc/ndsimon.conf
    /shared/var/nds/macaddr               /etc/macaddr
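
    Following the from/to convention of Step 3, each link resides on the shared storage and points to the corresponding file in /etc, which exists locally on both nodes. The links might be created in a loop like this (a sketch):

    for f in class16.conf class32.conf help.conf ndsimonhealth.conf \
             miscicon.conf ndsimon.conf macaddr
    do
        ln -s /etc/$f /shared/var/nds/$f
    done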

  5. Make a backup copy of /etc/nds.conf.

  6. Move /etc/nds.conf to /shared/var/nds.

  7. Edit /shared/var/nds/nds.conf and place the following entries into the file (overwriting any current entries with the same names):

    • n4u.nds.dibdir=/shared/var/nds/dib

    • n4u.server.configdir=/shared/var/nds

    • n4u.server.vardir=/shared/var/nds

    • n4u.nds.preferred-server=localhost

    For the following entries, replace eth0:0 with the name of the cluster-shared Ethernet interface, and replace lo with the name of the loopback interface.

    • n4u.server.interfaces=eth0:0@524,lo@524

    • http.server.interfaces=eth0:0@8008,lo@8008

    • https.server.interfaces=eth0:0@8009,lo@8009

  8. Create a symbolic link from /etc/nds.conf to /shared/var/nds/nds.conf.
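
    Steps 5, 6, and 8 might look like the following (a sketch):

    # Back up the original configuration, then move it to the shared storage
    cp -p /etc/nds.conf /etc/nds.conf.sav
    mv /etc/nds.conf /shared/var/nds/nds.conf
    # After editing /shared/var/nds/nds.conf as described in Step 7,
    # link the expected location to the shared copy
    ln -s /shared/var/nds/nds.conf /etc/nds.conf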

  9. Start ndsd and verify that it runs correctly using the shared storage.

  10. Stop ndsd.

  11. Place ndsd in the cluster manager's list of resources to be hosted.

  12. Remove ndsd from the list of daemons to be started by the init process at boot time.
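
The commands for Steps 11 and 12 depend on your cluster manager and your platform's init system. On a SysV-style Linux system, for example, automatic startup of ndsd might be disabled as follows (a sketch; consult your platform documentation for the equivalent command):

    # Prevent init from starting ndsd at boot; the cluster manager
    # now controls when ndsd runs
    chkconfig ndsd off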

On the Secondary Node

  1. Rename the /var/nds directory (for example, to /var/nds.sav).

    This is not required, but creating a backup at this stage gives you the ability to start over, if necessary, without reinstalling eDirectory.

  2. Create a symbolic link from /var/nds to /shared/var/nds (for example, ln -s /shared/var/nds /var/nds).

  3. Make a backup copy of /etc/nds.conf.

  4. Remove /etc/nds.conf.

  5. Create a symbolic link from /etc/nds.conf to /shared/var/nds/nds.conf.

  6. Place ndsd in the cluster manager's list of resources to be hosted.

  7. Remove ndsd from the list of daemons to be started by the init process at boot time.
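
Steps 1 through 5 on the secondary node might look like the following (a sketch; the links can be created even while the primary node is hosting the shared storage):

    # Set aside the local eDirectory data and configuration
    mv /var/nds /var/nds.sav
    mv /etc/nds.conf /etc/nds.conf.sav    # the .sav file serves as the backup
    # Point both locations at the shared storage
    ln -s /shared/var/nds /var/nds
    ln -s /shared/var/nds/nds.conf /etc/nds.conf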

After the steps for the primary and secondary nodes are completed, start the cluster services. eDirectory and Identity Manager will start on the primary node.

9.1.5 Identity Manager Driver Considerations

Most Identity Manager drivers can run in a clustered configuration. However, the following items need to be considered:

  • The driver executables (.jar files, shared objects, or both) must be installed on each cluster node.

  • If the driver must run on the same server as the application that the driver supports, then the application must also be configured to run as part of the cluster services.

  • If the driver has a configurable location for driver-specific state data, then the location must be on the cluster shared storage.

    Examples include the LDAP driver when used without a change log and the JDBC driver when used in triggerless mode.

  • If the driver has configuration data stored outside of eDirectory, then the configuration data must be on the shared storage or must be duplicated on each cluster node. An example of this is the Manual Task Driver's template directories.