
Novell Identity Manager High Availability on PolyServe Clusters

Novell Cool Solutions: AppNote
By Yan Fitterer


Updated: 21 Oct 2005
 

Note: Special thanks go to Gordon Bookless and Steve Sigafoos at PolyServe for granting Novell access to lab facilities, and invaluable technical advice. Many thanks as well to Stuart Mansell and Daren Roberts (Novell) for mentoring and technical review.

Introduction

Novell Identity Manager (IDM) drivers are point-to-point in nature. The current intrinsic functionality of IDM does not allow for multiple end points on either side of the communication channel. If one end point becomes unavailable, the driver will cease to operate. Depending on the business processes supported by the driver(s), this may not be acceptable, hence the need for high availability.

In a fashion similar to that presented in our June 2005 article for eDirectory on Microsoft servers, this article details a solution for making Identity Manager (and eDirectory) highly available on Novell's SUSE Linux Enterprise 9 (SLES9) Linux platform. Our approach is based on a PolyServe Matrix Server, with two nodes hosting eDirectory. It should be easy to add more nodes to the pool, or to implement the solution on any other platform supported by PolyServe and eDirectory.

Intended Audience

The background sections and the IDM Clustering and PolyServe section are written for both architects and technicians who want to implement highly available eDirectory and IDM drivers on a Linux platform. The Technical Implementation section will be of use to technicians testing or deploying this or a similar solution.

Some proficiency in the Linux shell as well as in the administration and monitoring of the Matrix Server is assumed. The included scripts are for illustration purposes only - they will need to be adapted for local use.

Installation of the Matrix Server is outside the scope of this document, and the availability of a fully functional Matrix Server system (two nodes minimum) is a prerequisite.

About Clustering

Clustering solutions can be broadly divided into three main categories:

  • High Performance Computing
  • High Availability
  • Load Balancing

The solution presented in this AppNote falls in the High Availability category, where the goal is to improve the availability figure for a given service. Typically, implementing High Availability on top of an existing service adds a "9" to the availability figure. In other words, if the service is available 99% of the time (roughly 3.7 days of downtime a year), a well-implemented High Availability system typically makes it available 99.9% of the time (roughly 8.8 hours of downtime a year).
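
The downtime figures above follow directly from the availability percentage. A quick sketch of the arithmetic, assuming a year of 8,760 hours:

# Illustrative only: convert an availability figure into downtime per year.
for availability in 0.99 0.999; do
    awk -v a="$availability" 'BEGIN {
        hours = (1 - a) * 8760
        printf "%.1f%% available => %.1f hours (%.2f days) of downtime per year\n",
               a * 100, hours, hours / 24
    }'
done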

This solution does not add load balancing capabilities, and therefore does not increase the processing capacity of eDirectory/IDM. Only one server in the pair will be active at any one time. This is known as an active/passive setup, where the clustered application/service runs on one of the cluster nodes only at any point in time.

IDM and eDirectory Clustering

eDirectory is by design a distributed and replicated database, and clustering it offers little advantage. Historically (on NetWare), the database was tied to the NetWare SYS volume, which in itself could not be clustered. On Linux, Windows, or other UNIX platforms, the location of the eDirectory database is configurable. By locating it on shared storage, eDirectory can be clustered in active/passive mode. This offers limited benefits in pure eDirectory terms, but does allow for IDM to be effectively clustered for High Availability.

Clustering, or other high-availability features, are not part of the IDM product feature set. On Linux, other flavors of Unix, and Windows, IDM can be clustered as a 'side effect' of clustering eDirectory, as the IDM core engine runs as part of the eDirectory processes. The IDM cache is automatically clustered at the same time, as by default IDM cache files follow the eDirectory database files.

PolyServe Clustering - Matrix Server

PolyServe's approach to clustering in the Matrix Server product lends itself well to both high availability and load balancing configurations. The Matrix Server offers a symmetric Cluster File System (CFS) with a distributed lock manager and a proprietary file system format. Data hosted by the CFS can therefore be accessed in parallel by all nodes in the matrix.

Applications are configured as "vhosts," and nodes in the matrix are assigned to servicing each vhost. The clustered application can then be migrated automatically (node, network or service failure) or manually between all the assigned nodes. The Matrix Server offers both a Java-based graphical interface and a full set of command line tools.

Heartbeat Clustering

A solution for clustering eDirectory and IDM using the open source Heartbeat cluster manager has been published by Novell in this TID (see the References section for the link). The PolyServe solution presented here builds on the work done for that Heartbeat setup.

Benefits of the PolyServe Solution

Although Heartbeat is a capable cluster manager, the PolyServe solution has a number of additional benefits:

  • Multi-purpose cluster scalable beyond two nodes (the SLES 9 version of Heartbeat supports two nodes only)
  • Graphical interface for configuration and management
  • Integrated Fibre Channel fabric fencing
  • Integrated iLO (or other hardware management cards) fencing (Web-based fencing)
  • No need for STONITH devices

However, Heartbeat does provide a low-cost IDM clustering option with minimal hardware requirements and no additional software beyond what is provided as standard with SUSE Linux Enterprise Server. Heartbeat, unlike PolyServe Matrix Server, does not require a SAN.

IDM Clustering and PolyServe

Clustering eDirectory and Identity Manager with PolyServe involves the creation of a new vhost on the Matrix Server. The eDirectory processes are then monitored and managed by a monitor attached to the vhost.

A shared filesystem is of course necessary, and will store the eDirectory configuration, DIB set, cache files, IDM cache, and (optionally) the NICI data. The filesystem must be made available on identical mountpoints on all the nodes that will host the service. Due to the active/passive deployment method, the filesystem will not make use of the symmetric access capability of the PolyServe File System. The filesystem will be available and mounted permanently on all nodes.

Due to the concurrent availability of the shared filesystem on all nodes, the stop script must make absolutely sure that in no case will the eDirectory processes be left running. The example stop script below uses the stop script that ships with eDirectory. As with any active/passive clustering implementation, should more than one instance of eDirectory have concurrent access to the database, severe corruption of the database is likely.
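
As an illustration, a defensive stop can call the standard init script and then refuse to report success while any ndsd process survives. A minimal sketch (illustrative only; the production stop logic used in this article is in the ndsd_meth script shown later):

#!/bin/bash
# Defensive eDirectory stop: never return success while an ndsd process remains.
/etc/init.d/ndsd stop

# Give the daemon up to a minute to exit cleanly.
for i in $(seq 1 12); do
    pgrep -x ndsd > /dev/null || exit 0    # nothing left running: safe to fail over
    sleep 5
done

# Still running: force termination rather than risk a second instance
# opening the shared database.
pkill -9 -x ndsd
sleep 2
pgrep -x ndsd > /dev/null && exit 1        # could not stop ndsd: report failure
exit 0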

A virtual IP address for the vhost is needed - it will be managed by the Matrix Server and assigned dynamically to the active node. eDirectory will be configured to use that IP address and ignore all other addresses that may be available on the active node.

Management of the eDirectory tree is best done with a standalone (i.e., not clustered) instance of iManager. When the eDirectory server migrates between nodes within the Matrix, iManager may be unable to reach the tree for a short period after the eDirectory process has started on the new node. Testing has shown that iManager normally recovers by itself fairly quickly, although you may need to log in again. All eDirectory and IDM command-line tools (ndstrace, ndslogin, ndsconfig, ndsstat, etc.) work as normal if invoked on the node currently hosting the eDirectory vhost. They will not run, of course, if invoked from any other node.
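
For example, a small wrapper can refuse to run a tool unless the cluster IP address is bound locally, using the same interface test as the monitor script shown later (10.12.4.250 is the example address from this article):

#!/bin/bash
# Run an eDirectory command-line tool only on the node currently hosting the vhost.
CLUSTER_IP=10.12.4.250

if /sbin/ifconfig | grep -q "$CLUSTER_IP"; then
    ndsstat    # or ndstrace, ndsconfig, ndslogin, ...
else
    echo "The eDirectory vhost $CLUSTER_IP is not active on this node." >&2
    exit 1
fi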

This solution has been tested only with two nodes, but there is no reason additional nodes shouldn't be allowed to host the vhost.

The eDirectory and IDM software must be installed on each node separately. The local configuration is then amended to point to the shared filesystem. In addition, the NICI data must be synchronized between all hosts.

From the IDM driver perspective, most drivers can be successfully clustered, but you must be aware of the following:

  • The driver executables (.jar files and/or shared objects) must be installed on each node (as is eDirectory).
  • If the driver needs to run locally to the supported application, then the application will need to be clustered alongside eDirectory and IDM. This will require specific and much more complex probe, start, stop and recovery scripts.
  • If the driver has a configurable location for driver-specific state data, then the location must be on the shared filesystem. Examples of this are the LDAP driver when used without a changelog and the JDBC driver when used in triggerless mode.
  • If the driver stores configuration data on the filesystem, then the configuration data must be on the shared storage or must be duplicated on each cluster node. An example of this is the Manual Task Driver's template directories.

Technical Implementation

Setup Overview

This solution was prepared with eDirectory for Linux 8.7.3 and IDM 2.0.1, with the latest service packs for SLES9, eDirectory, and IDM as of June 2005. A similar approach may work with earlier or later versions, but it has not been tested.

These are the required setup steps:

  1. Gather setup information.
  2. Set up the Matrix Server and configure at least one shared filesystem.
  3. Mount the filesystem on all nodes, making the mount persistent.
  4. Install eDirectory on all nodes (but don't configure).
  5. Configure eDirectory on the Primary node.
  6. Install IDM on the primary node (normal setup).
  7. Install IDM on all other nodes (Cluster mode).
  8. Configure eDirectory for Clustering on the Primary node.
  9. Modify the configuration on Secondary nodes to point to the shared database.
  10. Synchronize the NICI data.
  11. Verify eDirectory operation.
  12. Create the PolyServe vhost and relevant scripts.

While all installation steps are described through command-line tools, the PolyServe vhost and monitor can be created through the graphical interface. eDirectory and IDM must be installed and clustered via the command line, but further configuration is done via iManager. An instance of iManager installed outside the cluster will be needed for actual IDM driver configuration.

One of the nodes must be chosen as Primary node for the purpose of the setup procedure. The node considered Primary from the PolyServe perspective need not necessarily be the same.

Note: Although it is technically possible to cluster iManager, that may not achieve significant benefits. This document does not describe how to cluster iManager.

Step 1: Gathering setup information

To complete the process, you will need to do the following:

  • Get root access to the appropriate nodes in the Matrix
  • Satisfy eDirectory installation pre-requisites
  • Have appropriate disk space for the eDirectory database and IDM cache on the shared filesystem
  • Have the installation files for eDirectory and IDM
  • Get the latest patches for eDirectory and IDM
  • Provide an IP address for the clustered eDirectory instance (10.12.4.250 in examples below)
  • Provide a Tree name (new or existing; PST in examples below)
  • Provide an eDirectory Server object name and context (psidm.srvs.pslabs in examples below)
  • Provide an eDirectory admin password (or appropriate access if joining an existing tree)
  • Provide IP addresses for the appropriate nodes in the Matrix (10.12.4.3 and 10.12.4.4 in examples below)

Step 2: Set Up the Matrix Server

Setting up the Matrix is beyond the scope of this document. Please refer to the Matrix Server documentation for setup details.

Step 3: Mount the Filesystem

Ensure that the shared filesystem has sufficient space to host your eDirectory database plus the IDM cache. It is advisable to create a dedicated volume for the eDirectory/IDM database, to limit the impact of out-of-disk-space situations.
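
A quick way to confirm that the filesystem is mounted at the same mountpoint on every node is sketched below, assuming root SSH access between nodes and the example mountpoint /mnt/shared used by the scripts later in this article:

# Verify the shared filesystem is mounted at the same mountpoint on all nodes.
MOUNTPOINT=/mnt/shared
for node in 10.12.4.3 10.12.4.4; do
    if ssh root@"$node" "mount | grep -q ' $MOUNTPOINT '"; then
        echo "$node: $MOUNTPOINT is mounted"
    else
        echo "$node: $MOUNTPOINT is NOT mounted" >&2
    fi
done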

Step 4: Install eDirectory on all nodes

  1. Ensure OpenLDAP is not running (uninstallation is recommended).
  2. Add the gettext package from YaST, as it is not part of the default SLES9 installation.
  3. Verify that ports 389, 524, 636, 8008, and 8009 are available on the Matrix (a quick check is sketched after this list).
  4. Run nds-install from the installation media. See the detailed steps on the Novell Documentation site.
  5. Important: Do NOT configure eDirectory. Ignore the ndsconfig steps from the link above.
  6. Remove eDirectory from the list of services to be started at boot time on all nodes.
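
A minimal sketch for items 3 and 6 above, using standard SLES9 tools (run on each node; the service name ndsd matches the init script used later in this article):

# Item 3: check that none of the eDirectory ports are already in use.
for port in 389 524 636 8008 8009; do
    netstat -ltn | grep -q ":${port} " && echo "Port ${port} is already in use" >&2
done

# Item 6: prevent eDirectory from starting at boot; the Matrix Server starts it instead.
chkconfig ndsd off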

Step 5: Configure eDirectory on the Primary node

Configure eDirectory (using the ndsconfig utility) on the primary node ONLY. You can either join an existing tree or create a new tree. During our tests, we created a new tree with the following command:

ndsconfig add -S psidm -t pst  -n "ou=srvs.o=pslabs" -a "cn=admin.o=pslabs"

Detailed steps are available in the eDirectory documentation.

Leave eDirectory unconfigured on all other nodes.

Step 6: Install IDM on the primary node

Install IDM 2.0.1 on the primary node using the "DirXML Server" option. Run the dirxml_linux.bin executable from the installation media (typically in the linux/setup directory). Detailed steps are available in the IDM documentation.
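
For example (the media mount point shown is illustrative):

# Run the IDM installer from the installation media (path is an assumption).
cd /media/cdrom/linux/setup
./dirxml_linux.bin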

Step 7: Install IDM on other node(s)

The IDM installation command is slightly different on the other node(s), as there is no eDirectory database present locally.

Start the IDM setup with the following command:

./dirxml_linux.bin -DCLUSTER_INSTALL="true"

Step 8: Configure eDirectory for Clustering on the Primary node

The shell script below illustrates the steps needed to cluster eDirectory on the primary node, which are:

  • Move the eDirectory database to the shared storage.
  • Move the eDirectory configuration file to the shared storage.
  • Modify the configuration for new file locations and network parameters.

This clustering approach does not place the eDirectory binaries on the shared filesystem. Patches, upgrades, and other maintenance tasks will therefore have to be applied to each node separately. The same applies to IDM driver code and configuration files. The eDirectory configuration file (nds.conf), however, will be stored on the shared storage, so any changes need to be made only once.

#!/bin/bash

# This script has been tested on Linux systems, and may not work on other platforms.
# Do not use in production without thorough testing.
# The script expects eDirectory to be installed and configured.
# The eDirectory daemon must be stopped before running this script.

# Amend the entries below to suit your environment

srcdir="$(/usr/bin/ndsconfig get n4u.server.vardir | cut -d= -f2)"
dstdir=/mnt/shared/nds
cluster_ip=10.12.4.250
local_iface=lo


function do_error {
  printf "ERROR --- ${1}. Aborting.\n" >&2
  exit 1
}

# Validate environment
test -d "$srcdir" || do_error "Source Directory $srcdir doesn't exist"
test -d "$dstdir" || do_error "Destination directory $dstdir doesn't exist"
test -z "$(ls -A "$dstdir")" || do_error "Destination directory $dstdir isn't empty"
test -w /etc/nds.conf || do_error "eDir configuration file /etc/nds.conf is missing or not writable"

# Is eDir running?
if [ -f "$srcdir/ndsd.pid" ] ; then
  pid=$(ps h -p $(cat "$srcdir/ndsd.pid"))
  test -z "$pid" || do_error "eDirectory is running. Shut it down first"
  do_error "eDirectory seems to have crashed. Clear the mess first"
fi

# Copy the eDirectory 'vardir' to the shared storage:
cp -a "$srcdir"/* "$dstdir" || do_error "Failed to copy eDirectory to shared storage"

# Rename original srcdir for backup purposes
mv "$srcdir" "${srcdir}.old"

# Create link from /var/nds to new location
ln -s "$dstdir" "$srcdir"

# Create various symlinks in the new location pointing to locally-installed files.
ln -s /etc/class16.conf "${dstdir}/class16.conf"
ln -s /etc/class32.conf "${dstdir}/class32.conf"
ln -s /etc/help.conf "${dstdir}/help.conf"
ln -s /etc/ndsimonhealth.conf "${dstdir}/ndsimonhealth.conf"
ln -s /etc/miscicon.conf "${dstdir}/miscicon.conf"
ln -s /etc/ndsimon.conf "${dstdir}/ndsimon.conf"
ln -s /etc/macaddr "${dstdir}/macaddr"

# Backup and move nds.conf
cp -dp /etc/nds.conf /etc/nds.conf.bak
mv /etc/nds.conf "$dstdir"

# Insert new values in nds.conf
ndsconf="${dstdir}/nds.conf"
if [ -z "$(grep n4u.nds.dibdir "$ndsconf")" ] ; then
  printf "n4u.nds.dibdir=${dstdir}/dib\n" >> "$ndsconf"
else
  sed -i.sed "s|n4u.nds.dibdir=.*|n4u.nds.dibdir=${dstdir}/dib|" "$ndsconf"
fi
if [ -z "$(grep n4u.server.configdir "$ndsconf")" ] ; then
  printf "n4u.server.configdir=${dstdir}\n" >> "$ndsconf"
else
  sed -i.sed "s|n4u.server.configdir=.*|n4u.server.configdir=${dstdir}|" "$ndsconf"
fi
if [ -z "$(grep n4u.server.vardir "$ndsconf")" ] ; then
  printf "n4u.server.vardir=${dstdir}\n" >> "$ndsconf"
else
  sed -i.sed "s|n4u.server.vardir=.*|n4u.server.vardir=${dstdir}|" "$ndsconf"
fi
if [ -z "$(grep n4u.nds.preferred-server "$ndsconf")" ] ; then
  printf "n4u.nds.preferred-server=localhost\n" >> "$ndsconf"
else
  sed -i.sed "s/n4u.nds.preferred-server=.*/n4u.nds.preferred-server=localhost/" "$ndsconf"
fi
if [ -z "$(grep n4u.server.interfaces "$ndsconf")" ] ; then
  printf "n4u.server.interfaces=${cluster_ip}@524,${local_iface}@524\n" >> "$ndsconf"
else
  sed -i.sed "s/n4u.server.interfaces=.*/n4u.server.interfaces=${cluster_ip}@524,${local_iface}@524/" "$ndsconf"
fi
if [ -z "$(grep http.server.interfaces "$ndsconf")" ] ; then
  printf "http.server.interfaces=${cluster_ip}@8008,${local_iface}@8008\n" >> "$ndsconf"
else
  sed -i.sed "s/http.server.interfaces=.*/http.server.interfaces=${cluster_ip}@8008,${local_iface}@8008/" "$ndsconf"
fi
if [ -z "$(grep https.server.interfaces "$ndsconf")" ] ; then
  printf "https.server.interfaces=${cluster_ip}@8009,${local_iface}@8009\n" >> "$ndsconf"
else
  sed -i.sed "s/https.server.interfaces=.*/https.server.interfaces=${cluster_ip}@8009,${local_iface}@8009/" "$ndsconf"
fi

# Clean up behind sed (the .sed backup only exists if a substitution ran)
rm -f "${ndsconf}.sed"

# Link nds.conf
ln -s "$ndsconf" /etc/nds.conf

printf "Primary node configured.\n"
exit 0

Step 9: Modify the configuration on Secondary nodes to point to the shared database

On all other nodes, a few simple steps are required to remove any local database, and point the local configuration to the shared storage.

#!/bin/bash

# This script has been tested on Linux systems, and may not work on other platforms.
# Do not use in production without testing.
# This script expects eDirectory to be installed (but not necessarily configured).
# The eDirectory daemon must be stopped before running this script.

# Amend the entries below to suit your environment
srcdir=/var/nds
dstdir=/mnt/shared/nds

function do_error {
  printf "ERROR --- ${1}. Aborting\n" >&2
  exit 1
}

# Validate environment
test -d "$srcdir" || do_error "Source Directory $srcdir doesn't exist"
test -d "$dstdir" || do_error "Destination directory $dstdir doesn't exist"

# Is eDir running?
if [ -f "$srcdir/ndsd.pid" ] ; then
  pid=$(ps h -p $(cat "$srcdir/ndsd.pid"))
  test -z "$pid" || do_error "eDirectory is running. Shut it down first"
  do_error "eDirectory seems to have crashed. Clear the mess first"
fi

# Rename local eDir database for backup purposes
mv "$srcdir" "${srcdir}.old"

# Create link from /var/nds to new location
ln -s "$dstdir" "$srcdir"

# Backup and remove nds.conf
test -f /etc/nds.conf && mv /etc/nds.conf /etc/nds.conf.bak

# Link nds.conf
ln -s "${dstdir}/nds.conf" /etc/nds.conf

printf "Secondary node configured.\n"
exit 0

Step 10: Synchronize the NICI data

eDirectory installs the Novell International Cryptographic Infrastructure (NICI), which provides Public Key Infrastructure (PKI) services to eDirectory. A number of files are initialized locally to store PKI-related material, and this data must be identical on all cluster nodes. The recommended approach is to synchronize the NICI data once, after eDirectory installation.

Ensure eDirectory is not running, then copy the entire contents (including any subdirectories) of the '/var/novell/nici' directory from the primary eDirectory node to all other nodes in the cluster.
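
A minimal sketch of that copy, assuming root SSH access and the example secondary node address 10.12.4.4 (scp -r or a tar pipe would work equally well; NICI file ownership and permissions must be preserved):

# Run on the primary node while eDirectory is stopped on all nodes.
rsync -a /var/novell/nici/ root@10.12.4.4:/var/novell/nici/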

Step 11: Verify eDirectory operation

1. Verify that eDirectory can be started and stopped on all relevant hosts in the Matrix. On each node, start eDirectory manually:

/etc/init.d/ndsd start

2. Verify that ndstrace loads and that eDirectory operates correctly. Then stop eDirectory on the node with:

/etc/init.d/ndsd stop

3. Repeat the process on all nodes.
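
One way to run that check from a single shell, assuming root SSH access to the example nodes (eDirectory is started and stopped on one node at a time, never concurrently):

# Start, check, and stop eDirectory on each node in turn.
for node in 10.12.4.3 10.12.4.4; do
    echo "=== $node ==="
    ssh root@"$node" "/etc/init.d/ndsd start && ndsstat; /etc/init.d/ndsd stop"
done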

Step 12: Create the PolyServe vhost and relevant scripts

The steps below describe a new vhost with two network interfaces (two nodes), automatic failover and no failback. As mentioned earlier, more nodes could be added for extra redundancy. The "no failback" option was chosen due to the interruption in service caused by each transition.

1. Create the new vhost.

Use the 'mx vhost add' command, or the GUI. For example:

mx vhost add --policy nofailback 10.12.4.250 10.12.4.3 10.12.4.4

When the vhost's properties are viewed from the graphical interface, the '10.12.4.250' vhost should now show this:

Figure 1 - Virtual Host properties

2. Create the monitor script.

The monitor script is best located on local storage rather than on one of the shared filesystems. The disadvantage is that multiple copies need to be kept (a distribution sketch follows the sample script below), but the benefit is that if the shared volume becomes inaccessible, the Matrix can still shut down the eDirectory service.

The script below covers the 'start', 'stop' and 'monitor' functions. Keeping them in a single script makes maintenance easier and reduces the risk of discrepancies compared with maintaining separate scripts.

3. Save the script as /opt/polyserve/methods/ndsd_meth

Sample Script

#!/bin/bash

wait_psfs() {

        trap "hangup; exit 2" 1 2 3

	if [[ -n $DEBUG ]]; then
		set -x
	fi

		if [[ -a /etc/redhat-release ]]; then
			PIDLOCK=/var/lock/subsys/pmxs
		elif [[ -a /etc/SUSE-release ]]; then
			if [[ `cat /etc/SUSE-release | awk '$1 ~ /^VER/{print $3}'` = 8 ]]; then
				PIDLOCK=/var/run/pmxs.pid
			else
				PIDLOCK=/var/lock/subsys/pmxs
			fi
		fi

	cnt=0
	while true
	do
		# Service monitor will wait up to 30 minutes for MxS to start and mounts to complete
		if [[ $cnt -ge 1800 ]]; then
			echo "MxS failed to start, exiting.."
			exit 1
		fi
		if [[ -a $PIDLOCK ]]; then
			break
		fi
		sleep 5
		(( cnt = $cnt + 5 ))
		echo "Waited $cnt seconds for mounts to complete."
	done

	# Take a quick look to ensure the OS has all filesystems mounted.
	awk '$1 ~ /^psd/{print $2}' $PSFSTAB | while read FS
	do
		if ( ! /bin/mount | awk '$1 ~ /\/dev/{print $3}' | egrep "^${FS}$" > /dev/null ); then
			echo "$FS is not mounted, exiting.."
			exit 1
		fi
	done
	return 0
}

startup_method() {

	trap "hangup; exit 2" 1 2 3

	if [[ -n $DEBUG ]]; then
		set -x
	fi

	# Determine if virtual host is alive and aliased locally
	if ( ! /sbin/ifconfig | grep $MX_VHOST > /dev/null ); then
        	echo "FAILED: $MX_VHOST is not local on `uname -n`"
        	return 0
	elif ( ! ping -w $TIMEOUT -c 1 $MX_VHOST &> /dev/null ); then
        	echo "FAILED: $MX_VHOST does not respond to ping on `uname -n`"
        	return  0
	fi

	echo "`date '+%c'` Starting NDS Service monitor."
	/etc/init.d/ndsd start
	return 0
}

shutdown_method() {

	trap "hangup; exit 2" 1 2 3

	if [[ -n $DEBUG ]]; then
		set -x
	fi

	# Since a stop occurs first on startup wait for all filesystems to be mounted.
	wait_psfs

	echo "`date '+%c'` Stopping NDS Service monitor."
	/etc/init.d/ndsd stop
	return 0
}


verify_method() {

trap "hangup; exit 2" 1 2 3

	if [[ -n $DEBUG ]]; then
		set -x
	fi

	PIDCHK=0
	# Set these variables according to MxS or commandline execution
	if [[ $MX_ACTIVE_STATE = ACTIVE ]]; then
        	ps -p `cat $vardir/ndsd.pid` > /dev/null  2>&1
		if [[ $? != 0 ]]; then
                	echo "`date '+%c'` Directory server on $MX_SERVER is not running or has crashed."
                	echo "Last known running pid `cat $vardir/ndsd.pid` is not a valid PID"
			PIDCHK=1
		fi
			# run ndsstat to see if ndsstat thinks that it is up
			/etc/init.d/ndsd status >/dev/null 2>&1
			if [[ $? != 0 ]]; then
                		echo "`date '+%c'` Directory server on $MX_SERVER is not running or has crashed."
        			echo "\"ndsd status\" is returning a down state."
				PIDCHK=1
			fi
		# Return true if both tests exit 0
		if [[ $PIDCHK = 1 ]]; then
			return 1
		else
			return 0
		fi
	else
		return 0
	fi
}

######### Main ###########

PROGNAME=`basename $0`

PATH=$PATH:/usr/bin

# Service monitor  usage
# USAGE="USAGE:$PROGNAME"

trap "hangup; exit 2" 1 2 3

if [[ -n $DEBUG ]]; then
	set -x
fi

TIMEOUT=5
PSFSTAB=/etc/polyserve/psfstab
vardir=$( ndsconfig get n4u.server.vardir | cut -d= -f2)

if [[ ! -d $vardir ]]; then
	echo "[ERROR] $vardir does not exist, cannot determine state of NDSD."
	exit 1
fi

# Determine method to execute
case $MX_METHOD in
	start) startup_method
	       ;;
	stop) shutdown_method
	       ;;
	restart) echo "Restarting Service monitor `date '+%D %T'`"
		 shutdown_method
		 startup_method
	       ;;
	probe) verify_method
	       ;;
	verify) verify_method
	       ;;
	*) echo $USAGE
	   exit 1
	       ;;
esac

# Exit to calling program exit code
exit $?
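
Because the method script lives on local storage, every node needs its own identical copy. A small distribution sketch, assuming root SSH access and the example node addresses:

# Copy the method script to each node and make it executable.
for node in 10.12.4.3 10.12.4.4; do
    ssh root@"$node" "mkdir -p /opt/polyserve/methods"
    scp /opt/polyserve/methods/ndsd_meth root@"$node":/opt/polyserve/methods/
    ssh root@"$node" "chmod 755 /opt/polyserve/methods/ndsd_meth"
done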

4. Add the monitor script to vhost.

We now need to tie the monitor script we just created to our vhost:

mx service add --type CUSTOM  --probeSeverity autorecover \
   --eventSeverity consider --timeout 60 --frequency 60 \
   --parameters '"/opt/polyserve/methods/ndsd_meth"' \
   --ordering SERIAL --recoveryScript '""' \
   --recoveryTimeout 0 --startScript \
   '"/opt/polyserve/methods/ndsd_meth"' \
   --startTimeout 120 --stopScript \
   '"/opt/polyserve/methods/ndsd_meth"' \
   --stopTimeout 120 --priority 0 10.12.4.250:ndsd

The monitor now should look like this in the graphical console:

Figure 2 - Service Monitor Properties

And the Advanced screen should look like this:

Figure 3 - Advanced Service Configuration

5. Start the service.

The service is started automatically on the vhost's primary node upon instantiation. We should now have a fully configured eDirectory and IDM cluster.

The service can then be enabled on each node with:

mx service enable 10.12.4.250:ndsd 10.12.4.3
mx service enable 10.12.4.250:ndsd 10.12.4.4

References and links

PolyServe Matrix Server
http://www.polyserve.com/products_mslinux.php
SUSE Linux Enterprise Server
http://www.novell.com/products/linuxenterpriseserver
Novell eDirectory
http://www.novell.com/products/edirectory/
Novell eDirectory Documentation
http://www.novell.com/documentation/edir873/index.html
Novell Nsure Identity Manager
http://www.novell.com/products/nsureidentitymanager
Novell Nsure Identity Manager Documentation
http://www.novell.com/documentation/dirxml20/index.html
Clusters
http://en.wikipedia.org/wiki/Computer_cluster
Heartbeat
http://www.linux-ha.org/HeartbeatProgram
Novell Nsure Identity Manager - High Availability Cluster - Heartbeat
http://support.novell.com/cgi-bin/search/searchtid.cgi?/10093317.htm

