
Configuring a Linux High Availability Cluster for IDM 3 and eDirectory 8.8

Novell Cool Solutions: AppNote
By Jon Hardman


Posted: 14 Feb 2007
 

Configuring a Linux High Availability Cluster for IDM 3 and eDirectory 8.8 using SLES 10 and iSCSI

Derived from the Novell Identity Manager 2.0 idmcluster setup:
http://support.novell.com/cgi-bin/search/searchtid.cgi?10093317.htm

Overview

Many IDM customers question whether Identity Manager can be used in a clustered environment. There are many different definitions of clustering, but IDM 2.0 did support a "High Availability" (i.e., failover) cluster on Linux using eDirectory 8.7.3.x. This solution can also be implemented on SUSE Linux Enterprise Server 10 (SLES 10) using eDirectory 8.8.x, IDM 3.x, and your choice of shared storage (in this case, iSCSI).

This AppNote describes how to install and configure eDirectory 8.8.x and IDM 3.x using High Availability Clustering (non-crm) on SUSE Linux Enterprise Server 10, "out of the box."

Environment

The hardware used in this document is generic, Intel-based commodity hardware with single SATA hard disk drives (/dev/sda). Each drive was partitioned with a /boot (/dev/sda1), swap (/dev/sda2), and / (/dev/sda3) partition, as well as a shared partition on the iSCSI server (/dev/sda4).

  • The filesystem used for the iSCSI partition is reiserfs.
  • The eDirectory version for the test is 8.8.1.
  • IDM was verified to work for both IDM 3.0.x and IDM 3.5.x.

The environment was tested using the eDirectory driver to connect the 'HA Tree' with another eDirectory tree running in a separate environment. This scenario was scale-tested to 500,000 user objects (250,000 in each tree), using password sync 2.0 to synchronize users' passwords between the two trees. An hourly cron job was put in place on each of node1 and node2 of the HA cluster, so that the cluster performed an HA standby fail-over twice an hour.
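The original setup does not list the cron entries used for that test; a hypothetical pair of crontab lines (one per node, offset by half an hour so the cluster fails over twice an hour) might look like this, using the same standby command the failover test below uses:

```shell
# Hypothetical crontab entries (crontab -e as root); the schedule is an
# assumption -- any offset between the two nodes works.
# On node1: request standby at the top of each hour.
0 * * * *  /etc/init.d/heartbeat standby
# On node2: request standby at half past, moving resources back.
30 * * * * /etc/init.d/heartbeat standby
```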

Figure 1 - Environment overview

Software Requirements

iSCSI target:

  • SLES 10
  • iscsitarget
  • yast2-iscsi-server

HA Cluster Nodes:

  • SLES 10
  • open-iscsi
  • yast2-iscsi-client
  • HA Pattern
  • eDirectory 8.8 or greater
  • IDM 3.0 or greater

Hardware Requirements

iSCSI target:

  • At least one network card
  • Available disk space to share as an iSCSI partition

HA Cluster Nodes:

  • One available serial port per node
  • Null-modem cable (for STONITH control)
  • 2 NIC cards per node (one for external access, the other for Heartbeat private connection)
  • Crossover network cable (for private HA connection)

Installation

iSCSI Target

1. During SLES 10 installation, create a separate partition (in this case /dev/sda4) to be used as the iSCSI shared storage partition.

In a testing environment, reiserfs showed more stability than ext3, though any supported file system should work. This partition will be mounted by /etc/fstab on the local box as "/shared".

2. Include the yast2-iscsi-server and iscsitarget packages.

When the SLES 10 install is completed, set it up as the iscsi server as follows:

1. Go to Yast2 -> Network Services -> iSCSI Target.

2. In the 'Service' tab, set Service Start to 'When Booting'.

3. Under 'Global' tab, set any authentication if desired/required. (No authentication is used in this example.)

4. In the Targets tab, add the partition /dev/sda4. You shouldn't need to edit the Target, Identifier, or LUN entries (see the figure below).

Figure 2 - Adding the /dev/sda4 partition

5. Click Finish and allow the iSCSI services to restart.

iSCSI shared storage should now be available to your HA nodes.

eDirectory Setup: Node 1 Installation

1. Install SLES 10, making sure to add yast2-iscsi-client, open-iscsi, and HA pattern.

2. Set one NIC to your externally facing IP address, and the second NIC to an internal address that will be used by HA. In this example, the hostname is node1, eth0 is 192.168.255.190 (external), and eth1 is 10.0.0.1 (private HA).

3. Finish the installation.

iSCSI Setup

1. Run mkdir /shared.

2. Go to Yast2 > Network Services > iSCSI Initiator.

3. Under 'Service' tab, set Service Start to 'When Booting'. 'Connected Targets' should be blank.

4. In the Discovered Targets tab, click Discovery.

5. Enter the iSCSI target server's IP address (the default port should be OK). Discovery should find the iSCSI target server's partition.

6. Log in (No Authentication). Discovered Targets > Connected should now read "true."

7. Go back to the 'Connected Targets' tab and 'Toggle Start-Up' to set Start-Up to 'automatic'.

8. Click Finish.

9. Run the command 'dmesg' to show the SCSI device /dev/sdb as available.

10. Mount the iSCSI target (/dev/sdb) as /shared (mount -t reiserfs /dev/sdb /shared).

eDirectory and IDM Installation

1. Manually create a virtual adapter (ifconfig eth0:0 192.168.255.192). This will be your HA cluster's virtual IP address.

2. Install eDirectory 8.8.x.

3. Set the eDirectory PATH (. /opt/novell/eDirectory/bin/ndspath).

4. Configure eDirectory (with ndsmanage or ndsconfig): put the data and instance files on /shared, put nds.conf in /root/, and have eDirectory listen on the HA virtual IP address. You may also want to set the eDirectory NCP server name to something unique to the cluster (such as svr192 in this example).
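A hedged ndsconfig invocation matching step 4 might look like the following. The tree name, context, and admin name are placeholders, and each flag should be verified against the ndsconfig help for your eDirectory 8.8 build before use:

```shell
# Sketch only -- verify every flag against your eDirectory 8.8 ndsconfig.
# Data on the shared partition, nds.conf in /root, the server listening
# on the cluster's virtual IP, and NCP server name svr192.
ndsconfig new -t HA_TREE -n o=novell -a admin.novell \
    -S svr192 \
    -d /shared/data \
    --config-file /root/nds.conf \
    -B 192.168.255.192@524
```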

5. Verify eDirectory installation (ndsstat). eDirectory should be up and running on Node 1.

6. Shut down eDirectory (ndsmanage stopall).

7. Edit /root/nds.conf and change the preferred server to your HA virtual IP address. In this case, the entry would read "n4u.nds.preferred-server=192.168.255.192".

8. Edit /etc/hosts and verify all node and iSCSI server entries (comment out "127.0.0.2" if it is in the file).
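A minimal /etc/hosts consistent with the addresses used in this AppNote could read as follows; the host names on the private 10.x lines are our own invention, and the SUSE default "127.0.0.2" line is shown commented out so the host name never resolves to a loopback address:

```shell
# Illustrative /etc/hosts -- addresses from this AppNote.
127.0.0.1        localhost
#127.0.0.2       node1
192.168.255.190  node1
192.168.255.191  node2
192.168.255.192  svr192     # HA cluster virtual IP
10.0.0.1         node1-ha   # private Heartbeat link (name is ours)
10.0.0.2         node2-ha
```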

9. Restart eDirectory (ndsmanage startall).

10. Install IDM using the Metadirectory Server install pattern as usual.

Disabling ndsd Start at Boot Time

1. Go to Yast2 > System > System Services (Runlevel)

2. Disable the ndsd start at boot (you could also edit the appropriate files in the /etc/rc.d runlevels).

3. Click Finish.

ndsd should now be shut down on Node 1.

eDirectory Setup: Node 2 Installation

Now that eDirectory and IDM are installed and configured on node 1, you must configure node 2.

On Node 1:

1. Shut down eDirectory if running (ndsmanage stopall).

2. Run "umount /shared" (be sure the ndsd process has stopped or /shared may not unmount correctly).

3. Release your virtual IP address (ifconfig eth0:0 down).

On Node 2:

1. Install SLES 10, making sure to add yast2-iscsi-client, open-iscsi, and HA pattern.

2. Set one NIC to your externally facing IP address, and the second NIC to an internal address that will be used by HA. In this example, the hostname is node2, eth0 is 192.168.255.191 (external), and eth1 is 10.0.0.2 (private HA).

3. Finish the installation.

iSCSI Setup

1. Run "mkdir /shared".

2. Go to Yast2 > Network Services > iSCSI Initiator.

3. In the Service tab, set Service Start to 'When Booting'. 'Connected Targets' should be blank.

4. In the Discovered Targets tab, click Discovery.

5. Enter the iSCSI target server's IP address (the default port should be OK). Discovery should find the iSCSI target server's partition.

6. Log in (no authentication). Discovered Targets -> Connected should now read 'true'.

7. Go back to the 'Connected Targets' tab and use 'Toggle Start-Up' to set Start-Up to 'automatic'.

8. Click Finish.

9. Run the command 'dmesg' to show the SCSI device /dev/sdb as available.

10. Mount the iSCSI target (/dev/sdb) as /shared (mount -t reiserfs /dev/sdb /shared).

eDirectory and IDM Installation

1. Manually create a virtual adapter (ifconfig eth0:0 192.168.255.192). This will be your HA cluster's virtual IP address.

2. Install eDirectory 8.8.x, but do not configure it.

3. Set the eDirectory PATH (. /opt/novell/eDirectory/bin/ndspath).

4. Move the NICI directory (mv /var/opt/novell/nici /var/opt/novell/nici.old).

5. Copy the NICI information from node1 to node2 (scp -r root@node1:/var/opt/novell/nici /var/opt/novell/).

6. Copy the node1 eDir instances file from node1 to node2 (scp root@node1:/etc/opt/novell/eDirectory/conf/.edir/instances.0 /etc/opt/novell/eDirectory/conf/.edir/).

7. Copy the node1 nds.conf file to node2 (scp root@node1:/root/nds.conf /root/).

8. Start eDirectory (ndsmanage startall). If configured correctly, eDirectory should now be up and running on node 2 (ndsstat).

9. Edit /etc/hosts and verify all node and iSCSI server entries (comment out "127.0.0.2" if it is in the file).

Now you need to install IDM:

1. Install IDM using the CLUSTER_INSTALL option. This will install the IDM files without any interaction with eDirectory (./install.bin -DCLUSTER_INSTALL=true).

2. Restart eDirectory (ndsmanage stopall && ndsmanage startall).

Next, disable ndsd start at boot time:

1. Go to Yast2 > System > System Services (Runlevel).

2. Disable ndsd start at boot (you could also edit the appropriate files in the /etc/rc.d runlevels).

3. Click Finish.

eDirectory and IDM should now be installed on node 2. For consistency, switch back to node 1.

On Node 2:

1. Shut down eDirectory (ndsmanage stopall).

2. Run "umount /shared" (be sure the ndsd process has stopped or /shared may not unmount correctly).

3. Release your virtual IP address (ifconfig eth0:0 down).

On Node 1:

1. Manually create a virtual adapter (ifconfig eth0:0 192.168.255.192). This will be your HA cluster's virtual IP address.

2. Mount the /shared iSCSI partition (mount -t reiserfs /dev/sdb /shared).

3. Set the eDirectory PATH (. /opt/novell/eDirectory/bin/ndspath).

4. Start eDirectory (ndsmanage startall).

High Availability and STONITH Setup

Node 1 Setup

1. Go to Yast2 > System > High Availability.

2. Add node2 to the cluster.

3. Under 'Media Configuration', set Heartbeat Medium to Broadcast, and select your private address network interface (in this case, eth1).

4. Set startup to 'On at boot'.

5. Click Finish.

6. Stop the heartbeat (/etc/init.d/heartbeat stop).

7. Edit /etc/ha.d/ha.cf and add serial device information for "Meatware" STONITH control.

8. Verify the HA broadcast device (eth1 in this case, the private address) and serial device (/dev/ttyS0).

Your ha.cf file should look similar to this:

------------------------ start file snippet /etc/ha.d/ha.cf ------------------------
#compression_threshold 2
debugfile /var/log/ha-debug
logfile /var/log/ha-log
#logfacility  local0
keepalive 1
deadtime 15
warntime 7
initdead 120
baud 19200
serial /dev/ttyS0
udpport 694
bcast eth1
stonith_host node1 meatware node2
stonith_host node2 meatware node1
node node1
node node2
auto_failback on
#ping     xxx.xxx.xxx.xxx
#respawn    hacluster /usr/lib/heartbeat/ipfail
------------------------ end file snippet ------------------------
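The timing values in ha.cf follow Heartbeat's usual guidance: warntime sits between keepalive and deadtime, and initdead is well above deadtime to allow for slow boots. A small shell sketch (the check helper is ours) makes the relationship explicit:

```shell
#!/bin/sh
# Verify the ha.cf timing relationships used in this AppNote:
# keepalive < warntime < deadtime < initdead.
check() {
    # $1 and $2 are values in seconds, $3 is a label for the report
    if [ "$1" -lt "$2" ]; then
        echo "ok:  $3"
    else
        echo "BAD: $3"
    fi
}

check 1 7    "keepalive (1s) < warntime (7s)"
check 7 15   "warntime (7s) < deadtime (15s)"
check 15 120 "deadtime (15s) < initdead (120s)"
```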

9. Create a symbolic link from /etc/init.d/ndsd to the /etc/ha.d/resource.d/ directory (ln -s /etc/init.d/ndsd /etc/ha.d/resource.d/ndsd).

10. Edit /etc/ha.d/haresources and add the entries to configure the cluster's virtual IP address and start ndsd.

------------------------ start file snippet /etc/ha.d/haresources ------------------------
node1 \
    IPaddr2::192.168.255.192/24/eth0:0 \
    ndsd
------------------------ end file snippet ------------------------

11. Create a shell script in /etc/ha.d/resource.d/ to mount the iSCSI share when ndsd starts (vi /etc/ha.d/resource.d/mountiscsi).

------------------------ start file snippet /etc/ha.d/resource.d/mountiscsi ------------------------
#!/bin/sh
# Mount the shared iSCSI partition before ndsd starts
mount -t reiserfs /dev/sdb /shared
------------------------ end file snippet ------------------------

12. Run "chmod 775 /etc/ha.d/resource.d/mountiscsi".
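Heartbeat may invoke a resource script when the share is already mounted, so you may prefer an idempotent variant of mountiscsi. A sketch (the is_mounted helper is ours) that consults /proc/mounts and also skips the mount when the device is absent:

```shell
#!/bin/sh
# Idempotent variant of mountiscsi: mount the iSCSI share only if
# /shared is not already listed in /proc/mounts.
is_mounted() {
    # match the mount point as the second whitespace-separated field
    grep -qs " $1 " /proc/mounts
}

if is_mounted /shared; then
    echo "/shared already mounted"
elif [ -b /dev/sdb ]; then
    mount -t reiserfs /dev/sdb /shared
else
    echo "/dev/sdb not present; skipping"
fi
```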

13. Edit /etc/init.d/pre_ndsd_start and add a line that calls /etc/ha.d/resource.d/mountiscsi.

------------------------ start file snippet /etc/init.d/pre_ndsd_start ------------------------
/etc/ha.d/resource.d/mountiscsi
------------------------ end file snippet ------------------------

14. Create a shell script in /etc/ha.d/resource.d/ to unmount the iSCSI share when ndsd stops (vi /etc/ha.d/resource.d/umountiscsi).

------------------------ start file snippet /etc/ha.d/resource.d/umountiscsi ------------------------
#!/bin/sh
# Unmount the shared iSCSI partition after ndsd stops
umount -f /shared
------------------------ end file snippet ------------------------

15. Run "chmod 775 /etc/ha.d/resource.d/umountiscsi".

16. Edit /etc/init.d/post_ndsd_stop and add a line that calls /etc/ha.d/resource.d/umountiscsi.

------------------------ start file snippet /etc/init.d/post_ndsd_stop ------------------------
/etc/ha.d/resource.d/umountiscsi
------------------------ end file snippet ------------------------

17. Start the heartbeat (/etc/init.d/heartbeat start).

Node 2 Setup

1. Go to Yast2 > System > High Availability.

2. Add node1.

3. Under 'Media Configuration', set Heartbeat Medium to Broadcast, and select your private address network interface (in this case, eth1).

4. Set startup to 'On at boot'.

5. Click Finish.

6. Stop the heartbeat if running (/etc/init.d/heartbeat stop).

7. Run "mv /etc/ha.d/ha.cf /etc/ha.d/ha.cf.bak".

8. Copy ha.cf from node1 to node2 (scp root@node1:/etc/ha.d/ha.cf /etc/ha.d/).

9. Copy haresources from node1 to node2 (scp root@node1:/etc/ha.d/haresources /etc/ha.d/).

10. Create a symbolic link from /etc/init.d/ndsd to the /etc/ha.d/resource.d/ directory (ln -s /etc/init.d/ndsd /etc/ha.d/resource.d/ndsd).

11. Copy /etc/ha.d/resource.d/mountiscsi from node1 to node2 (scp root@node1:/etc/ha.d/resource.d/mountiscsi /etc/ha.d/resource.d/).

12. Copy /etc/ha.d/resource.d/umountiscsi from node1 to node2 (scp root@node1:/etc/ha.d/resource.d/umountiscsi /etc/ha.d/resource.d/).

13. Copy pre_ndsd_start from node1 to node2 (scp root@node1:/etc/init.d/pre_ndsd_start /etc/init.d/).

14. Copy post_ndsd_stop from node1 to node2 (scp root@node1:/etc/init.d/post_ndsd_stop /etc/init.d/).

15. Start the heartbeat (/etc/init.d/heartbeat start).

High Availability should now be functioning and available.

Testing eDirectory HA Failover

1. On node 1, run "/etc/init.d/heartbeat standby". You can watch the log messages for details (tail -f /var/log/ha-log on both nodes).

The log messages should show the following things:

  • node1 wants to go on standby
  • node2 verifying node1's standby request
  • node1 shutting down ndsd (if it's running)
  • node1 'releasing' resources
  • node2 creating the virtual IP
  • node2 issuing ndsd start

2. Run "/etc/init.d/heartbeat standby" on node2 to fail back to node1.
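To see at a glance which role a node currently holds, you can check whether the cluster's virtual IP is configured locally. A small sketch (the node_role helper is ours) that scans interface output on its stdin:

```shell
#!/bin/sh
# Report active/standby by scanning interface output for the cluster's
# virtual IP. On a node, feed it the output of `ifconfig eth0:0`.
node_role() {
    if grep -q "192.168.255.192"; then
        echo active
    else
        echo standby
    fi
}

# On a cluster node you would run:
#   ifconfig eth0:0 2>/dev/null | node_role
echo "inet addr:192.168.255.192" | node_role   # -> active
```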

Now you can import/configure any IDM drivers that you want to run in the HA environment. Set the drivers to 'auto start' so they will start when eDirectory starts on HA startup.

Conclusion

This scenario can serve as a baseline for commercial or value-added clustering solutions, or can stand on its own as an excellent proof of concept for high availability of eDirectory and IDM. It is just another example of how open source and open standards can save you time and money.


Novell Cool Solutions (corporate web communities) are produced by WebWise Solutions. www.webwiseone.com

© 2014 Novell