The DHCP server should be installed either on all nodes in the cluster or on the nodes identified to run DHCP.
NOTE: In a cluster setup, if you need to create a user other than the default user, Runtime Credentials must be set on all nodes in the DHCP cluster. For more information, see Section 6.3, Setting Runtime Credentials.
Create a shared NSS pool and volume for the DHCP server and cluster-enable the shared pool. You will configure the cluster resource later for DHCP services. For information, see OES 11 SP3: Novell Cluster Services for Linux Administration Guide. You need a unique, static IP address in the same IP subnet as the cluster to assign as the IP address for this DHCP cluster resource.
To ensure that the Novell Cluster Services is set up properly:
Log in to iManager.
In Roles and Tasks, select Clusters > My Clusters, then select the cluster.
If the cluster does not appear in your personalized list of clusters to manage, you can add it. Click Add, browse to and select the cluster, then click OK. Wait for the cluster to appear in the list and report its status, then select the cluster.
Select the check box next to the Cluster resource object that you created for the shared NSS pool, then click the Details link.
Click the Preferred Nodes tab to view a list of the nodes that are assigned as the preferred nodes for failover and migration.
After executing these steps, you can mount the shared volume on the preferred nodes by using the Novell Client. The shared volume is mounted on the preferred node so that the directories and lease files are created. This process also assigns rights to the shared volume.
Ensure that the association between the DHCP Server object and the DHCP Service object is set by using Java Console.
By default, the DHCP server uses the dhcpd user that is created in the local system during the installation process. If you want to use another user, create that user by using YaST.
After creating the user, update the /etc/sysconfig/dhcpd file and set the value of the DHCPD_RUN_AS variable to the new user.
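For illustration, if the new user were named dhcpuser (a hypothetical name; substitute the user you actually created in YaST), the relevant line in /etc/sysconfig/dhcpd would look like this:

```
# /etc/sysconfig/dhcpd -- run the DHCP daemon as a non-default user.
# "dhcpuser" is a hypothetical name; use the user created in YaST.
DHCPD_RUN_AS="dhcpuser"
```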
Click the Create User task in iManager to open the Create User window. Specify the details, then click OK to create the dhcpd user (or the new user) in eDirectory.
The user created in Step 4 must be LUM-enabled. To do this, click the Enable Users for Linux task. This opens the Enable Users for Linux window. Search for the user created in Step 4, then select that user.
Make sure that every user belongs to a primary group. To add a user to a group, search for an existing group object.
Select the DHCPGroup object from the list.
Select the workstations to which the Linux-enabled user should have access.
Click OK to confirm the selection.
The user is now Linux-enabled, included in the DHCP Group, and granted access to cluster nodes.
Update the UID of the user created above to the dhcpd user's default UID. Select the Modify User task in iManager, select the user, go to the user's Linux Profile tab, then modify the User ID to the dhcpd user's default UID.
Mount the shared volume on one of the nodes in the cluster.
Execute the following command at the command prompt:
/opt/novell/dhcp/bin/ncs_dir.sh <MountPath> <FQDN of Username with tree-name>
The MountPath parameter indicates the target directory in the volume where DHCP-specific directories are created.
For example, /opt/novell/dhcp/bin/ncs_dir.sh /media/nss/DHCPVOL/ cn=dhcpd.o=novell.T=MyTree;
When the script is executed, it creates the following folders:
The script also takes care of assigning permissions for these directories.
Copy the /etc/dhcpd.conf file to the /media/nss/DHCPVOL/etc directory and modify the LDAP attributes as required.
For example:
ldap-server "192.168.0.1";
ldap-dhcp-server-cn "DHCP_acme";
Set the ldap-server attribute with the shared NSS pool IP Address.
Set the ldap-dhcp-server-cn attribute with the name of the DHCP server object that you want to use.
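Putting the two attributes together, the LDAP-related portion of the copied /media/nss/DHCPVOL/etc/dhcpd.conf might resemble the following (the address and server name are the example values from above, not required ones):

```
# LDAP-related settings in /media/nss/DHCPVOL/etc/dhcpd.conf
ldap-server "192.168.0.1";         # IP address of the shared NSS pool
ldap-dhcp-server-cn "DHCP_acme";   # name of the DHCP Server object to use
```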
Enable hard links on the shared volume that hosts the dhcpd.conf and dhcpd.leases files (for example, DHCPVOL).
Invoke nsscon in the Linux terminal and run the command that enables hard links on the volume.
To ensure that hard links are enabled, execute the following commands in the shared volume:
touch testfile.txt
ln testfile.txt testlink.txt
unlink testlink.txt
rm testfile.txt
If hard links are enabled, these commands execute without errors.
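The manual check above can be wrapped in a small script. This is only a sketch: run it against the mounted shared volume (for example, /media/nss/DHCPVOL); /tmp is used below purely as a stand-in directory.

```shell
#!/bin/sh
# Sketch: check whether the file system under a directory allows hard links.
check_hardlinks() {
    dir=$1
    touch "$dir/testfile.txt" || return 1
    # ln fails on volumes where hard links are not enabled
    ln "$dir/testfile.txt" "$dir/testlink.txt" || { rm -f "$dir/testfile.txt"; return 1; }
    unlink "$dir/testlink.txt"
    rm -f "$dir/testfile.txt"
}

# /tmp is a stand-in; use the shared volume mount point in practice.
if check_hardlinks /tmp; then
    echo "hard links enabled"
else
    echo "hard links NOT enabled"
fi
```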
Open a terminal on the node where the shared volume is mounted and execute the following command at the prompt:
dhcpd -cf /media/nss/DHCPVOL/etc/dhcpd.conf -lf /media/nss/DHCPVOL/var/lib/dhcp/db/dhcpd.leases
This step ensures that the DHCP server can work on a cluster setup with shared volumes.
Stop the server by executing the following command at the prompt:
killproc -p /var/lib/dhcp/var/run/dhcpd.pid -TERM /usr/sbin/dhcpd
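In essence, killproc reads the process ID from the pid file and sends the requested signal to that process. The following sketch shows the equivalent using plain kill; the pid file path and the sleep command are illustrative stand-ins, not the real dhcpd paths.

```shell
#!/bin/sh
# Sketch of what "killproc -p <pidfile> -TERM <daemon>" amounts to:
# read the PID from the pid file and send SIGTERM to it.
PIDFILE=/tmp/demo_dhcpd.pid    # illustrative path, not the real dhcpd.pid

sleep 60 &                     # dummy daemon standing in for /usr/sbin/dhcpd
echo $! > "$PIDFILE"           # a real daemon writes this file itself

kill -TERM "$(cat "$PIDFILE")" # graceful termination, as -TERM requests
wait 2>/dev/null               # reap the terminated process
rm -f "$PIDFILE"
```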
In iManager, select Clusters > My Clusters, select the cluster, then select the Cluster Options tab.
Select the DHCP cluster resource that was created as part of the prerequisites, then click the Details link. The Cluster Pool Properties are displayed. Click the Scripts tab. You can now view or edit the load or unload scripts.
If you modify a script, click Apply to save your changes before you leave the page. Changes do not take effect until you take the resource offline and bring it online again.
Ensure that the DHCP load script is the same as specified in DHCP Load Script. Click Apply if you make changes.
Ensure that the DHCP unload script is the same as specified in DHCP Unload Script. Click Apply if you make changes.
Ensure that the DHCP monitor script is the same as specified in Configuring the DHCP Monitor Script. Click Apply if you make changes.
Click OK to save the changes.
Set the DHCP resource online. Select the Cluster Manager tab, select the check box next to the DHCP resource, then click Online.
The load script contains commands to start the DHCP service. The load script appears similar to the following example:
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
exit_on_error nss /poolact=DHCPPOOL
exit_on_error ncpcon mount DHCPVOL=254
exit_on_error add_secondary_ipaddress 10.10.2.1
exit_on_error ncpcon bind --ncpservername=DHCPCLUSTER-DHCPPOOL-SERVER --ipaddress=10.10.2.1
exit 0
Add the following line to the script before exit 0 to load DHCP:
exit_on_error /opt/novell/dhcp/bin/cluster_dhcpd.sh -m <MOUNT_POINT>
For example: MOUNT_POINT=/media/nss/DHCPVOL
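For illustration, here is the load script from the example above with the DHCP line added before exit 0. The pool, volume, IP address, and mount point are the example values from this section, not required ones:

```
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
exit_on_error nss /poolact=DHCPPOOL
exit_on_error ncpcon mount DHCPVOL=254
exit_on_error add_secondary_ipaddress 10.10.2.1
exit_on_error ncpcon bind --ncpservername=DHCPCLUSTER-DHCPPOOL-SERVER --ipaddress=10.10.2.1
# start DHCP from the shared volume (example mount point)
exit_on_error /opt/novell/dhcp/bin/cluster_dhcpd.sh -m /media/nss/DHCPVOL
exit 0
```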
Click Apply, then continue with the unload script configuration.
The unload script contains commands to stop the DHCP service. The unload script appears similar to the following example:
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
ignore_error ncpcon unbind --ncpservername=DHCPCLUSTER-DHCPPOOL-SERVER --ipaddress=10.10.2.1
ignore_error del_secondary_ipaddress 10.10.2.1
ignore_error nss /pooldeact=DHCPPOOL
exit 0
Add the following line after the . /opt/novell/ncs/lib/ncsfuncs statement:
ignore_error killproc -p /var/lib/dhcp/var/run/dhcpd.pid -TERM /usr/sbin/dhcpd
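For illustration, here is the unload script from the example above with the killproc line added after the ncsfuncs statement. The server name, IP address, and pool name are the example values from this section:

```
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
# stop the DHCP daemon before unbinding and deactivating the pool
ignore_error killproc -p /var/lib/dhcp/var/run/dhcpd.pid -TERM /usr/sbin/dhcpd
ignore_error ncpcon unbind --ncpservername=DHCPCLUSTER-DHCPPOOL-SERVER --ipaddress=10.10.2.1
ignore_error del_secondary_ipaddress 10.10.2.1
ignore_error nss /pooldeact=DHCPPOOL
exit 0
```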
The path for the dhcpd.pid file changed between OES 11 and OES 11 SP1. In OES 11, the DHCP process ID is located in /var/run/dhcpd.pid. In OES 11 SP1 and later versions, it is located in /var/lib/dhcp/var/run/dhcpd.pid. During a cluster upgrade from OES 11 to OES 11 SP1 or later, you must change the path for dhcpd.pid. For more information, see Changing the Path for dhcpd.pid.
During a cluster upgrade from OES 11 to OES 11 SP1 or later, you must modify the location of the dhcpd.pid file in the unload script from /var/run/dhcpd.pid to /var/lib/dhcp/var/run/dhcpd.pid. After you modify the script, bring the resource online only on OES 11 SP1 or later nodes.
In your OES cluster, upgrade one or more nodes to OES 11 SP1 or later.
At least one of the upgraded nodes should appear in the DHCP resource's preferred nodes list. If none does, modify the resource's preferred nodes list. For information about how to set preferred nodes, see the OES 11 SP3: Novell Cluster Services for Linux Administration Guide.
Cluster migrate the DHCP resource to an OES 11 SP1 or later node in its preferred nodes list:
Log in as the root user to the OES node where the resource is running, then open a terminal console.
At the command prompt, enter
cluster migrate <dhcp_resource_name> <oes11sp1_node_name>
The DHCP resource goes offline on the OES 11 node and comes online on the specified OES 11 SP1 or later node.
Log in to iManager, click Clusters > My Clusters, select the cluster, then click the Cluster Manager tab.
On the Cluster Manager tab, select the check box next to the DHCP resource, then click Offline.
At a command prompt on the OES 11 SP1 or later cluster node, manually stop the DHCP process by entering:
killproc -p /var/lib/dhcp/var/run/dhcpd.pid -TERM /usr/sbin/dhcpd
You must do this because the path in the old unload script is different from the path in OES 11 SP1 and later versions.
In iManager, click the Cluster Options tab, then click the DHCP resource link to open its Properties page.
Modify the path for the dhcpd.pid file in the unload script for the DHCP resource:
Click the Scripts tab, then click Unload Script.
Look for the following line in the DHCP unload script from OES 11:
ignore_error killproc -p /var/run/dhcpd.pid -TERM /usr/sbin/dhcpd
Change it to the following for OES 11 SP1 and later versions:
ignore_error killproc -p /var/lib/dhcp/var/run/dhcpd.pid -TERM /usr/sbin/dhcpd
Click Apply to save the script changes.
Click the Preferred Nodes tab, remove the OES 11 nodes from the list, then click Apply.
After the unload script change, you want the DHCP resource to fail over only to OES 11 SP1 or later nodes. This is necessary to ensure a graceful shutdown of the dhcpd process when the DHCP resource fails over to a different node. For information about how to set preferred nodes, see the OES 11 SP3: Novell Cluster Services for Linux Administration Guide.
Click OK to save your changes and close the resource's Properties page.
Bring the DHCP resource online again. Click the Cluster Manager tab, select the check box next to the DHCP resource, then click Online.
The resource comes online on the OES 11 SP1 or later node that is listed as its most preferred node, if that node is available.
The monitor script contains commands to monitor the DHCP service. The monitor script appears similar to the following example:
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
exit_on_error status_fs /dev/pool/POOL1 /opt/novell/nss/mnt/.pools/DHCPPOOL nsspool
exit_on_error status_secondary_ipaddress 10.10.2.1
exit_on_error ncpcon volume DHCPVOL
exit 0
Add the following lines before exit 0:
rcnovell-dhcpd status
if test $? != 0; then
    exit_on_error /opt/novell/dhcp/bin/cluster_dhcpd.sh -m <MOUNT_POINT>
fi
exit_on_error rcnovell-dhcpd status
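With those additions in place, the complete monitor script from the example would read similar to the following. The pool, volume, IP address, and the mount point /media/nss/DHCPVOL are the illustrative values used throughout this section:

```
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
exit_on_error status_fs /dev/pool/POOL1 /opt/novell/nss/mnt/.pools/DHCPPOOL nsspool
exit_on_error status_secondary_ipaddress 10.10.2.1
exit_on_error ncpcon volume DHCPVOL
# restart DHCP if it is not running, then verify its status
rcnovell-dhcpd status
if test $? != 0; then
    exit_on_error /opt/novell/dhcp/bin/cluster_dhcpd.sh -m /media/nss/DHCPVOL
fi
exit_on_error rcnovell-dhcpd status
exit 0
```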