To get the full benefit of using Novell Native File Access for UNIX with Novell Cluster Services™, the software must be installed and configured to work in a cluster environment.
This section describes the prerequisites and the installation and configuration tasks for that environment.
Before installing Native File Access for UNIX with cluster support, create at least one shared pool and at least one volume in that pool.
Create the directory SYS:\NFSBACK.
Make a backup copy of the configuration files in this directory.
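For example, assuming TOOLBOX.NLM is loaded at the server console (it supplies basic mkdir and copy commands; the files can also be copied from a workstation with a mapped drive), the backup of the NFS configuration files referenced later in this section might look like this:
mkdir SYS:\NFSBACK
copy SYS:\ETC\NIS.CFG SYS:\NFSBACK\NIS.CFG
copy SYS:\ETC\NFS.CFG SYS:\NFSBACK\NFS.CFG
copy SYS:\ETC\NFSSERV.CFG SYS:\NFSBACK\NFSSERV.CFG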
Create a new sharable NSS partition.
In this partition, create a pool. In the dialog box that is displayed, enter a name for the virtual server and its IP address. Do not use nfsclust as the name; it is a reserved word.
NOTE: ConsoleOne creates a virtual server named clusterobjectname_givenservername, and also creates a cluster volume resource object named givenservername inside the Cluster object.
Create a Cluster Pool and its associated objects from ConsoleOne. To do this:
From the Tools menu, select Disk Management > NSS Pools.
Specify the tree, context, and server, then click OK.
Click New to create a new pool. Give it a name. Click Next.
Select the storage device to be used and adjust the used space as needed, then click Next.
The pool attribute information appears. Make sure that the Cluster Enable on Creation box is checked. The Virtual Server name is built automatically.
Enter the IP address to be assigned to this virtual server (used for this shared pool).
Add any additional desired Advertising Protocols, then click Finish.
From the Media tab, select NSS Logical Volumes.
Click New and create at least one volume within the pool.
IMPORTANT: Instead of a pool object in the normal format <servername>_<poolname>_POOL, it will be named <clustername>_<poolname>_POOL. A virtual server object associated with the shared pool will be created, called <clustername>_<poolname>_SERVER. ConsoleOne also creates a Cluster Pool resource object called <poolname>_SERVER, inside the Cluster Container object.
For example, given a cluster named NFSC, shared pool named NFSP, and volume named VOL1, the objects seen would be:
Cluster container object: NFSC
Pool Object: NFSC_NFSP_POOL
Virtual Server Object: NFSC_NFSP_SERVER
Volume Object: NFSC_VOL1
Cluster pool resource object within cluster container: NFSP_SERVER
Install Native File Access For UNIX on all the nodes in the cluster.
On each node of the cluster, if the NFS Services are running, run NFSSTOP, unload NFSADMIN and PKERNEL, and remove NFSSTART from AUTOEXEC.NCF.
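For example, at each node's server console the commands would look something like this:
nfsstop
unload nfsadmin
unload pkernel
Then edit SYS:\SYSTEM\AUTOEXEC.NCF and remove (or comment out) the NFSSTART line.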
Delete all the NISSERV_servername objects in eDirectory.
To cluster enable and upgrade the configuration, use SPINST.
To cluster enable for the first time, execute the following command on each node, one by one. Make sure the shared volume resides on the node at the time you run the command:
spinst -o 2 -v SHARE_VOL_NAME: -n RES_NAME -i RES_IP
Using the example names given in the prerequisites section, and assuming the address 10.2.3.4 is assigned to the shared pool, the command would be:
spinst -o 2 -v VOL1: -n NFSP_SERVER -i 10.2.3.4
To upgrade from a previously cluster-enabled setup, execute the following command on each node, one by one. Make sure the shared volume resides on the node at the time you run the command:
spinst -o 3 -v SHARE_VOL_NAME: -n RES_NAME -i RES_IP
In the command, you need to specify the shared volume name for -v, the resource name for -n and the resource IP address for -i.
Using the example names given in the prerequisites section, the command would be:
spinst -o 3 -v VOL1: -n NFSP_SERVER -i 10.2.3.4
Create an ETC directory on the shared volume, then copy the following files to shared_volume:\ETC\ (see the sketch after this list):
sys:\etc\nis.cfg
sys:\etc\nfs.cfg
sys:\etc\nfsserv.cfg
sys:\system\nfsstart.ncf
sys:\system\nfsstop.ncf
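For the example names used in this document, and again assuming TOOLBOX.NLM (or a workstation with a mapped drive) is used for the copies, this step might look like the following sketch:
mkdir VOL1:\ETC
copy SYS:\ETC\NIS.CFG VOL1:\ETC\NIS.CFG
copy SYS:\ETC\NFS.CFG VOL1:\ETC\NFS.CFG
copy SYS:\ETC\NFSSERV.CFG VOL1:\ETC\NFSSERV.CFG
copy SYS:\SYSTEM\NFSSTART.NCF VOL1:\ETC\NFSSTART.NCF
copy SYS:\SYSTEM\NFSSTOP.NCF VOL1:\ETC\NFSSTOP.NCF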
Within the Cluster container object (Console view), right-click the Cluster Pool resource object and then click Properties. Select the Scripts tab to find the Cluster Resource Load Script and Cluster Resource Unload Script. Following are the formats for these scripts.
To the load script, add the following at the end of the existing script:
nfsclust AAA.BBB.CCC.DDD shared_vol_name shared_pool_name_SERVER
shared_vol_name:\ETC\NFSSTART
For the example names used in this document, the specific commands would be:
nfsclust 10.2.3.4 VOL1 NFSP_SERVER
VOL1:\ETC\NFSSTART
To the unload script, add the following at the beginning of the existing script:
shared_vol_name:\ETC\NFSSTOP
unload nfsclust
unload nfsadmin
delay 2
unload pkernel
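For the example names used in this document, the lines added to the unload script would be:
VOL1:\ETC\NFSSTOP
unload nfsclust
unload nfsadmin
delay 2
unload pkernel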
NOTE: A small delay might be needed before PKERNEL can unload, to allow dependent modules to finish unloading first; the delay command in the script serves this purpose. If the unload pkernel command fails, the pool might go comatose rather than migrate successfully.
The following table explains the different resource modes.
To view or change the Start, Failover, and Failback modes, do the following:
In ConsoleOne, double-click the cluster object container.
Right-click the Cluster Pool resource object (poolname_SERVER; NFSP_SERVER in the example) and select Properties.
Click the Policies tab on the property page.
View or change the Start, Failover, or Failback mode.
The procedure to configure the components of Native File Access for UNIX is much the same as when you configure the components without cluster services. However, some points must be kept in mind while configuring the following components:
For the location of the configuration files for Native File Access for UNIX with and without Cluster Services, see Location of Configuration Files.
While configuring the NFS Server, note the following:
For more information on configuring the NFS Server, see ConsoleOne-Based Management for NFS Server.
While configuring the NIS clients, note the following:
Most of the configuration files are now located in the shared volume's ETC directory. The following table lists the location with and without the cluster services.
Table 3. Location of Configuration Files
To start NFS Services, from ConsoleOne click the Cluster object > View > Cluster State, then bring the cluster volume resource object online.
To stop NFS Services, from ConsoleOne click the Cluster object > View > Cluster State, then take the cluster volume resource object offline.
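If you prefer to work from the server console, Novell Cluster Services also provides CLUSTER ONLINE and CLUSTER OFFLINE console commands; assuming the example resource name, the equivalent commands would be roughly:
cluster online NFSP_SERVER
cluster offline NFSP_SERVER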
For additional information on setting up and configuring Novell Cluster Services, see the Novell Cluster Services Overview and Installation Guide.