Novell Cool Solutions

Making POSIX NCP Volumes Ready For Novell Cluster Services



May 18, 2009 11:45 am


Aim of this AppNote
Requirements
Assumptions
Details of set up used for this AppNote:
Configuration Steps:
1. Creation of Linux POSIX Volume on the shared disk using EVMS

     1.1 Remove the Existing Formatting and Segment Managers
     1.2 Create Cluster Segment Manager (CSM) Container with Cluster Segment Manager container plugin
     1.3 Add a Non-CSM Segment Manager Container
     1.4 Create an EVMS Volume
     1.5 Make a File System on the EVMS Volume
2. Cluster-enable the Linux POSIX Volume created in step 1
3. Creating NCP volume(s) on the Linux POSIX Volume on the shared disk
4. Creating a Virtual NCP Server Object for NCP Volumes
5. Prepare the NCP volume for cluster
6. Demo: Verify the functionality of cluster on the cluster-enabled POSIX NCP Volume

Aim of this AppNote

This AppNote provides a step-by-step approach to configuring an NCP (NetWare Core Protocol) volume on a POSIX file system (ReiserFS or Ext3). NSS (Novell Storage Services) volumes are mounted as NCP volumes as soon as they are created, and an NSS volume created on a shared, cluster-enabled pool is cluster-enabled by default and ready for fail-over or migration between the servers (nodes) of the cluster. Dealing with NCP volumes on NSS is therefore quite simple. This is not the case for NCP volumes created on a POSIX file system: several modifications and preparations are required, which makes the process more prone to error and mistakes. A careful approach with simplified, error-free steps is needed, and that is exactly what this article provides. At the end of the article, a short demo shows how to verify the configuration.

Requirements

  1. Each node in the cluster must be running OES2 Linux.
  2. NCS (Novell Cluster Services) is installed, configured, and running on each node/server of the cluster before the shared disk/partition is configured.
  3. NCP Server is installed on each server/node of the cluster, as NCP is not cluster aware.
  4. iManager 2.7 and NRM (Novell Remote Manager) for Linux must be installed: iManager to configure cluster resources and their load and unload scripts, and NRM to configure and manage the NCP share/volume.

    Note: The words “share” and “volume” are used interchangeably

  5. The NCP volume must be created on, and reside on, a device that is shareable between the nodes/servers of the cluster, and the device (disk or LUN) must be managed by EVMS (Enterprise Volume Management System) with a Cluster Segment Manager (CSM) on it. Hence EVMS must be installed and running on each server in the cluster.
  6. No EVMS objects should be created or modified unless NCS is running.
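Before proceeding, it is worth confirming that NCS is actually up on every node. Below is a minimal console check, assuming the standard OES2 NCS init script and cluster command-line tool are installed:

    # Run as root on each node (wgp-dt81 and wgp-dt82)
    rcnovell-ncs status    # the NCS service should be reported as running
    cluster view           # shows the cluster name and this node's membership
    cluster status         # lists the cluster resources and their current states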

Assumptions

  1. All the requirements (1 to 4) mentioned above are met.
  2. The setup is ready: the cluster (NCS) is already installed and running as described in the next section, "Details of set up used for this AppNote".

Details of set up used for this AppNote:

  1. Cluster name: clusterC and master IP: 164.99.103.193
  2. The two nodes of this cluster are wgp-dt81 (164.99.103.81) and wgp-dt82 (164.99.103.82).
  3. Both nodes connect to two iSCSI targets. (Note: iSCSI targets are used here as the shared devices.)
    1. 1st iSCSI target (shown as device "sde", 2 GB, on node wgp-dt81): used for setting up the cluster clusterC; that is, it stores clusterC-specific data such as the SBD partition.
    2. 2nd iSCSI target (shown as device "sdf", 4 GB, on node wgp-dt81): the raw disk/partition on which the Linux POSIX file system, EVMS volume, and NCP volume will be created. We will therefore focus only on device "sdf" from here on. Note: sdf is not yet initialized. It could also be initialized with the NSSMU tool, but that is not necessary here, because it is taken care of in step 1.1 below.
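Before starting the configuration, it is worth confirming that both nodes actually see the two shared iSCSI disks. A quick console check (device names can differ between nodes, so identify the disks by size):

    # Run as root on each node; the shared disks should be listed
    # (they appear as sde, 2 GB, and sdf, 4 GB, on wgp-dt81 in this setup).
    cat /proc/partitions
    # Confirm the size of the data disk that will hold the POSIX/NCP volume:
    fdisk -l /dev/sdf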

Configuration Steps:

The entire configuration can be subdivided into the following steps.

  1. Creation of a Linux POSIX volume on the shared disk using EVMS.
  2. Cluster-enabling the Linux POSIX volume created in step 1.
  3. Creating NCP volume(s) on the Linux POSIX volume.
  4. Creating a virtual NCP Server object for the NCP volumes.
  5. Preparing the NCP volume for cluster enabling.
  6. Finally, verifying the functionality of the cluster-enabled NCP volume.

Each step involves multiple sub-steps; each one is discussed and shown below.

Assuming that all the requirements mentioned above are met, and keeping the setup details in mind, let us proceed with step 1.

Note:
  • The POSIX file system "reiserfs" is used for this article.
  • All configurations are done from the first node wgp-dt81's NRM and iManager unless otherwise mentioned.

1. Creation of Linux POSIX Volume on the shared disk using EVMS

EVMS provides the Cluster Segment Manager (CSM), which helps prevent data corruption caused by multiple nodes accessing the same data. In a cluster environment, EVMS is therefore the ideal option for creating volumes and file systems on shared storage/devices. So let us use EVMS to create a Linux POSIX volume on our shared device "sdf", as shown below.
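All of the EVMS work in this step is done in the graphical EVMS Administration Utility. For reference, EVMS also ships text-mode front ends, so the same operations can be driven from a console when no X session is available:

    # EVMS user interfaces (run as root on the node where the configuration is done)
    evmsgui    # graphical utility used throughout this article (requires X)
    evmsn      # ncurses text-mode equivalent
    evms       # command-line interpreter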

1.1. Remove the Existing Formatting and Segment Managers

The EVMS CSM (Cluster Segment Manager) must be the first segment manager laid down on the space we want to use for the shared Linux volume, so we must first remove any other segment managers so that the CSM is the first one. Let us do this from the first node, wgp-dt81, of clusterC.

  1. Log in as the root user on wgp-dt81 and type "evmsgui" in the console to launch the EVMS Administration Utility.
  2. As mentioned before, sdf is not initialized yet, so a warning message pops up asking to initialize it. Click Yes to initialize the disk and get to the main administration page, "EVMS Administration Utility".
    1. Click the Disks tab, then right click on the “sdf” and select “Remove segment manager from Object”.

    2. Select the listed non-CSM segment manager, NWSegMgr here, and click Remove.

    3. Then click OK on Operation Results pop up message.

    4. Click Save on the menu bar and the click Save again on the pop up message to save our changes

    5. Repeat sub-steps 1 through 4 until the "Remove segment manager from Object" option is no longer available when you right-click the device sdf. Once all the segment managers have been deleted, we will see the remaining options as follows:

1.2. Create Cluster Segment Manager (CSM) Container with Cluster Segment Manager container plugin

  1. In EVMS GUI, click Actions > Create > Container and get the Create Storage Container page.

  2. On the Create Storage Container page, select Cluster Segment Manager and click Next to get Select Plugin Acceptable Objects page

  3. On the Select Plugin Acceptable Objects page, select the disk (storage object) "sdf", then click Next to get the Configuration Options page. Note that this is the disk we are going to place inside the container.

  4. On the Configuration Options page, select the cluster node, wgp-dt81 where you are creating the container, specify Private as the type, then specify name CSMcontainerForClusterC for the container and click Create.
    Note: The name must be one word but should not be any of the following reserved words: Container, Disk, EVMS, Plugin, Region, Segment, and Volume.

  5. Click OK to close the popup “Operation Completed Successfully” window.
  6. Click Save on the menu bar. Click Save again on the pop up message to save all the changes.
  7. Verify that you can see this container in “Containers” tab.

1.3. Add a Non-CSM Segment Manager Container

Now we can add a non-CSM segment manager on top of the CSM container, CSMcontainerForClusterC, that we created in step 1.2 above. Let us add a DOS Segment Manager as shown below.

  1. In evmsgui, click Actions > Add > Segment Manager to Storage Object to get the Add Segment Manager to Storage Object page.

  2. On the Add Segment Manager to Storage Object page, choose DOS Segment Manager as we want to add this, and click Next.

  3. On the Select Plugin Acceptable Objects page, choose the CSM container storage object, CSMcontainerForClusterC, as it is the container we need to add the segment manager on, and then click Next.

  4. On the Configurable Options page, select the default disk type Linux, click Add, then click OK on the pop up operation completed successfully message window.

  5. Click Save on the menu bar. Click Save again on the pop up message to save your changes
  6. Since we created a DOS Segment Manager, we need to create a segment for it. Let's do this.
    1. On evmsgui, click Actions > Create > Segment.

    2. On the Create Disk Segment page, select DOS Segment Manager, then click Next.

    3. Select the free space in our container, CSMcontainerForClusterC/sdf.freespace1; there is no need to click Next, as selecting it takes us directly to the next page.

    4. Click Next.

    5. Specify the Size of the segment (I have used the full disk space) and the Partition Type as Linux, click Create, and then click OK.

    6. Click OK on the pop up Operation Completed Successfully window.
    7. Click Save in the menu bar, and then click Save again on the pop up message to save all our changes.

1.4. Create an EVMS Volume

  1. Open evmsgui.

    Click Actions > Create > EVMS Volume.

  2. On the Create EVMS Volume page, select the container storage object CSMcontainerForClusterC/sdf1 that we just created, specify a name (ReiserFSVolume here), and click Create.

  3. Click OK on pop up successful operation result window.
  4. Click Save in the menu bar, then click Save again on the pop up message to save all the changes.
  5. Click the Volumes tab to verify that the EVMS volume ReiserFSVolume is listed, along with its full path /dev/evms/CSMcontainerForClusterC/ReiserFSVolume.

1.5. Make a File System on the EVMS Volume

  1. In evmsgui, click the Disks tab, then activate the CSM container CSMcontainerForClusterC/sdf
    1. On the Disks page, right-click the CSM container, CSMcontainerForClusterC/sdf, then select Activate.

    2. On the Activate page, select the CSM container CSMcontainerForClusterC/sdf and click Activate.

    3. Click OK on the successful Operation Results page.
    4. Click Save on the menu bar, then click Save again on the pop up message to save your changes. Click OK on the Operation Results page.
  2. Click the Volumes tab, then activate the EVMS volume:
    1. On the Volumes page, right-click the EVMS volume, ReiserFSVolume and select Activate.

    2. On the Activate page, select the volume, ReiserFSVolume and click Activate.

    3. Click OK on the successful Operation Results window.
    4. Click Save on the menu bar, then click Save again on the pop up message to save your changes.
  3. Make the file system on the EVMS volume:
    1. On the Volumes page, right-click the ReiserFSVolume volume, then select Make File System

    2. On the Make File System page, choose Linux POSIX file system interface module, ReiserFS File System Module from the list and click Next.

    3. Specify a volume label (VOL1, can be any name) and click Make.

    4. Click OK on the pop up successful Operation Results window.
    5. Click Save on the menu bar, then click Save again on the pop up message to save all the changes.
    6. The file system type is now listed under the Plugin column.

  4. Mount the volume:
    1. On the Volumes page, right-click the volume, ReiserFSVolume, then select Mount.

    2. On the Mount File System page, select the volume, specify the Linux path to use for the mount point (I have selected /ncptest/ncpvolumes/ as the mount point for this EVMS volume), and click Mount.

    3. Click OK on the successful Operation Results window.
    4. Verify that the mount point now appears in the Mount Point column of the Volumes tab.
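The result can also be checked quickly from a terminal on wgp-dt81:

    # The EVMS volume device node should now exist:
    ls -l /dev/evms/CSMcontainerForClusterC/
    # And the ReiserFS file system should be mounted on the chosen mount point:
    mount | grep ncptest
    df -h /ncptest/ncpvolumes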

This completes step 1: creation of the Linux POSIX file system and volume on the shared disk using EVMS.

2. Cluster-enable the Linux POSIX Volume created in step 1

To cluster-enable the Linux POSIX volume, we need to create a cluster resource for it through which we can control the behavior of the EVMS container and its volumes.

The EVMS container is the unit of failover, so all the volumes in a container fail over together. However, only the volumes that are mounted in the Load script of the resource are mounted automatically on failover; other volumes in the container that are not mounted through the Load script need to be mounted manually.

So let us create the cluster resource for the Linux POSIX volume, modify its associated scripts (load script, unload script, and monitor script), and define its policies. Continue with the next step.

  1. In iManager, click Clusters > Cluster Options.
  2. Browse and select the Cluster object “clusterC”, the cluster where we want to add a cluster resource.

    Click New to get the New Resource Page

  3. Select Resource as the resource type and click Next.

  4. Specify a name for the resource (it can be any name); here I have used clusterC_RFSVOL1_SERVER. In the Inherit From Template field, browse to the container where the Cluster container object resides, then locate and select the Generic_FS_Template in that container.

    Deselect Online Resource after Create, since we need to configure its scripts before bringing it online for the first time. Select Define Additional Properties and click Next to get the Load Script page.

    Note/Tips: From here on, our task for the rest of step 2 is to update the load, unload, and monitor scripts. The key is simply to update the default values of the parameters (RESOURCE_IP, container_name, MOUNT_DEV, MOUNT_POINT), wherever they appear in the scripts, to the values below, which match our configuration from step 1:

    RESOURCE_IP=164.99.103.89
    container_name=CSMcontainerForClusterC
    MOUNT_DEV=/dev/evms/$container_name/ReiserFSVolume
    MOUNT_POINT=/ncptest/ncpvolumes
  5. On the Load Script page, edit or add the necessary commands to the script as shown below, to load the resource on the server.

    The default/inherited Load script is given below:

    #! /bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    # define the IP address
    RESOURCE_IP=a.b.c.d  <--- will be updated to 164.99.103.89, a unique IP
    # define the file system type
    MOUNT_FS=reiserfs
    #define the container name
    container_name=name  <--- will be updated to CSMcontainerForClusterC
    # define the device
    MOUNT_DEV=/dev/evms/$container_name/volume_name  <--- will be updated to /dev/evms/$container_name/ReiserFSVolume
    # define the mount point
    MOUNT_POINT=/mnt/mount_point  <--- will be updated to /ncptest/ncpvolumes
    # activate the container
    exit_on_error activate_evms_container $container_name $MOUNT_DEV $NCS_TIMEOUT
    # mount the file system
    exit_on_error mount_fs $MOUNT_DEV $MOUNT_POINT $MOUNT_FS
    # add the IP address
    exit_on_error add_secondary_ipaddress $RESOURCE_IP
    exit 0
    
    
    Note: If the mount point defined by the variable $MOUNT_POINT does not exist on the other nodes, we can add the line "ignore_error mkdir -p $MOUNT_POINT" to the script just before the line that mounts the file system ("exit_on_error mount_fs $MOUNT_DEV $MOUNT_POINT $MOUNT_FS"). Let us add this line to the updated Load script shown below.

    So below is the updated Load script:

    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    
    # define the IP address
    RESOURCE_IP=164.99.103.89
    # define the file system type
    MOUNT_FS=reiserfs
    #define the container name
    container_name=CSMcontainerForClusterC
    # define the device
    MOUNT_DEV=/dev/evms/$container_name/ReiserFSVolume
    # define the mount point
    MOUNT_POINT=/ncptest/ncpvolumes
    
    #activate the container
    exit_on_error activate_evms_container $container_name $MOUNT_DEV $NCS_TIMEOUT
    
    # create the mount point if it does not exist
    ignore_error mkdir -p $MOUNT_POINT
    
    # mount the file system
    exit_on_error mount_fs $MOUNT_DEV $MOUNT_POINT $MOUNT_FS
    
    # add the IP address
    exit_on_error add_secondary_ipaddress $RESOURCE_IP
    exit 0
    
    
  6. Specify the Load Script Timeout value. I left this at the default value of 1 minute. Then click Next to move to the Unload Script section.
  7. On the Unload Script page, edit or add the necessary commands to the script to unload or stop the resource on the server.

    The default/inherited Unload script is given below:

    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    # define the IP address
    RESOURCE_IP=a.b.c.d  <--- will be updated to 164.99.103.89, a unique IP
    # define the file system type
    MOUNT_FS=reiserfs
    #define the container name
    container_name=name  <--- will be updated to CSMcontainerForClusterC
    # define the device
    MOUNT_DEV=/dev/evms/$container_name/volume_name  <--- will be updated to /dev/evms/$container_name/ReiserFSVolume
    
    # define the mount point
    MOUNT_POINT=/mnt/mount_point  <--- will be updated to /ncptest/ncpvolumes
    
    # Unmount the volume
    sleep 10 # if not using SMS for backup, please comment out this line
    exit_on_error umount_fs $MOUNT_DEV $MOUNT_POINT $MOUNT_FS
    # delete the IP address
    ignore_error del_secondary_ipaddress $RESOURCE_IP
    # deactivate the container
    exit_on_error deactivate_evms_container $container_name $NCS_TIMEOUT
    # return status
    exit 0 
    
    

    So below is the updated Unload script:

    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    
    # define the IP address
    RESOURCE_IP=164.99.103.89
    # define the file system type
    MOUNT_FS=reiserfs
    #define the container name
    container_name=CSMcontainerForClusterC
    # define the device
    MOUNT_DEV=/dev/evms/$container_name/ReiserFSVolume
    # define the mount point
    MOUNT_POINT=/ncptest/ncpvolumes
    
    # unmount the volume
    sleep 10 # if not using SMS for backup, please comment out this line
    exit_on_error umount_fs $MOUNT_DEV $MOUNT_POINT $MOUNT_FS
    
    # del the IP address
    ignore_error del_secondary_ipaddress $RESOURCE_IP
    
    #deactivate the container
    exit_on_error deactivate_evms_container $container_name $NCS_TIMEOUT
    
    # return status
    exit 0
    
    
  8. Specify the Unload Script Timeout value. I left this at the default value of 1 minute. Then click Next to move to the Monitor Script section.
  9. On the Monitor Scripts page, edit or add the necessary commands to the script to monitor the resource on the server. Resource monitoring is disabled by default and needs to be enabled manually if required.

    The default/inherited Monitor script is given below:

    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    # define the IP address
    RESOURCE_IP=a.b.c.d  <--- will be updated to 164.99.103.89
    # define the file system type
    MOUNT_FS=reiserfs
    #define the container name
    container_name=name  <--- will be updated to CSMcontainerForClusterC
    # define the device
    MOUNT_DEV=/dev/evms/$container_name/volume_name  <--- will be updated to /dev/evms/$container_name/ReiserFSVolume
    
    # define the mount point
    MOUNT_POINT=/mnt/mount_point  <--- will be updated to /ncptest/ncpvolumes
    
    # test the file system
    exit_on_error status_fs $MOUNT_DEV $MOUNT_POINT $MOUNT_FS
    # status the IP address
    exit_on_error status_secondary_ipaddress $RESOURCE_IP
    exit 0
    
    

    So below is the updated Monitor script:

    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    
    # define the IP address
    RESOURCE_IP=164.99.103.89
    # define the file system type
    MOUNT_FS=reiserfs
    #define the container name
    container_name=CSMcontainerForClusterC
    # define the device
    MOUNT_DEV=/dev/evms/$container_name/ReiserFSVolume
    # define the mount point
    MOUNT_POINT=/ncptest/ncpvolumes
    
    # test the file system
    exit_on_error status_fs $MOUNT_DEV $MOUNT_POINT $MOUNT_FS
    
    # status the IP address
    exit_on_error status_secondary_ipaddress $RESOURCE_IP
    
    exit 0
    
    
  10. Now specify the Monitor Script Timeout value, and then click Next to move to Resource Policies section.
  11. On the Resource Policies page, modify the default policy settings for the resource if required. I have kept the default settings. Click Next.

  12. Click Finish to complete the configuration.

  13. Now we can see the resource with the status "Offline" on the Cluster Manager page of clusterC (screenshot below).
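The same status can be seen from a node's console, assuming the standard NCS cluster command-line tool:

    # Run on any node of clusterC; the new resource should be listed as Offline.
    cluster status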

This completes step 2: cluster-enabling the Linux POSIX volume.

3. Creating NCP volume(s) on the Linux POSIX Volume on the shared disk

On the first node in the cluster, you must create the NCP volume once in order to create its Volume object in Novell eDirectory.

  1. Log in to NRM at https://164.99.103.81:8009
  2. In NRM, click Manage NCP Services > Manage Shares

    Now click Create New Share button.

  3. In Volume Name, type the name of the NCP volume, RfsVol1 (it will be converted to all caps; see below), fill in the path "/ncptest/ncpvolumes/vol1", and check "Create if not present", as the path refers to a new directory inside /ncptest/ncpvolumes/; with this option a directory "vol1" will be created under "/ncptest/ncpvolumes". Leave the other fields (Shadow Paths, Attributes) at their defaults, since we are not dealing with DST (Dynamic Storage Technology) at this point, and then click OK.

  4. Click OK to confirm the creation of the NCP volume (share). This will take you to the NCP share page.

  5. Verify that the share, RFSVOL1 (all CAPS) is created successfully in NCP shares page (screenshot below).

  6. Click the info icon (i) next to RFSVOL1 and verify that the file system of this share is reiserfs (screenshot below).
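The new share can also be verified from the server console, assuming the standard OES2 ncpcon utility:

    # Run as root on wgp-dt81; RFSVOL1 should appear in the list of NCP volumes.
    ncpcon volumes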

4. Creating a Virtual NCP Server Object for NCP Volumes

In NSS, when an NSS pool is cluster-enabled, a virtual NCP Server object named in the format "<ClusterObjectName>_<PoolName>_server" is automatically created in eDirectory. For example, if the cluster name is clusterA and the cluster-enabled pool name is PoolA, the default virtual server name is clusterA_PoolA_server. In the case of NCP volumes over a POSIX file system, however, this object is not created automatically, so we need to create it manually. This object makes it easy for NCP clients to access or browse all the NCP volumes running on it.

So let us create an NCP Server object in eDirectory with the name we want to use for the cluster resource. Although we can use any name, we will follow the same naming convention mentioned above.

  1. In iManager, select Directory Administration > Create Object. Check Show all object classes, select NCP Server from the list of Available Object Classes, and click OK.

  2. Specify the context where this NCP Server object will reside. I have selected SITE3, where the clusterC object resides, given it the name NCP_RFSVOL1_SERVER, and clicked OK.

  3. Click OK to close the page.

  4. Now verify that the NCP Server object has been created in SITE3. To view objects, click the View Objects icon in iManager and then click the context where the object resides (SITE3 here).

  5. Note: Once we create this NCP Server object, we need to bind to it the cluster resources that will run on it. To do this we modify the Load script of the Linux POSIX volume cluster resource by adding a line that contains the virtual NCP server name. Here we will add the line "exit_on_error ncpcon bind --ncpservername=NCP_RFSVOL1_SERVER --ipaddress=164.99.103.89" to the Load script of the cluster resource we created in step 2. We will see this line in the next section, step 5. Continue with step 5.

5. Prepare the NCP volume for cluster

In a cluster environment, mounting and dismounting of the NCP shares should be controlled only by the Load and Unload scripts of the resource. To make this happen we need to: dismount the NCP share; make sure the mount point (/ncptest/ncpvolumes/vol1) exists on every server of the cluster; and remove the line "VOLUME <VolumeName> <mountPath>" (in our case "VOLUME RFSVOL1 /ncptest/ncpvolumes/vol1") from the ncpserv.conf file on every node of the cluster. Also, as noted at the end of step 4, we need to update the Load and Unload scripts to bind and unbind the resource with the virtual NCP Server object. This is what we are going to do in this section.

  1. In NCP Shares page of NRM, locate the share RFSVOL1.

    To dismount the volume, click the Unmount button next to RFSVOL1. This button toggles to Mount after the dismount (screenshot below).

  2. Create the mount point of the NCP share, /ncptest/ncpvolumes/vol1, on all the other nodes of the cluster. In our case it already exists on the first node, wgp-dt81, of our cluster clusterC, so we only need to create this path on the other node, wgp-dt82. To do this, open a terminal console as the root user on wgp-dt82 and type "mkdir -p /ncptest/ncpvolumes/vol1".
  3. Remove or comment out the line "VOLUME RFSVOL1 /ncptest/ncpvolumes/vol1" from the /etc/opt/novell/ncpserv.conf file on all the nodes of clusterC (wgp-dt81 and wgp-dt82).
  4. Restart the eDirectory (ndsd) daemon and the NCP/NSS IPC daemon (ncp2nss) by entering the following commands in a terminal console:

    rcndsd restart

    /etc/init.d/ncp2nss restart

    This is not required if ncpserv.conf was not modified. (Steps 2 to 4 are consolidated into the command sketch after this list.)
  5. And now we need to update the cluster resource's Load, Unload, and Monitor scripts to control the mounting and dismounting of the NCP share RFSVOL1.
Notes/Tips: As mentioned above, all we will be adding to the load and unload scripts are the lines that bind/unbind the virtual server and mount/dismount the NCP volume/share. So we just need to define the variables and add the corresponding command lines as shown below.
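For convenience, here is a consolidated sketch of steps 2 to 4 above as console commands, to be run as root on every node of clusterC. The sed one-liner is just one way to comment out the volume line; editing /etc/opt/novell/ncpserv.conf by hand works equally well.

    # Run as root on each node (wgp-dt81 and wgp-dt82).
    # Make sure the NCP mount point exists:
    mkdir -p /ncptest/ncpvolumes/vol1
    # Comment out the volume definition so the share is no longer mounted outside cluster control:
    sed -i 's|^VOLUME RFSVOL1 /ncptest/ncpvolumes/vol1|#&|' /etc/opt/novell/ncpserv.conf
    # Restart eDirectory and the NCP/NSS IPC daemon so the change takes effect
    # (only needed if ncpserv.conf was actually modified):
    rcndsd restart
    /etc/init.d/ncp2nss restart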

So let us start with the Load script.

In the load script, add the lines below.

Define the variables:

# define NCP volume
NCP_VOLUME=RFSVOL1
# define the NCP mount point
NCP_MOUNT_POINT=/ncptest/ncpvolumes/vol1
# define NCP server name
NCP_SERVER=NCP_RFSVOL1_SERVER

Then add the two lines below to mount the NCP volume and bind it to the virtual NCP server.

# mount the NCP volume
exit_on_error ncpcon mount $NCP_VOLUME=252,PATH=$NCP_MOUNT_POINT
# bind the NCP volume
exit_on_error ncpcon bind --ncpservername=$NCP_SERVER --ipaddress=$RESOURCE_IP

Below is our final Load script of the cluster resource updated with NCP commands to bind and mount the shares:

Note: The NCP-specific additions (the NCP variable definitions and the ncpcon mount and bind commands) are set off with #------- dividers.

#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
# define the IP address
RESOURCE_IP=164.99.103.89
# define the file system type
MOUNT_FS=reiserfs
#define the container name 
container_name=CSMcontainerForClusterC
# define the device
MOUNT_DEV=/dev/evms/$container_name/ReiserFSVolume
#define the mount point
MOUNT_POINT=/ncptest/ncpvolumes
#-------------------------------
# define NCP volume
NCP_VOLUME=RFSVOL1
# define the NCP mount point
NCP_MOUNT_POINT=/ncptest/ncpvolumes/vol1
# define NCP server name
NCP_SERVER=NCP_RFSVOL1_SERVER
#-------------------------------
#activate the container
exit_on_error activate_evms_container $container_name $MOUNT_DEV $NCS_TIMEOUT
#-------------------------------
# create the EVMS volume mount point if it does not exist
ignore_error mkdir -p $MOUNT_POINT
#-------------------------------
# mount the file system
exit_on_error mount_fs $MOUNT_DEV $MOUNT_POINT $MOUNT_FS
# add the IP address
exit_on_error add_secondary_ipaddress $RESOURCE_IP
#-------------------------------
# mount the NCP volume
exit_on_error ncpcon mount $NCP_VOLUME=252,PATH=$NCP_MOUNT_POINT
# bind the NCP volume
exit_on_error ncpcon bind --ncpservername=$NCP_SERVER --ipaddress=$RESOURCE_IP
#-------------------------------
exit 0

Now let us continue with the Unload script.

In the unload script, add the lines below.

Define the variables:

# define NCP volume
NCP_VOLUME=RFSVOL1
# define NCP server name
NCP_SERVER=NCP_RFSVOL1_SERVER

Then add the two lines below to unbind and dismount the NCP volume.

# unbind the NCP volume
ignore_error ncpcon unbind --ncpservername=$NCP_SERVER --ipaddress=$RESOURCE_IP
# dismount the NCP volume
ignore_error ncpcon dismount $NCP_VOLUME

Below is our final Unload script of the cluster resource updated with NCP commands to unbind and dismount the shares:

Note: The NCP-specific additions (the NCP variable definitions and the ncpcon unbind and dismount commands) are set off with #------- dividers.

#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs

# define the IP address
RESOURCE_IP=164.99.103.89
# define the file system type
MOUNT_FS=reiserfs
#define the container name
container_name=CSMcontainerForClusterC
# define the device
MOUNT_DEV=/dev/evms/$container_name/ReiserFSVolume
#define the mount point
MOUNT_POINT=/ncptest/ncpvolumes
#-----------------------------------------------
# define NCP volume
NCP_VOLUME=RFSVOL1
# define NCP server name
NCP_SERVER=NCP_RFSVOL1_SERVER

# unbind the NCP volume
ignore_error ncpcon unbind --ncpservername=$NCP_SERVER --ipaddress=$RESOURCE_IP
# dismount the NCP volume
ignore_error ncpcon dismount $NCP_VOLUME
#-----------------------------------------------

# unmount the volume
sleep 10 # if not using SMS for backup, please comment out this line
exit_on_error umount_fs $MOUNT_DEV $MOUNT_POINT $MOUNT_FS

# del the IP address
ignore_error del_secondary_ipaddress $RESOURCE_IP

#deactivate the container
exit_on_error deactivate_evms_container $container_name $NCS_TIMEOUT

# return status
exit 0

And finally the Monitor script: nothing needs to be added or modified in it for NCP, so leave it as it is.

This completes step 5: preparing the NCP volume for the cluster.

6. Demo: Verify the functionality of cluster on the cluster-enabled POSIX NCP Volume

With step 5 complete, we are ready to bring the cluster resource online so that we can start accessing the NCP volume RFSVOL1 from NCP clients, and check whether the client(s) can still access the same volume with minimal downtime during resource migration or fail-over.

  1. In iManager, click Clusters > Cluster Manager to bring up the Cluster Manager page.

    Browse to or type in clusterC to see all the resources of this cluster. Select the resource clusterC_RFSVOL1_SERVER and click Online.

  2. Select the Online Node target. I have selected the first node, wgp-dt81, here. Click OK.

    Wait until the resource comes online and is running on the selected server, wgp-dt81 here (screenshot below).

  3. Now let us map the NCP volume from the Novell Client (red N). I have the Novell Client installed on Windows; let us use it to map the NCP share/volume RFSVOL1. To do this, right-click the Novell Client icon (red N) and select the Novell Map Network Drive... option to launch the map drive dialog.

  4. Fill in the path of the NCP share as "164.99.103.89/RFSVOL1". Note that 164.99.103.89 is the IP address of the cluster resource clusterC_RFSVOL1_SERVER, not the IP address of any of the servers in clusterC. Then click the Map button to get to the Novell Login page.

  5. Fill in the user name, password, and context of the user who will map the volume. Here I am using the admin user. Then click OK.

  6. Now we can see the files and folders inside this volume and can perform some file I/O operations.

  7. Now let us check connectivity and I/O while the resource migrates from node wgp-dt81 to wgp-dt82. If our cluster configuration for this NCP volume is correct, we will only lose the connection temporarily during the transition, i.e. until the resource is online on the other node, wgp-dt82; during that time no I/O operations are possible. Once the resource is online on the new node, we should be able to continue our I/O operations. If we are not reconnected, or the resource does not come online on the other server, we need to recheck our configuration. So let us migrate the resource from the current node, wgp-dt81, to the other node, wgp-dt82. To do this, open the Cluster Manager page of clusterC (screenshot below).
  8. Check the checkbox next to the resource clusterC_RFSVOL1_SERVER and click Migrate.

  9. Select the target node and click OK. Here wgp-dt82 is the only other server, so it is selected by default.

    Once we click OK, the resource is unloaded from wgp-dt81; during this time we are disconnected and no I/O operations can be performed.

    I got a disconnection message during this transition. Wait until the resource comes online on wgp-dt82 as shown below.

  10. Once the resource came online on the new node, wgp-dt82, I could access the same NCP volume and continue with my I/O. Cool! This means that my configuration is correct and working.
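The migration can also be triggered from a node's console instead of iManager, assuming the standard NCS cluster command-line tool:

    # Move the resource to wgp-dt82 and watch it come back online:
    cluster migrate clusterC_RFSVOL1_SERVER wgp-dt82
    cluster status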

Cool! At this point we are done configuring an NCP volume over POSIX (reiserfs was used for this article) for fail-over and migration in a cluster (Novell Cluster Services) environment. The same procedure works for ext3, as sketched below.
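For ext3 the only differences are the file system module chosen in step 1.5 (select the Ext2/Ext3 module instead of the ReiserFS module when making the file system) and the file system type defined in the three resource scripts:

    # In the load, unload, and monitor scripts, define the file system type as ext3:
    MOUNT_FS=ext3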


Disclaimer: This content is not supported by Novell. It was contributed by a community member and is published "as is." It seems to have worked for at least one person, and might work for you. But please be sure to test it thoroughly before using it in a production environment.

1 Comment

  1. By: fowlen

    Speaking as someone who spent many an hour making this work pre-SP1, it took a number of calls to support to finally get the syntax for the ncpcon command, as follows.

    ncpcon mount "Volume" volid=252@/mnt/volume1

    After I upgraded one of my cluster nodes and found the resource would not load, this doc provided the solution!!!!!!!

    But please can you tell me why the syntax was changed yet again?

    thanks
    Nigel
