
Configuring NFSv4 on Active/Passive iSCSI-based High Availability Clusters

Novell Cool Solutions: AppNote
By Yatin P. Manerker


Posted: 13 Jul 2006
 

This AppNote describes how to configure NFSv4 on iSCSI-based Linux High Availability (HA) clusters.

Table of Contents

  1. Cluster Terminology
  2. Minimum System Requirements
  3. Installation
  4. iSCSI Target Configuration on Linux
  5. iSCSI Initiator Configuration on Linux
  6. Configuring Heartbeat on Nodes
  7. Configuring Resources for NFSv4
  8. Configuration Files

1. Cluster Terminology:

The following are a few of the terms used in clustering.

Clustering:
Clustering is a technique in which two or more servers are interconnected and can access a common storage pool. Cluster services enable you to combine a number of servers into a single group, known as a cluster. One of the primary benefits of clustering is failover: if one server in the cluster fails, another server automatically recovers the downed server's resources and runs in its place.

iSCSI:
The iSCSI protocol combines block-level data movement with TCP/IP networks. By allowing SCSI commands to travel through IP networks, high-speed IP networking technology can carry data from storage units to servers anywhere in a corporate network. iSCSI is also referred to as IP storage.

iSCSI Target and Initiators:
A target is the device configured as the common storage for all the hosts in a network. Any server can be configured as an iSCSI target. In this setup, the target is a virtual disk created on the Linux server that iSCSI initiators can access remotely over an Ethernet connection.

An initiator is any node in the network that is configured to contact the target for services. In general, the target should always be up and running so that any host acting as an initiator can reach it.

Active/Passive Clusters:
In an active/passive cluster, only one node is active at a time; the other nodes receive a resource only in case of failover. When a failover takes place, the resource migrates from the failed node to another node, which restarts all of the resource's services and becomes the active node. When the failed node comes back up, the resource migrates back to its original node; this is called failback.

High Availability (HA):
High availability means keeping a resource available to clients at all times. When one node in a cluster goes down, the resource migrates to another node in the cluster, which takes over servicing it, so the resource remains available. HA clusters are designed for fault tolerance.

2. Minimum System Requirements:

Three server-class machines:

  • Two Linux machines for the iSCSI initiators.
  • One Linux machine for the iSCSI target.

Software requirements:

  • SUSE Linux Enterprise Server 10

3. Installation:

Before installing the target and initiator software, make sure that the machine that will act as the iSCSI target has enough disk space to provide the common storage device. Once the hardware is selected, install the iSCSI target packages from YaST2.

On the other two nodes, install the iSCSI initiator packages. This can be done after installing SLES 10.

Also make sure that the heartbeat packages are installed on all of the nodes. This, too, can be done later.
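
If you prefer the command line, the same packages can be installed with yast from a terminal. The package names below (iscsitarget, open-iscsi, heartbeat) match the init scripts referenced in section 8, but verify them against your installation source:

    # On the target machine:
    yast -i iscsitarget

    # On each initiator node:
    yast -i open-iscsi

    # On all nodes (this can also be done later):
    yast -i heartbeat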

4. iSCSI Target Configuration on Linux

Follow the steps below to configure the iSCSI target server:

  1. Create a block device on the target machine.
    • Type yast2 disk in the terminal.
    • Create a new Linux partition and select Do not format.
    • Do not mount the partition.
    • Select partition size depending upon the usage.




  2. This partition will be used as common storage for all the nodes on the HA clusters.

  3. In the terminal type:
    yast2 iscsi-server
    If the iSCSI target software is not installed on the server, click Continue on the popup window to install it. (This popup window shows only if the iSCSI target server packages are not installed on the system.)

  4. Click on the Service tab and select When Booting in Service Start.

  5. Go to Global and click Next to continue. In the next screen go to Targets, click Add, and enter the path of the partition created in step 1 (for example, /dev/hdax). If a target already exists, delete it and add a new one. (A sample of the resulting target configuration file is shown after these steps.)

  6. Click Next to finish the target server configuration.

  7. To check the iSCSI target, in the terminal type:
    cat /proc/net/iet/volume
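
The YaST steps above write the target definition to /etc/ietd.conf (listed in section 8). Below is a minimal sketch of what that file might contain; the target IQN and the device path are example values, not something YaST will necessarily generate for you:

    # /etc/ietd.conf (example)
    # The IQN below is a made-up example; YaST generates its own.
    Target iqn.2006-07.com.example:storage.ha-disk
        # Export the partition created in step 1 as LUN 0
        Lun 0 Path=/dev/hdax,Type=fileio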

5. iSCSI Initiator Configuration on Linux

Follow the steps below to configure the iSCSI initiators. These steps should be repeated on every node of the HA cluster.

  1. In the terminal type:
     yast2 iscsi-client
    If the iSCSI client is not installed on the server, click Continue on the popup to install the iSCSI initiator software. (This popup shows only if the iSCSI client packages are not installed on the system.)


  2. In the first screen of iSCSI Initiator configuration, click on the Service tab and select When Booting in Service Start.


  3. Click on the Connected Targets tab -> click Add and enter the IP address of the iSCSI target server.

    Click Next to continue, then click Connect. This will change the connection status from FALSE to TRUE.

    Click on Toggle Start-up to change the start-up mode from manual to automatic.




  4. Click on the Discovered Targets tab -> click Discovery. Enter the IP address of the iSCSI target server. Click Next to complete the configuration of the iSCSI Initiator.




  5. Type:
    sfdisk -l
    This will display the iSCSI block device.




  6. Check the status of the Connected Initiators on the target server.

    Type:
     cat /proc/net/iet/session
    This will display the connected initiators.




Note: From the YaST partitioner, select the virtual (iSCSI) drive and create a partition (for example, /dev/sda1). Do not mount the partition. This should be done on one node only. On all of the other nodes, this partition will be visible once the iSCSI client is started.
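
The same initiator configuration can also be done from the command line with open-iscsi's iscsiadm tool (see section 8). A short sketch, assuming 192.168.1.10 is the target server's address and using the example IQN from section 4; substitute your own values:

    # Discover the targets exported by the target server
    iscsiadm -m discovery -t st -p 192.168.1.10

    # Log in to the discovered target
    iscsiadm -m node -T iqn.2006-07.com.example:storage.ha-disk -p 192.168.1.10 --login

    # Verify that the iSCSI block device is now visible
    sfdisk -l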



6. Configuring Heartbeat on Nodes

  1. On node 1, type:
    yast2 heartbeat
    (If heartbeat is not installed, there will be a popup. Click Continue to install heartbeat.)


  2. In the heartbeat node configuration, enter the Node Name. If you have more than two nodes, enter each node name and click Add.




  3. Click Next to continue to Heartbeat Authentication Keys.




  4. Check Broadcast and select Device. Click on Add and then Continue.




  5. In the Heartbeat Start-up Configuration screen, check Start Heartbeat Server Now and When Booting.

    This covers the configuration on node 1.




  6. From the same node, propagate the heartbeat configuration files to every node in the cluster. This is done by typing in the terminal:
    /usr/lib/heartbeat/ha_propagate
    The authkeys and ha.cf files will be propagated to every node of the HA cluster.




  7. Note: You do not have to repeat this configuration on the other nodes of the HA cluster. Once ha_propagate is successful, all of the nodes will have the same configuration files.

  8. Once this is done, go to User and Group Administration and change the password for the hacluster user.

    This user is listed under the system users option and will be used to monitor the cluster resources using the Linux HA Management utility.

    Now we need to start heartbeat on the other nodes. Type:
    /etc/init.d/heartbeat start
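
For reference, the files distributed by ha_propagate live in /etc/ha.d (see section 8). A minimal sketch of what the choices above (broadcast medium, two nodes) might produce; the node names and the authentication key are example values:

    # /etc/ha.d/ha.cf (example)
    bcast eth0            # broadcast heartbeat medium, as selected in YaST
    node node1 node2      # replace with your real host names
    auto_failback on      # failback enabled, matching the default noted in section 7

    # /etc/ha.d/authkeys (example; the file must be readable by root only)
    auth 1
    1 sha1 ExampleSecretKey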

7. Configuring Resources for NFSv4

  1. Start the Heartbeat GUI utility:
    hb_gui
  2. Click on Connection -> Login. A Login window will come up.
    The Username will be displayed as hacluster.
    Enter the password for hacluster.




  3. After a successful login, it will display both of the nodes.




  4. Create Cluster Resources: We need to create four cluster resources for NFSv4.

    Resource 1: Mounting the File System

    Click on Add > New and select native.

    Enter the Resource ID as mountfs and select group. Select the Filesystem resource agent. Click on Add Parameter to add the parameters device name (/dev/sda1) and mount point (/mnt).


    Resource 2: Assigning IP address

    Click on Add > New and select native. Enter the Resource ID as ipaddr and select group. Select IPaddr and enter the Virtual IP Address. This is the secondary IP address to be assigned to the cluster resource.



    Resource 3: Starting idmapd

    Click on Add > New and select native. Enter the Resource ID as idmapd and select group. Select idmapd from the resource.



    Resource 4: Starting nfsserver

    Click on Add > New and select native. Enter the Resource ID as nfsserver and select group. Select nfsserver from the resource.





  5. This completes the Resource Configuration for NFSv4.


  6. NFSv4 Server Configuration:

    On each and every node, modify the /etc/exports file and add the export path. The export path should be the same on all of the nodes of the HA cluster. If you are mounting the shared resource on /mnt, then give /mnt as the export path for NFSv4.


  7. Move the /var/lib/nfs folder to the shared disk and create a symbolic link at /var/lib/nfs that points to the copy on the shared disk (see the example commands after this list):
    nfs -> /mnt/var/lib/nfs


  8. By default, Auto Failover and Failback will be enabled.


  9. Click on Group and start the Resource.
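
Steps 6 and 7 above can be sketched as shell commands, assuming the shared disk is mounted on /mnt as configured in Resource 1. The export options below are example values for an NFSv4 export (fsid=0 marks the NFSv4 pseudo-root); adjust them for your environment:

    # /etc/exports on every node: export the shared mount point for NFSv4
    /mnt    *(rw,fsid=0,sync,no_subtree_check)

    # On the node that currently has /dev/sda1 mounted on /mnt:
    mkdir -p /mnt/var/lib
    mv /var/lib/nfs /mnt/var/lib/
    ln -s /mnt/var/lib/nfs /var/lib/nfs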

8. Configuration Files

iSCSI Target:

  • Restart the iSCSI target manually: /etc/init.d/iscsitarget restart
  • Start the iSCSI target manually: /etc/init.d/iscsitarget start
  • Stop the iSCSI target manually: /etc/init.d/iscsitarget stop
  • Display the iSCSI targets on the server: cat /proc/net/iet/volume
  • Display the initiators connected to the target: cat /proc/net/iet/session
  • iSCSI target configuration file: /etc/ietd.conf
  • iSCSI port: 3260 (this port should be open)
  • List connections to the iSCSI port: netstat -nap | grep ":3260"

iSCSI Initiator:

  • Start the iSCSI initiator: /etc/init.d/open-iscsi start
  • Discover targets: iscsiadm -m discovery -t st -p <IP address>
  • Check the iSCSI virtual disk: sfdisk -l

Heartbeat:

  • Start heartbeat : /etc/init.d/heartbeat start
  • Stop heartbeat : /etc/init.d/heartbeat stop
  • Heartbeat configuration files (ha.cf and authkeys): /etc/ha.d
