
How to Build a Mixed OES Cluster using iSCSI

Novell Cool Solutions: Feature
By Andreas Ollenburg


Posted: 4 May 2006
 

This tutorial shows you how to create a cluster from servers running OES NetWare and OES Linux. The SAN runs over iSCSI, with another OES NetWare server acting as the iSCSI target.

Why the SAN on iSCSI?

iSCSI has evolved into a good solution for companies that want solid shared storage but cannot afford "big" solutions built on Fibre Channel. Creating a dedicated LAN for iSCSI running at at least 1 Gbit/s is cheaper and easier to implement, though not as stable and fast as Fibre Channel.

Why the iSCSI target on NetWare?

There is open source iSCSI target software for Linux, but it is still not stable and intended for experimental use only. So you either have to buy one of the iSCSI appliances available on the market, or you can use NetWare. And why buy yet another third-party solution when you already have a stable, solid one at hand?

The demo works as follows:

  1. Create a Tree
  2. Prepare the iSCSI Target
  3. Prepare the NetWare Initiators
  4. Create the Cluster
  5. Prepare the Linux Initiator
  6. Add the OES Linux Server to the Cluster
  7. Conclusion

1. Create a Tree

First of all we need a tree running eDirectory with at least the four servers we need for the cluster. I will not go into much detail here, as you may already know how to do that. If not, there is a good description in the OES documentation.

There are a few specifics regarding the software that has to be installed on the servers:

  • The OES NetWare target just needs the iSCSI target software. There is a patterned deployment option when installing the server.
  • The OES Linux node needs NSS installed during installation. Also install the iSCSI packages found under the "Various Linux tools" selection (a quick check is shown right after this list). Do not install Novell Cluster Services yet; we will do that later.
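If you want to confirm that the initiator packages actually made it onto the OES Linux node, a simple RPM query does the job. The package name linux-iscsi is an assumption here; check your installation source if your media names it differently.

rpm -q linux-iscsi      # package name assumed; adjust to match your install media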

2. Prepare the iSCSI Target

On the system console, enter ton.ncf to load the modules responsible for the target software. After all NLMs are loaded, edit the autoexec.ncf and add the ton.ncf command near the end of the file.
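As a small sketch, the end of the target's autoexec.ncf could then look roughly like this (lines starting with # are comments in NCF files; the rest of your file will of course differ):

# Load the iSCSI target modules at boot
ton.ncf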

You may have to configure the target for LDAP authentication. Open Remote Manager on the target and follow the iSCSI link in the left frame. In the main frame, click the LDAP link, log in via LDAP, and enter the required LDAP information.

Now we need an iSCSI partition to be used as the shared device by the cluster nodes. You can create one via Remote Manager or NSSMU.

After the partition is created, you will find an eDirectory object for the iSCSI target in the target server's context. This object is where the target server checks whether an initiator is allowed to connect. To allow a server to connect, add the NCP Server objects representing the initiators as trustees with default rights on that target object.

3. Prepare the NetWare Initiators

Next, we will configure the OES NetWare servers to act as iSCSI initiators and connect to the target.

On the system console, type ion.ncf. This will load all modules for the initiator. When the modules are loaded, add this command to the autoexec.ncf.

You can connect to the target using Remote Manager. Under the iSCSI link on the left, you can connect to the target and add the iSCSI partition. If you cannot see the partition, review your trustee assignments.

Add the following command either to the autoexec.ncf after the ion.ncf line or to ion.ncf itself:

iscsinit connect <IP/DNS of target> iqn.1984-08.com.novell:<DN of iSCSI target object>

You can test the connection using the command "list devices" on the system console.
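Put together, the initiator part of autoexec.ncf might look roughly like the excerpt below. The target address 192.168.10.20 and the target DN iscsitarget.servers.demo are purely hypothetical placeholders; use the values from your own tree.

# Load the iSCSI initiator modules
ion.ncf
# Connect to the shared iSCSI target (address and DN are example values)
iscsinit connect 192.168.10.20 iqn.1984-08.com.novell:iscsitarget.servers.demo

After that, list devices should show the shared iSCSI device in addition to the local disks.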

4. Create the Cluster

Now you are ready to install the NCS software on the OES NetWare nodes. You can do this via the Deployment Manager from a Windows workstation. If you need instructions, please consult the NCS documentation on the Novell web site.

Review the health of the cluster using the server screens, iManager, Remote Manager and/or ConsoleOne.
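For a quick check from the NetWare system console, the standard NCS console commands can be used, for example:

cluster view
cluster status

cluster view shows the node's view of the cluster membership; cluster status lists the resources and their current states.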

5. Prepare the Linux Initiator

Now that the cluster is up and running, we are close to adding the Linux node. This server first needs to connect to the iSCSI target as well, to get access to the SBD partition.

First, the initiator needs information about the target. Edit /etc/iscsi.conf and add the following lines:

Discovery address = <IP/DNS of target>
Target name = iqn.1984-08.com.novell:<DN of target object>
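A filled-in example, again with a hypothetical target address and target DN, could look like this:

Discovery address = 192.168.10.20
Target name = iqn.1984-08.com.novell:iscsitarget.servers.demo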

Next, the software needs to know how to authenticate against the target server. The initiator does this using the DN of its NCP Server object. Edit the file /etc/initiatorname.iscsi and change it to something like this:

Initiator Name = iqn.1987-05.com.cisco:<DN of NCP server object>
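With a hypothetical NCP Server DN of oeslinux1.servers.demo, that line would read:

Initiator Name = iqn.1987-05.com.cisco:oeslinux1.servers.demo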

Start the iSCSI daemon with /etc/init.d/iscsi start and enable it in runlevels 3 and 5 using chkconfig iscsi 35.

Test the connection with iscsi-ls or sfdisk -l.
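The whole sequence on the OES Linux node, from starting the daemon to verifying the new disk, then looks something like this (output will vary):

/etc/init.d/iscsi start      # start the iSCSI initiator daemon
chkconfig iscsi 35           # enable it for runlevels 3 and 5
iscsi-ls                     # list the connected targets
sfdisk -l                    # the iSCSI LUN should appear as an additional SCSI disk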

On the X Window desktop of the OES Linux server, you will see a pop-up about new hardware. Configure it by clicking "OK". The YaST partitioner will open and you will see a new SCSI disk with one partition. This is the SBD partition on the iSCSI target. Leave everything as it is and quit YaST.

6. Add the OES Linux Server to the Cluster

Now the big moment has come. Everything is prepared, and we can join the OES Linux node to the cluster. Using YaST, add the Novell Cluster Services software.



Then configure it using YaST -> System -> Novell Cluster Services. Enter the required information for authentication, and then add the server to the existing cluster you created before.

The start script for NCS, /etc/init.d/novell-ncs, should be started and added to the runlevels by YaST. You can test SBD access using sbdutil -f or sbdutil -v.
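From the command line, a quick check could look like this; sbdutil -f should report that an SBD partition was found on the shared iSCSI device:

sbdutil -f
sbdutil -v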

Check the cluster. Everything working? Where are the bells and whistles? ;-)

Conclusion

You've done it. The mixed cluster is up and running. Now you can start creating cluster volumes, for example, as described in the NCS documentation, and migrate them between the servers.
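Migrating a resource can be done from iManager or straight from the cluster console. A hypothetical example with a volume resource named VOL1_SERVER and a node named oeslinux1:

cluster migrate VOL1_SERVER oeslinux1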

There is one point to mention: when a volume is created on a NetWare server and fails over to the Linux node for the first time, it will go into a comatose state. If you take it offline and then online again, it will fail over between the nodes without any further problems.
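If that happens, the resource can simply be cycled from the cluster console (the resource name is again hypothetical):

cluster offline VOL1_SERVER
cluster online VOL1_SERVER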


