
AppNote: How to Implement a 2 Node Cluster without External Shared Storage

Novell Cool Solutions: AppNote
By Eugene Phua


Posted: 11 Aug 2004
 

Eugene Phua
Primary Support Engineer
EPhua@novell.com

introduction

Often there is a need to provide high availability for application or web servers that provide critical services, where the data for those services is either static or small. Some of the applications or services that fall into this category are:

  1. Web Server that provides corporate information about the company.
    The web content tends to be static and may not be very large. However, even if the corporate information does not change very often and therefore does not require much maintenance, it reflects badly on the company if the corporate web server goes down. You want to provide redundancy for the corporate web server.


  2. Application Server that provides services to Internet users, but because of security, the data is stored on a separate server located in another location, commonly, inside the firewall.
    One example of such an application is NetStorage. The NetStorage server is often located in the DMZ. The data is not stored on the NetStorage server; Internet users browse to the NetStorage server, and NetStorage accesses the users' files, which are actually stored on another server inside the firewall.


  3. Proxy server.
    For companies that implement proxy servers, if the proxy server goes down, users will not be able to access the Internet. The quick fix is to ask users to point to another proxy server's IP address, or to change the internal DNS entry to point to the alternate proxy server. Nevertheless, there will still be downtime.

So the question most companies ask is how to provide high availability for the above services without incurring high costs. Let's examine the current available solutions.

current solutions

The following list of high availability solutions includes their pros and cons:

  1. DNS round robin
    All critical services and applications are hosted on multiple servers. Users accessing these services use the fully qualified domain name, which is resolved by DNS. Because a single domain name is tied to multiple IP addresses, the DNS server hands out different IP addresses in a round-robin manner in response to name resolution requests.
    Pro
    1. This is probably the cheapest and simplest solution that is being implemented today.
    2. This method provides load balancing.
    Con
    1. This method does not provide redundancy in the strictest sense. If one server goes down, the administrator can remove that server's IP address from DNS so that it is no longer handed out. In a small, localized environment, the problem is quickly resolved. However, if the application serves Internet users, the DNS changes can take a long time to propagate.

  2. Layer 4 switch
    Users accessing the applications actually resolve the service name to the IP address of the Layer 4 switch, which redirects their requests to the application servers.
    Pro
    1. Besides redundancy, this method provides load balancing.
    Con
    1. This is a costly solution.

  3. Cluster Services
    Cluster services provide redundancy by placing the data on shared storage. If one node goes down, another node automatically takes over the application services and data, so that the failover is completely transparent to the user.
    Pro
    1. This is the most common method for providing high availability, because it is also the most versatile. It provides redundancy for almost every service, including file and print, web, and application services.
    Con
    1. This is a costly solution.
    2. Because of cost, this method is generally not suitable if it is used solely for the types of services mentioned in the introduction. Most small companies do not have the budget to set up a cluster just to host their corporate web pages.
    3. Generally, this does not provide load balancing.

the solution

The solution is to implement a 2 node cluster with Novell Cluster Services on NetWare 6.5. This solution is ideal for services whose data is either static or small. A NetWare 6.5 cluster can be implemented without any shared storage, and it retains all the advantages of a cluster. Since the applications do not require shared data storage, this solution requires only two NetWare 6.5 servers, which should be affordable for most companies.

Implementing Cluster Services without Shared Storage

  1. Set up an SBD partition on both cluster nodes

    On both cluster nodes, do the following:

    • LOAD NSSMU > Partitions
    • Press 'Insert' to create a new partition
    • Select the Free Disk space and press 'Enter'
    • Select iSCSI
    • Define the partition size (which will be the size of the SBD partition) and create it. You can choose 100 MB as the size of the SBD partition
    • Optional: You can label the partition as 'SBD Partition'

  2. On the NW65SERVER server, type TON.NCF

    During installation, TON.NCF is already added to AUTOEXEC.NCF and loaded by default. In this case, you can type TOFF.NCF and then TON.NCF to reload the iSCSI target NLMs.
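
    For example, a minimal reload sequence typed at the server console would be (a sketch; the actual console output will vary):

        TOFF.NCF
        TON.NCF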


  3. Open ConsoleOne and browse to the location of the NW65SERVER server; you will see an iSCSI Target object that has been created. This object is automatically created when an iSCSI partition is created on a server and 'TON.NCF' is loaded on that server. The object will look something like this:



    So if you have run TON.NCF on both servers, you will see an iSCSI Target object for each Cluster Node server.


  4. Create an iSCSI Initiator Object for each Cluster Node, that is, 2 iSCSI Initiator Objects. There are two ways you can go about doing this:

    • You can create the iSCSI Initiator Object with the same name as the Cluster Node Server but in a different context from the Cluster Node Server.

      This is because you cannot create an iSCSI Initiator Object with the same name as the server in the same context, which means the iSCSI Initiator Object must be created in a different context.


    • You can create the iSCSI Initiator Objects in the same context as the Cluster Node Server, but with a different name from the Cluster Node Server.

      In this example, I have created the iSCSI Initiator Objects (iSCSISERVER1 & iSCSISERVER2) in their respective server containers.

  5. You will get a prompt; click OK and key in the object name.




  6. Right-click the iSCSI Target object created in Step 3 and choose 'Trustees of this object'. Select both Initiator Objects created in Step 4 as Trustees and click OK to accept the default Trustee rights.

    In this example, the iSCSI Initiator Objects created are iSCSISERVER1.server1.novell & iSCSISERVER2.server2.novell. Remember to do this for both iSCSI Target Objects created in Step 3.




  7. On both Cluster Node Servers, type 'ION.NCF'


  8. On both Cluster Node Servers, type 'ISCSI LIST'



    You will see a screen similar to the one above.


  9. You need to change the initiator server's IQN to correspond to the Initiator Object that you created in Step 4. To do this, type the following on the server console:

    iscsi set InitiatorName=iqn.1984-08.com.novell:.[iSCSI Initiator Object Name].[iSCSI Initiator Object Context].[iSCSI Initiator Object Tree].

    Therefore, if you created the iSCSI Initiator Object as .iscsiserver1.server1.novell.cluster-tree, the command to type is:

    iscsi set InitiatorName=iqn.1984-08.com.novell:.iscsiserver1.server1.novell.cluster-tree.

    NOTE: Add a trailing '.' at the end of '.iscsiserver1.server1.novell.cluster-tree.', or you will not be able to connect.

    Remember that in Step 4 you had a choice: create the iSCSI Initiator Object with the same name as the server but in a different context, or in the same context as the server but with a different name. Whichever you chose, make sure the InitiatorName corresponds to that choice.
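
    For example, following the same pattern on the second node, and assuming its Initiator Object from Step 4 is .iscsiserver2.server2.novell in the same cluster-tree tree, the command on the second node would be:

    iscsi set InitiatorName=iqn.1984-08.com.novell:.iscsiserver2.server2.novell.cluster-tree.

    (Again, note the trailing '.' at the end.)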


  10. On both Cluster Node Servers, type 'ISCSI LIST' to verify that iSCSI initiator has been configured correctly.


  11. On the console screen of the 1st Cluster Node Server, type the following:

    • iscsinit connect [IP address of the 1st Cluster Node Server]
    • iscsinit connect [IP address of the 2nd Cluster Node Server]

  12. On the console screen of the 2nd Cluster Node Server, type the following:

    • iscsinit connect [IP address of the 1st Cluster Node Server]
    • iscsinit connect [IP address of the 2nd Cluster Node Server]
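
    As an illustration, suppose the two cluster nodes use the hypothetical addresses 192.168.1.1 and 192.168.1.2. The same two commands would then be typed on both server consoles:

        iscsinit connect 192.168.1.1
        iscsinit connect 192.168.1.2

    After this, each node is connected to both its own local iSCSI partition and the remote partition on the other node.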

  13. Install Novell Cluster Services 1.7 on both Cluster Node Servers using Deployment Manager

    Details on installing Novell Cluster Services can be found in the NetWare 6.5 Novell Cluster Services 1.7 Administration Guide.


  14. Create a New Cluster




  15. Type Cluster Object, Tree Name and Context




  16. Select the servers to be part of the Cluster




  17. Define Cluster IP Address




  18. Choose 'Yes' for Cluster Shared Media and 'Yes' for Mirroring the Cluster Partition. If Steps 11 and 12 were done to connect to the local and remote iSCSI partitions, you will see a second device on which the mirror partition can be created.




  19. Choose 'Yes' to start the cluster automatically.



    Once the above steps are done, the cluster should be set up.


  20. In the AUTOEXEC.NCF file, make the following changes:

    ion.ncf
    ton.ncf
    Delay 5
    iscsinit connect [IP address of its own CLUSTER NODE SERVER]
    iscsinit connect [IP address of the other CLUSTER NODE SERVER]
    Delay 5
    LDNCS.NCF

    NOTE: Make sure that the cluster service is loaded only after the server has connected to the iSCSI partitions.
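
    As a concrete sketch, assuming the first node's own address is 192.168.1.1 and the second node's address is 192.168.1.2 (hypothetical addresses), the relevant portion of the first node's AUTOEXEC.NCF would look like this:

        # load the iSCSI initiator and target NLMs
        ion.ncf
        ton.ncf
        Delay 5
        # connect to the local iSCSI partition, then to the other node's partition
        iscsinit connect 192.168.1.1
        iscsinit connect 192.168.1.2
        Delay 5
        # load Novell Cluster Services only after both iSCSI connections are established
        LDNCS.NCF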


  21. When one cluster node server goes down and then comes back online, type the following command manually on the other cluster node:

        iscsinit connect [IP address of the CLUSTER NODE SERVER that went down]

    This is to make sure that the SBD partition is re-mirrored.




  22. NOTE:
    The biggest problem with this configuration is that when Node B goes down and then restarts, Node A will have lost its iSCSI connection to Node B and will not re-establish it automatically. When that happens, the SBD partition cannot be re-mirrored, and if Node A then goes down, the entire cluster will abend because the only active SBD partition is on Node A. To work around this problem, you have to reconnect the iSCSI connection manually from Node A to Node B so that the SBD partition is re-mirrored.
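
    A minimal sketch of this manual workaround, assuming Node B uses the hypothetical address 192.168.1.2, is to run the reconnect command on Node A's console once Node B is back up:

        iscsinit connect 192.168.1.2

    You could also keep this command in a small NCF file on Node A (for example, a hypothetical RECONNECT.NCF) so it can be run in a single step after Node B recovers.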

  23. Once the above steps are completed, you will have a working cluster without shared storage. The SBD Partition is stored locally on the iSCSI partition of one cluster node and mirrored to the iSCSI partition on the other cluster node.

conclusion

After a successful implementation, you will notice that this setup does not allow you to create a cluster resource with a cluster volume resource. As mentioned in the introduction, this setup provides high availability for application or web servers whose data is either static or small. The services that may benefit from this setup are Web, Application, or Proxy servers.

