1.8 Load-Balancing and Clustering Novell Teaming

NOTE: In this release, Novell Teaming does not support session replication across cluster nodes.

IMPORTANT: If you have more than one clustered installation of Novell Teaming in your network, each cluster group needs a unique multicast group IP address to prevent the clusters from interfering with each other.

To set up a scalable, clustered Novell Teaming configuration, use the following steps as a guideline:

  1. Set up shared file storage that is accessible to all nodes.

    Novell Teaming’s Simple File Repository is safe for use in a clustered environment with shared storage that is accessible to all nodes. Setting up shared storage is a platform-specific task.
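    For example, on Linux you might export a directory from a file server over NFS and mount it at the same path on every cluster node. The server name and paths below are placeholders; substitute the values for your environment:

      # /etc/fstab entry on each Novell Teaming node (example server and paths)
      # nfs-server:/export/teaming is the directory exported by your file server
      nfs-server:/export/teaming  /opt/novell/teaming/filerepository  nfs  rw,hard,intr  0 0

      # Mount the share without rebooting
      mount /opt/novell/teaming/filerepository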

  2. Install and configure the Lucene Index Server.

    See Installing a Standalone Lucene Index Server for how to place the Lucene Index Server on a dedicated server system. All cluster nodes share this index server.

  3. Make sure that the system time is the same on all nodes in the cluster (using a synchronized time service, such as NTP, is strongly recommended).
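    For example, on Linux nodes you might point every cluster member at the same NTP time sources (the server names below are placeholders) and run the NTP daemon on each node:

      # /etc/ntp.conf on each cluster node (example time sources; use your own)
      server 0.pool.ntp.org
      server 1.pool.ntp.org

      # Start the NTP service (the exact command depends on your platform), for example:
      /etc/init.d/ntp restart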

  4. Install the Novell Teaming/portal bundle kit on each cluster node.

    IMPORTANT: Because all the nodes share the same database server, it is important not to execute the database-initialization SQL scripts more than once (that is, use the installer’s Reconfigure option rather than the New installation option on all but the first cluster node).

    Each node in the cluster can use the same installer.xml file (generated by the installer during the initial installation). This ensures that all cluster nodes are configured uniformly.
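    For example, after completing the installation on the first node, you might copy the generated installer.xml to each additional node before running the installer there with the Reconfigure option (the installation directory and host name below are placeholders):

      # Copy installer.xml from the first node to a second node
      scp /opt/novell/teaming/installer.xml node2:/opt/novell/teaming/installer.xml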

    The following settings must be uniform across the cluster:

    • Configure the database connection settings on each node to use the same database.

    • Set the file system settings on all nodes to point to the same shared file storage.

    • Set the network settings (name, port, and so on) to point to the load balancer system rather than to the individual node.

    • In the Lucene configuration window of the installer, select server in the Lucene configuration type drop-down list and set the Host selection to the hostname of the machine on which you installed the Lucene Index Server. See Installing a Standalone Lucene Index Server.

  5. Set the portal up to work in a clustered environment.

    If you need more advanced configuration, use cache.cluster.properties instead of cache.cluster.multicast.ip in the steps below. For more information, including additional high-availability guidance, refer to Liferay's documentation at http://wiki.liferay.com.

    1. Edit the /webapps/ROOT/WEB-INF/classes/portal-ext.properties file by uncommenting the following two lines:

      cache.event.listeners=com.opensymphony.oscache.plugins.clustersupport.JavaGroupsBroadcastingListener
      cache.cluster.multicast.ip=231.12.21.100
      
    2. Edit the /webapps/ROOT/WEB-INF/classes/cache-multi-vm-ext.properties file by uncommenting the following two lines:

      cache.event.listeners=com.opensymphony.oscache.plugins.clustersupport.JavaGroupsBroadcastingListener
      cache.cluster.multicast.ip=231.12.21.101
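
    If you run a second, independent Novell Teaming cluster on the same network, give it different multicast group IP addresses in these two files so that the clusters do not interfere with each other. For example (the addresses are illustrative):

      # Second cluster: portal-ext.properties
      cache.cluster.multicast.ip=231.12.21.102

      # Second cluster: cache-multi-vm-ext.properties
      cache.cluster.multicast.ip=231.12.21.103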
      
  6. Configure the load balancer.

    There are a variety of load-balancing solutions that work with J2EE deployments. The following example configuration uses the balancer module (mod_proxy_balancer) built into Apache 2.2 (version 2.2.4 in this example), and is based on the widely used sticky session technique. Novell Teaming does not support session sharing or replication among Tomcat instances.

    1. Edit Tomcat’s server.xml.

      Add jvmRoute="jvm<n>" to the <Engine name="Catalina" ...> element, where <n> is an integer unique to each Tomcat instance. For example, if you have two Tomcat instances in the cluster, the Catalina engine element should look like the following in each server.xml, respectively (jvm<n> is the name of the worker as declared in the load balancer):

      <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
      <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm2">
      
    2. Edit Apache’s httpd.conf file (in the <apache installation>/conf directory).

      1. Uncomment the following three lines:

        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
        LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
        
      2. Append the following section to the end of the file:

        <Location /balancer-manager>
            SetHandler balancer-manager
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>
        <Proxy balancer://aspenCluster>
            BalancerMember ajp://<tomcat host name 1>:8009 route=jvm1
            BalancerMember ajp://<tomcat host name 2>:8009 route=jvm2
        </Proxy>
        <Location />
            ProxyPass balancer://aspenCluster/ stickysession=JSESSIONID
        </Location>
        

        Substitute the real Tomcat host names for <tomcat host name 1> and <tomcat host name 2>. If you have more than two Tomcat instances in the cluster, add a BalancerMember line for each, as shown below.
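
        For example, a third Tomcat instance whose server.xml specifies jvmRoute="jvm3" would be added to the balancer as follows (the host name is a placeholder):

        BalancerMember ajp://<tomcat host name 3>:8009 route=jvm3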

  7. Configure the Hibernate second-level cache to use a cluster-safe distributed cache. To do this, rename the ehcache-hibernate.xml file in the webapps/ssf/WEB-INF/classes/config directory to something else (for example, ehcache-hibernate-non-clustered.xml). Then rename the ehcache-hibernate-clustered.xml file in the same directory to ehcache-hibernate.xml.
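    For example, on Linux (substitute your own Tomcat installation directory for <tomcat home>):

      cd <tomcat home>/webapps/ssf/WEB-INF/classes/config
      mv ehcache-hibernate.xml ehcache-hibernate-non-clustered.xml
      mv ehcache-hibernate-clustered.xml ehcache-hibernate.xml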

  8. Start the Lucene Index Server, and then start the application cluster nodes.
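
    For example (the index server startup command depends on how you installed the standalone Lucene Index Server, and <tomcat home> is a placeholder for each node's Tomcat installation directory):

      # 1. On the Lucene Index Server system, start the index server using the
      #    startup script provided by your standalone index server installation.

      # 2. On each application cluster node, start Tomcat (and with it Novell Teaming):
      <tomcat home>/bin/startup.sh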