
Teaming Cluster on a Laptop using XEN, OCFS2 and Linux Virtual Services




The purpose of this lab guide is to show, on a small scale, how a Teaming clustered system is put together. The lab uses a XEN host, two Teaming (Tomcat) servers, two index servers (Lucene), and one MySQL server. It assumes you know your way around SLES and XEN, but provides significant guidance with OCFS2 and Linux Virtual Services.

See the Advanced Installation section of the Teaming Installation Guide for more information.

Install and Configure the Dom0: SLES11SP1 with XEN

The first step is to install the XEN host. A minimum of 8G of memory is recommended.

Install and configure XEN
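If you prefer the command line to YaST, the virtualization pattern can be pulled in with zypper. This is only a sketch; the pattern name xen_server is an assumption for SLES11SP1, so verify it first with 'zypper search -t pattern xen'.

Dom0:~# zypper install -t pattern xen_server    # pattern name assumed; installs the hypervisor and tools
Dom0:~# reboot                                  # boot the XEN kernel entry in GRUB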

OCFS2 Installation and Configuration

Next, install OCFS2. These are the packages you should end up with:

Dom0:~# rpm -qa|grep ocfs2 
ocfs2-kmp-default-1.4_2.6.32.12_0.6-4.10.14 
ocfs2-tools-o2cb-1.4.3-0.11.19 
ocfs2console-1.4.3-0.11.19 
ocfs2-tools-1.4.3-0.11.19 

With SLES 11 and beyond, OCFS2 comes from the High Availability (HA) extension.

Create a partition, but do not format or mount it.
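If you are carving out the partition from the command line, parted is one option. This is a sketch only, with hypothetical start and end points; adjust the device and boundaries to your free space, and note that the rest of this guide assumes the result is /dev/sda4.

Dom0:~# parted /dev/sda mkpart primary 100GB 121GB    # hypothetical boundaries
Dom0:~# partprobe /dev/sda                            # re-read the partition table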

Dom0:~#chkconfig o2cb on
Dom0:~#chkconfig ocfs2 on

Now go ahead and format it using ocfs2:

Dom0:~# mkfs.ocfs2 -L xen_cluster /dev/sda4 
mkfs.ocfs2 1.4.2 
Cluster stack: classic o2cb 
Filesystem label=xen_cluster 
Block size=4096 (bits=12) 
Cluster size=4096 (bits=12) 
Volume size=22274056192 (5438002 clusters) (5438002 blocks) 
169 cluster groups (tail covers 18994 clusters, rest cover 32256 clusters) 
Journal size=139210752 
Initial number of node slots: 8 
Creating bitmaps: done 
Initializing superblock: done 
Writing system files: done 
Writing superblock: done 
Writing backup superblock: 3 block(s) 
Formatting Journals: done 
Formatting slot map: done 
Writing lost+found: done 
Formatting quota files: done 
mkfs.ocfs2 successful 

Run the ocfs2console tool to add the Dom0 node entry to the cluster configuration (/etc/ocfs2/cluster.conf):

In ocfs2console, select Cluster > Configure Nodes.

This will create /etc/ocfs2/cluster.conf

We’ll change it later when we add the DomUs.

Dom0:~# less /etc/ocfs2/cluster.conf
node: 
        ip_port = 7777 
        ip_address = xxx.xxx.137.23
        number = 0 
        name = Dom0 
        cluster = ocfs2 

cluster: 
        node_count = 1 
        name = ocfs2

Next, run o2cb configure:

Dom0:~ # /etc/init.d/o2cb configure 
Configuring the O2CB driver. 
This will configure the on-boot properties of the O2CB driver. 
The following questions will determine whether the driver is loaded on 
boot.  The current values will be shown in brackets ('[]').  Hitting 
<ENTER> without typing an answer will keep that current value.  Ctrl-C 
will abort. 
Load O2CB driver on boot (y/n) [y]: 
Cluster stack backing O2CB [o2cb]: 
Cluster to start on boot (Enter "none" to clear) [ocfs2]: 
Specify heartbeat dead threshold (>=7) [31]: 
Specify network idle timeout in ms (>=5000) [30000]: 
Specify network keepalive delay in ms (>=1000) [2000]: 
Specify network reconnect delay in ms (>=2000) [2000]: 
Writing O2CB configuration: OK 
Cluster ocfs2 already online 

After running /etc/init.d/o2cb configure, here are the OCFS2 kernel modules you should see loaded:

Dom0:~# lsmod | grep ocfs2
ocfs2_dlmfs            20408  1 
ocfs2_stack_o2cb        6792  0 
ocfs2_dlm             170824  2 ocfs2_dlmfs,ocfs2_stack_o2cb 
ocfs2_nodemanager     213800  11 ocfs2_dlmfs,ocfs2_stack_o2cb,ocfs2_dlm 
ocfs2_stackglue        16312  1 ocfs2_stack_o2cb 
configfs               41496  2 ocfs2_nodemanager 

If the RPMs install cleanly but you do not see the modules, verify that you have the correct versions. For further troubleshooting, see:

http://oss.oracle.com/projects/ocfs2/dist/documentation/v1.2/01-cluster_start_stop.pdf

Watch dmesg for indications of problems. In the output below, the load was successful:

Dom0~#dmesg
OCFS2 Node Manager 1.5.0 
[   21.829433] OCFS2 DLM 1.5.0 
[   21.855134] ocfs2: Registered cluster interface o2cb 
[   21.875734] OCFS2 DLMFS 1.5.0 
[   21.876109] OCFS2 User DLM kernel interface loaded 
[   23.561971] BIOS EDD facility v0.16 2004-Jun-25, 1 devices found 
[   25.226601]   alloc irq_desc for 448 on node 0 
[   25.226604]   alloc kstat_irqs on node 0 
[   25.231692]   alloc irq_desc for 449 on node 0 
[   25.231692]   alloc kstat_irqs on node 0 
[   25.231708] suspend: event channel 41 
[  399.404878] ocfs2_dlm: Nodes in domain ("866418CE7D6C44938A340EAD9CADFC32"): 0 

To summarize: we've created a partition, formatted it with OCFS2, and used it to build a cluster of one node (the Dom0). When we build out the DomUs they will be added to the cluster, and we'll revisit the OCFS2 configuration at that time.

XEN VM Installation

Install five VMs with SLES11SP1 and carve up the memory appropriately (i.e. Dom0 8G / 1G / 1G):

  1. Lucene Node 1
  2. Lucene Node 2
  3. MySQL
  4. Teaming 1
  5. Teaming 2

The two servers, Teaming 1 and Teaming 2, need access to the same disk. In the real world this might be a SAN storing terabytes of content. Teaming allows for more than two Tomcat servers, but for the simplicity of this lab, two will prove the concept.

Once the Base OS is installed on Teaming 1 (VM) and Teaming 2 (VM), go back to the Dom0 (Host) and add in the shared disk.

Shut down the VMs (Teaming 1 and Teaming 2).
On the host Dom0, edit each VM's /etc/xen/vm/<DomU> config file to add ‘phy:/dev/sda4,hdd,w!’:

disk=[ 'file:/images/DomU1/disk0,xvda,w', 'phy:/dev/sda4,hdd,w!' ]

Note the “!” in the access mode; it is very important. Do not omit it! The “w!” mode tells XEN to allow the writable device to be shared by more than one DomU.

Force the changes to be implemented; simply starting and stopping the VMs is not enough:

Dom0~#cd /etc/xen/vm
Dom0~#xm delete Teaming1
Dom0~#xm create ./Teaming1

Repeat for Teaming2.

Note that this procedure does NOT delete the disk image. It only deletes and recreates the XEN database configuration, i.e. the metadata associated with the VM.

Check it from the VM:

Teaming1:~ # hwinfo --disk
05: None 00.0: 10600 Disk                                       
  [Created at block.243] 
  UDI: /org/freedesktop/Hal/devices/computer_storage_unknown_disk 
  Unique ID: +b7l.Fxp0d3BezAE 
  Parent ID: cBJm.hKT3jlotBo5 
  SysFS ID: /class/block/xvda 
  SysFS BusID: vbd-51712 
  SysFS Device Link: /devices/xen/vbd-51712 
  Hardware Class: disk 
  Model: "Disk" 
  Driver: "vbd" 
  Driver Modules: "xenblk" 
  Device File: /dev/xvda 
  Device Number: block 202:0-202:15 
  Geometry (Logical): CHS 1566/255/63 
  Size: 25165824 sectors a 512 bytes 
  Config Status: cfg=no, avail=yes, need=no, active=unknown 
  Attached to: #2 (Storage controller) 

06: None 00.0: 10600 Disk 
  [Created at block.243] 
  UDI: /org/freedesktop/Hal/devices/computer_storage_unknown_disk_0 
  Unique ID: aQVk.Fxp0d3BezAE 
  Parent ID: caE1.hKT3jlotBo5 
  SysFS ID: /class/block/hdd 
  SysFS BusID: vbd-5696 
  SysFS Device Link: /devices/xen/vbd-5696 
  Hardware Class: disk 
  Model: "Disk" 
  Driver: "vbd" 
  Driver Modules: "xenblk" 
  Device File: /dev/hdd 
  Device Files: /dev/hdd, /dev/disk/by-uuid/3807d378-466f-439b-b40d-a0ee1e4a1b04, /dev/disk/by-label/xen_cluster 
  Device Number: block 22:64-22:127 
  Geometry (Logical): CHS 2708/255/63 
  Size: 43504020 sectors a 512 bytes 
  Config Status: cfg=new, avail=yes, need=no, active=unknown 
  Attached to: #3 (Storage controller)


On the Teaming1 and Teaming2 Servers:

Modify /etc/hosts to include the VM name as well as the hostname of the other Teaming DomU server. Even though /etc/ocfs2/cluster.conf includes an IP address, the names listed for each OCFS2 node need to resolve to the appropriate IP addresses.
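For example, both Teaming VMs would end up with entries along these lines (addresses masked to match the cluster.conf below):

Teaming1:~# grep -i teaming /etc/hosts
xxx.xxx.137.58   Teaming1
xxx.xxx.137.59   teaming2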

Install the OCFS2 packages:
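These are the same packages that were installed on the Dom0; a command-line sketch using zypper, assuming the HA extension repository is available to the VM:

Teaming1:~# zypper install ocfs2-tools ocfs2-tools-o2cb ocfs2console ocfs2-kmp-default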

Follow the prompt to reboot the Teaming1 and Teaming2 VMs.

Teaming1:~# mkdir /etc/ocfs2/

Create a new /etc/ocfs2/cluster.conf:

node: 
        ip_port = 7777 
        ip_address = xxx.xxx.137.58   
        number = 1
        name = Teaming1
        cluster = ocfs2 
node: 
        ip_port = 7777 
        ip_address = xxx.xxx.137.59    
        number = 2
        name = teaming2
        cluster = ocfs2 
cluster: 
        node_count = 2 
        name = ocfs2 

Copy the cluster.conf to the other DomU (Teaming2).
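For example (assuming root ssh access between the VMs; create the directory on Teaming2 first):

Teaming1:~# ssh teaming2 mkdir -p /etc/ocfs2
Teaming1:~# scp /etc/ocfs2/cluster.conf teaming2:/etc/ocfs2/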

On both Teaming servers:

Teaming1:~#chkconfig o2cb on
Teaming1:~#chkconfig ocfs2 on
Teaming1:~#/etc/init.d/o2cb configure

Add the shared disk to /etc/fstab:

Teaming1:#less /etc/fstab
/dev/xvda1           swap                 swap       defaults              0 0 
/dev/xvda2           /                    ext3       acl,user_xattr        1 1 
proc                 /proc                proc       defaults              0 0 
sysfs                /sys                 sysfs      noauto                0 0 
debugfs              /sys/kernel/debug    debugfs    noauto                0 0 
devpts               /dev/pts             devpts     mode=0620,gid=5       0 0 
/dev/hdd            /ocfs2               ocfs2      _netdev               0 0 

Teaming1:~#mkdir  /ocfs2
Teaming1:~#mount -t ocfs2 /dev/hdd /ocfs2
Teaming1:~#cd /ocfs2
Teaming1:~#touch success 

Now switch to Teaming2:
Teaming2:~#mkdir  /ocfs2
Teaming2:~#mount -t ocfs2 /dev/hdd /ocfs2
Teaming2:~#cd /ocfs2
Teaming2:~#ls -l 
total 4 
drwxr-xr-x  3 root root 3896 2010-05-19 14:54 ./ 
drwxr-xr-x 23 root root 4096 2010-05-19 14:34 ../ 
drwxr-xr-x  2 root root 3896 2010-05-18 16:31 lost+found/ 
-rw-r--r--  1 root root    0 2010-05-19 14:54 success 

That completes the OCFS2 configuration portion.

Teaming Install

Use the Teaming Installation Guide to build out the teaming systems. Look for Multi-Server Configurations and Clustering in the installation guide.

Be sure to use the newly created shared disk as the “Data location,” rather than the defaults, on the Teaming 1 and Teaming 2 installs.

Also, for the simplicity of this lab, use port 80 for the Tomcat service (run as root).

Note this is a significant security hole. Do not run Tomcat as “root” on your production system.

Select a virtual IP address or DNS host name for the clustered Teaming/Tomcat servers. To the browser, the clustered system needs to appear as one server.

Test the Teaming1 and Teaming2 servers by accessing them directly to make sure they are functional before proceeding.
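A quick check from any machine that can reach them (a sketch; assumes curl is installed and Tomcat is listening on port 80, with the addresses masked as before):

Dom0:~# curl -I http://xxx.xxx.137.58/    # Teaming1; expect an HTTP response header
Dom0:~# curl -I http://xxx.xxx.137.59/    # Teaming2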

Linux Virtual Services

See http://www.ultramonkey.org/papers/lvs_tutorial/html for help with LVS.

The Linux Virtual Server has three different ways of forwarding packets: Network Address Translation (NAT), IP-IP encapsulation (tunnelling), and Direct Routing.

For this guide, I chose:

Direct Routing: Packets from end users are forwarded directly to the real server. The IP packet is not modified, so the real servers must be configured to accept traffic for the virtual server’s IP address. This can be done with a dummy interface, or with packet filtering that redirects traffic addressed to the virtual server’s IP address to a local port. The real server may send replies directly back to the end user; that is, if a host-based layer 4 switch is used, it may not be in the return path.

The first thing we need to accomplish is adding a second IP address to be used as the one-stop-shop address for all of the Teaming Tomcat servers. This is the virtual address referenced above. While it’s relatively easy to add a secondary address to a non-XEN host, it’s a bit tricky with a XEN bridge.

Workaround: on a non-XEN host that already has a secondary address configured, look at /etc/sysconfig/network/ifcfg-eth0 and copy the three lines that were added for the secondary address to the end of the XEN host’s /etc/sysconfig/network/ifcfg-br0. Here is the result on the XEN host:

IPADDR_L4_addr='131.156.137.30'
NETMASK_L4_addr='255.255.252.0' 
LABEL_L4_addr='L4_addr' 

Dom0:~ #rcnetwork restart
Shutting down network interfaces: 
br0  done 
    eth0      device: Intel Corporation 82566DM Gigabit Network Connection (rev 02) 
              No configuration found for eth0 
              Nevertheless the interface will be shut down.   done 
    vif13.0   
              No configuration found for vif13.0 
              Nevertheless the interface will be shut down.                                                                                                              done 
    vif17.0   
              No configuration found for vif17.0 
              Nevertheless the interface will be shut down.                          done 
Shutting down service network  .  .  .  .  .  .  .  .  .                                       done 
Setting up network interfaces: 
    eth0      device: Intel Corporation 82566DM Gigabit Network Connection (rev 02) 
              No configuration found for eth0                                    unused 
    vif13.0   
              No configuration found for vif13.0                                                        unused 
    vif17.0   
              No configuration found for vif17.0                                                         unused 
    br0       
    br0       Ports: [eth0] 
    br0       forwarddelay (see man ifcfg-bridge)  not ready. Proceeding in background. 
    br0       IP address: xxx.xxx.xxx.xxx/22   
   br0:L4_addr IP address: xxx.xxx.xxx.xxx/22  

Dom0:/etc/xen/vm # ip addr 

 br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 00:1c:c0:ed:43:09 brd ff:ff:ff:ff:ff:ff 
    inet xxx.xxx.xxx.xxx/22 brd xxx.xxx.xxx.xxx scope global br0 
    inet xxx.xxx.xxx.xxx/22 brd xxx.xxx.xxx.xxx scope global secondary br0:L4_addr
    inet6 fe80::21c:c0ff:feed:4309/64 scope link 
       valid_lft forever preferred_lft forever 

Add the L4 Switch: Linux Virtual Services

Use YaST2 software management to add ipvsadm.
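Or, equivalently, from the command line:

Dom0:~# zypper install ipvsadm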

Dom0:~#ipvsadm
FATAL: Module ip_vs not found. 
Can't initialize ipvs: Protocol not available 
Are you sure that IP Virtual Server is built in the kernel or as module?

Doh … missing one. Using YaST, install cluster-network-kmp-default.

Reboot the Dom0:

Dom0:~#ipvsadm
IP Virtual Server version 1.2.1 (size=4096) 
Prot LocalAddress:Port Scheduler Flags 
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

Configure the virtual L4 switch (LVS):

Dom0:~# ipvsadm -A -t 131.156.137.30:80 -p -s rr    # listen on port 80, make connections persistent (sticky), use round-robin scheduling
Dom0:~# ipvsadm
IP Virtual Server version 1.2.1 (size=4096) 
Prot LocalAddress:Port Scheduler Flags 
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn 
TCP  131.156.137.30:http rr persistent 360

Configure the DomU real servers that will back the LVS L4 switch:

Dom0:~# ipvsadm -a -t 131.156.137.30:80 -r 131.156.137.31:80 -g

Switches explained:

  -a, --add-server                     Add a real server to a virtual service.
  -t, --tcp-service service-address    Use TCP service at the given address.
  -r, --real-server server-address     The real server's address.
  -g, --gatewaying                     Use gatewaying (direct routing).

Dom0:~# ipvsadm -a -t 131.156.137.30:80 -r 131.156.137.32:80 -g

Dom0~#ipvsadm
IP Virtual Server version 1.2.1 (size=4096) 
Prot LocalAddress:Port Scheduler Flags 
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn 
TCP  131.156.137.30:http rr persistent 360 
  -> 131.156.137.31:http          Route   1      0          0         
  -> 131.156.137.32:http          Route   1      0          0 

This configuration will be gone after a reboot, so it should be saved and restored from a startup script in /etc/init.d/.

Save the configuration:          ipvsadm -S > /etc/ipvsadm.save
Restore it in the init script:   ipvsadm -R < /etc/ipvsadm.save
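One minimal way to wire the restore into the boot sequence (a sketch, not the only option) is to append it to /etc/init.d/boot.local, which SLES runs at the end of booting:

Dom0:~# cat >> /etc/init.d/boot.local << 'EOF'
# Restore the saved LVS (ipvsadm) configuration, if present
if [ -f /etc/ipvsadm.save ]; then
    /sbin/ipvsadm -R < /etc/ipvsadm.save
fi
EOF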

On the Host Dom0: Enable IP forwarding. This can be done by adding the following

net.ipv4.ip_forward = 1

to /etc/sysctl.conf and then running “sysctl -p”
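To confirm the setting took effect:

Dom0:~# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1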

Now, on the Teaming1 and Teaming2 (DomU) servers:

Bring up 131.156.137.30 on the loopback interface. This is best done as part of the networking configuration of your system (a sketch for that follows the ifconfig output below), but it can also be done manually. On Linux, the manual command is:

ifconfig lo:0 131.156.137.30 netmask 255.255.255.255    # note the 255.255.255.255 netmask; this is not a mistake

DomU1:~ # ifconfig
eth0      Link encap:Ethernet  HWaddr 00:16:3E:49:3E:79  
          inet addr:xxx.xxx.xxx.xxx  Bcast:xxx.xxx.xxx.255  Mask:255.255.252.0
          inet6 addr: fe80::216:3eff:fe69:3e79/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:135213 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10121 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:19157728 (18.2 Mb)  TX bytes:926086 (904.3 Kb)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:100 (100.0 b)  TX bytes:100 (100.0 b)

lo:0      Link encap:Local Loopback  
          inet addr:131.156.137.30  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
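
To make the loopback alias survive a reboot, the same secondary-address pattern used for br0 above can be appended to the loopback configuration. This is a sketch: the _vip suffix is an arbitrary name, and LABEL '0' recreates the lo:0 alias shown above.

Teaming1:~# cat >> /etc/sysconfig/network/ifcfg-lo << 'EOF'
# Virtual service address for LVS direct routing (suffix name arbitrary)
IPADDR_vip='131.156.137.30'
NETMASK_vip='255.255.255.255'
LABEL_vip='0'
EOF
Teaming1:~# rcnetwork restart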

Repeat on the other Teaming server.

Now let's fire up this firecracker.

Start up the servers:

Lucene Node 1
Lucene Node 2
MySQL
Teaming 1
Teaming 2

Access the virtual address that you defined above with a browser.

You've succeeded if you are able to connect and log in.

You can monitor the load balancer connections:

Dom0:~# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  131.156.137.30:http rr persistent 360
  -> 131.156.137.31:http          Route   1      0          1         
  -> 131.156.137.32:http          Route   1      0          0



