How to backup an OES2 cluster (NSS volumes) using Commvault
We’ve been fighting to get Commvault to run in a clustered environment. The Commvault documentation is sketchy at best and different manuals give different procedures. This is the full installation and configuration procedure.
You’ll either have to mount or transfer the Commvault for Unix installation disc to all the physical nodes in the cluster. We don’t have a physical disc, so in this procedure I’ll assume the full disc contents are SCP’ed to /root/Simpana. After you’ve transferred the data, run chmod -R 750 /root/Simpana so that all the installation scripts are executable. The reason the directory is called Simpana and not simpana is that some of the scripts require the capital letter.
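The permissions step looks like this; the snippet below uses a scratch copy under /tmp so you can see what chmod -R does — on the actual node the target is /root/Simpana:

```shell
# SIMPANA_DIR is where the disc contents were copied (on the nodes: /root/Simpana).
# Using a scratch copy here so the commands can run anywhere:
SIMPANA_DIR=/tmp/Simpana
mkdir -p "$SIMPANA_DIR"
touch "$SIMPANA_DIR/cvpkgadd"
chmod -R 750 "$SIMPANA_DIR"          # every directory and script becomes rwxr-x---
stat -c '%a' "$SIMPANA_DIR/cvpkgadd" # prints 750
```

The -R flag recurses into the tree, so a single command covers every installation script on the disc image.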
I’m assuming that all NSS volumes are mounted under /media/nss and that you have a backup user and a backup group in eDirectory with the appropriate rights to the volumes that need to be backed up.
Another prerequisite is that LUM is enabled and configured on the cluster nodes. By default we allow all available PAM services to be enabled. If you want to restrict this, test which services actually need to be enabled.
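A quick way to see which PAM services LUM has hooked into on a node is to look for its pam_nam module under /etc/pam.d (this assumes the standard OES LUM setup; on a machine without LUM the list will simply be empty):

```shell
# List the PAM service files that reference LUM's pam_nam module.
# On a machine without LUM this prints "none found".
grep -l pam_nam /etc/pam.d/* 2>/dev/null || echo "none found"
```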
cd /root/Simpana to enter the directory and execute ./cvpkgadd to start the installation.
Press Enter, and then Enter again on the next screen. Press q and then y.
Choose option 1: Install on a physical machine
Enter the physical machine host name, or just press Enter to accept the default setting
Enter the physical machine client name (display name in Commserve)
Select the agent to be installed. We don’t back up the server’s own file system (we just make a copy of the entire VM), so in our case we only install the Novell OES Linux FS IDA.
Follow through with the installation procedure. The base services and the IDA will be installed.
After it’s done installing you’ll return to this menu. Enter the appropriate number to exit. Simpana will install the postpack and return you to the main menu, where you’ll have the option to install Simpana on a virtual machine:
Enter 2 and press Enter. The installation procedure for the virtual machine is exactly the same as for the physical machine, with one exception: the job results directory needs to be different. Just place it on the cluster volume (/media/nss/SCRATCH/Galaxy/JobResults). Repeat the installation for all the different virtual servers. When we were done, our environment looked like this:
After you’re done on the first physical cluster node, repeat the entire procedure for the next physical cluster node and keep doing this until all the nodes have all the virtual servers installed.
All done with the Commvault install!
Open the CommCell GUI and log in. Expand the view for the cluster resource, right-click OES File System and click Properties. In the dialog box that follows you’ll see that the user account being used to authenticate looks pretty weird. Select Change Account and enter the credentials for the backup user.
IMPORTANT NOTE: If you install another physical machine into the cluster, the authentication information is reset to the garbled string Commvault wants to use, and you’ll have to reconfigure it!
Next, click on the defaultBackupSet under OES File System and double click default in the top right pane. Click add paths and enter the path to your clustered NSS volume (/media/nss/SCRATCH in our case):
Click OK and ignore the message about autodetection.
Don’t forget to configure your backup schedules and policies!
iManager: LUM-enable the backup user and configure the resource scripts!
Now open your browser of choice and open iManager. Go to Linux User Management and select Enable Users for Linux. Browse to the designated backup user account and press next. Now select the backup group (or select an existing LUM-enabled group) and press next. Select all the appropriate servers and select a Unix Config object.
Test your configuration by firing up putty and connecting to one of the nodes:
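Assuming the backup account is called backupuser (substitute your own eDirectory account name), a quick check from the shell confirms that LUM resolves it to a POSIX identity:

```shell
# "backupuser" is an example name - use your own LUM-enabled eDirectory account.
# If LUM is working, id resolves the eDirectory user to a local UID/GID:
if id backupuser >/dev/null 2>&1; then
    echo "LUM OK: $(id backupuser)"
else
    echo "LUM lookup failed - check namconfig and the enabled PAM services"
fi
```

If the lookup fails, revisit the Linux User Management steps above before moving on.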
All done here! Only thing left is to configure auto failover. Commvault uses a small application that notifies the backup server that the virtual machine is being moved to a different physical machine.
Go to the Clusters option and select Cluster Manager. Point to your cluster and click on the resource you want to edit. Click the Scripts tab and enter the following line:
exit_on_error /opt/simpana/Base/cvclusternotify -inst Instance001 -cn ehl-lxc02_scratch_server -start
Replace ehl-lxc02_scratch_server with your virtual server name as displayed in the CommCell GUI. Click Apply and go to the Unload Script tab. We put the notification directly under the . /opt/novell/ncs/lib/ncsfuncs line:
ignore_error /opt/simpana/Base/cvclusternotify -inst Instance001 -cn ehl-lxc02_scratch_server -shutdown
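Putting both pieces together, the Commvault-relevant parts of the resource scripts end up looking like this (ehl-lxc02_scratch_server is our virtual server name; the rest of each script is the resource’s normal mount/unmount logic, left out here):

```shell
# --- Load script (excerpt) ---
. /opt/novell/ncs/lib/ncsfuncs
exit_on_error /opt/simpana/Base/cvclusternotify -inst Instance001 -cn ehl-lxc02_scratch_server -start
# ...the resource's existing mount/IP-bind commands follow here...

# --- Unload script (excerpt) ---
. /opt/novell/ncs/lib/ncsfuncs
ignore_error /opt/simpana/Base/cvclusternotify -inst Instance001 -cn ehl-lxc02_scratch_server -shutdown
# ...the resource's existing unbind/umount commands follow here...
```

Note the asymmetry: the load script uses exit_on_error so a failed notification stops the resource from coming online incorrectly, while the unload script uses ignore_error so a failed notification can never block a failover.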
Click Apply and return to the Cluster Manager. Changes to cluster load/unload scripts only take effect after you’ve taken the resource offline. Take the resource offline and bring it online on the preferred node.
You can see the node change in CommCell after you’ve pressed F5. Right-click the virtual server and select Properties. The Active Physical Node should now be displayed correctly.