
A couple of years ago, Novell made a big investment in GroupWise 6.5 by porting it to the Linux platform. With GroupWise 7 SP2, support is being added for Xen virtualization and Heartbeat2 clustering technologies. These changes open up a world of new technology to GroupWise system administrators, allowing GroupWise messaging systems to be deployed in increasingly robust and flexible configurations.

GroupWise administrators have to deal with storage management issues caused by growing mailbox sizes, along with pressure to consolidate the increasing numbers of servers crowding their data centers. Linux technologies available in SUSE Linux Enterprise Server help GroupWise administrators address these challenges.


One of the great technologies available to GroupWise administrators who deploy GroupWise on SUSE Linux Enterprise Server 10 or Open Enterprise Server 2 is Xen virtualization. Virtualization, the ability to host multiple virtual servers running different operating systems on the same hardware, is a term becoming increasingly familiar to IT administrators. Full virtualization supports essentially any unmodified operating system out of the box.

Xen paravirtualization requires that support for Xen virtual device drivers be added to the operating system. Paravirtualization is a high-performance approach to virtualization: unlike full virtualization, it requires the guest operating system to be modified, which creates an optimal virtual environment and minimizes the overhead introduced by virtualization technology.

Currently, SUSE Linux Enterprise Server 10 SP1 supports the following operating systems in paravirtualized mode:

  • SUSE Linux Enterprise Server 10
  • SUSE Linux Enterprise Server 10 SP1
  • Open Enterprise Server 2 Linux
  • Open Enterprise Server 2 NetWare

When coupled with clustering software such as Heartbeat2, Xen virtual machines can be configured as cluster resources that fail over to other physical servers, providing a high availability solution as well. Additionally, management tools such as Novell ZENworks Orchestration Server allow virtual servers to be managed by policy.

For example, ZENworks Orchestration Server lets you migrate virtual servers in real time from one physical server to another when certain conditions are met, such as CPU utilization exceeding a defined limit.

Xen Memory Management

One of the great things about Xen is the ability to control how many resources are used by domain 0 and how many are available for virtual servers. When Xen starts, it automatically allocates all available memory to the domain 0 server. Then, as virtual servers need memory, Xen frees memory from domain 0 and allocates it to other Xen domains. Using the Virtual Machine Manager you can set how much memory to allocate to a virtual server when it starts and reduce the amount of memory allocated to a running virtual server.
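As a sketch of how this ballooning works in practice, the xm tool can adjust a running domain's memory from domain 0. The domain name vm-gw1 below is illustrative:

```shell
# Run as root in domain 0; the domain name vm-gw1 is illustrative

# Show running domains and their current memory allocations
xm list

# Shrink the running domain to 384 MB by ballooning
xm mem-set vm-gw1 384

# Set the ceiling the domain may balloon back up to
xm mem-max vm-gw1 512
```

These commands require a running Xen host, so treat them as a template rather than something to paste blindly.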

To specify how much memory is available to Xen domain 0, modify the kernel boot parameters in /boot/grub/menu.lst. Add the parameter dom0_mem=512m to the kernel command for booting the Xen kernel, as follows, to allocate 512 megabytes of memory to domain 0. This forces domain 0 to use no more than 512 megabytes of memory.

kernel /boot/xen.gz dom0_mem=512m
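For context, a complete Xen boot entry in /boot/grub/menu.lst might look like the following sketch; the kernel file names, root partition and device paths are illustrative and vary by installation:

```shell
title Xen -- SUSE Linux Enterprise Server 10 SP1
    root (hd0,0)
    kernel /boot/xen.gz dom0_mem=512m
    module /boot/vmlinuz-xen root=/dev/sda2 showopts
    module /boot/initrd-xen
```

Only the dom0_mem= parameter on the kernel line comes from this article; check your existing Xen entry for the correct module lines before editing.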

You can also specify the minimum amount of memory that domain 0 can have, or configure domain 0 so it never releases memory to running virtual servers. Modify /etc/xen/xend-config.sxp as follows.

To set the minimum memory level for domain 0 at 128 megabytes of memory:

(dom-min-mem 128)

To keep domain 0 at a fixed amount of memory (must be used with the dom0_mem= boot parameter):

(dom-min-mem 0)

Deploying GroupWise in virtual servers using Xen allows multiple GroupWise post offices to be hosted on the same physical server while controlling resources provided to each post office. This decreases the number of servers needed to host a GroupWise system and thus effectively speeds consolidation of GroupWise post offices.

As server performance has increased over time, the relative cost of hardware has decreased, yet GroupWise administrators generally continue to run one GroupWise post office per server. Combining GroupWise post offices requires administrators to move individual user mailboxes from one post office to another, a potentially time-consuming task for large numbers of users.

With Xen virtualization technology, you can migrate GroupWise post offices easily to multiple virtual servers running on the same physical server using the GroupWise Server Migration Utility, eliminating the need to move individual user accounts to a common post office.

In this article, I will discuss the background information necessary to prepare Xen virtual servers to host GroupWise. The process includes the following:

  1. providing a storage solution for the GroupWise data
  2. installing SUSE Linux Enterprise Server 10 SP1 or Open Enterprise Server 2 for Linux
  3. configuring the server and booting it with the Xen kernel
  4. creating, configuring and managing a Xen virtual server
  5. migrating to or installing GroupWise in the Xen virtual server

In a future article, I will address how to use Heartbeat2 to create a high availability GroupWise Xen solution using clustering.

> Data Storage Considerations
When considering how to implement GroupWise in a virtual server environment, give careful thought to data storage. Xen virtual servers can host storage in a variety of ways. They can store data internally by creating file partitions inside the virtual server; however, any data stored internally will be lost if the virtual server image file is corrupted or damaged. Storing data inside virtual servers is not recommended!

Virtual servers can also access storage in the same way as any other Linux server: by mounting file systems from direct attached storage, from Fibre Channel SANs or over protocols such as iSCSI. iSCSI is a relatively inexpensive SAN solution that performs quite well when properly configured and costs much less than Fibre Channel. If you use local direct attached storage, you won't be able to migrate Xen virtual servers to other physical servers, but you can still use them to host multiple GroupWise post offices on the same hardware, without failover capabilities.

 

When using shared storage solutions such as iSCSI or Fibre Channel SANs, you can obtain the maximum benefits of virtualization technology. Configuration of the virtual server can be complex or simple depending on the solution you choose. If using iSCSI to connect to an iSCSI target, configuration is straightforward. If connecting to local or SAN disk storage, the Xen virtual server configuration will need to be modified to allow the Xen virtual server to access the disk device.

Xen virtual servers are typically created as sparse files of 4 GB. Back these files up periodically: if a file is corrupted for some reason, the virtual server will no longer boot. Allowing disk space for copies of the virtual server image files is an important part of disk storage planning. Provide disk space on a separate partition to store a copy of all your virtual server image files.

> Installing and Configuring the Xen Domain Server
You must select and install the group of software packages labeled Xen Virtual Machine Host Server on a SUSE Linux Enterprise Server 10 or SP1 server to enable it to function as a Xen domain 0 host.

To install Xen when installing SUSE Linux Enterprise Server 10 or SP1, add Xen Virtual Machine Host Server from the software selections to the server software being installed. If your server is already installed with SUSE Linux Enterprise Server 10 or SP1, launch YaST and open the Software Management module. Under the Filter list choose Patterns and then select the check box for Xen Virtual Machine Host Server.

> Booting into the Xen Kernel
Xen virtual servers can only be launched inside a server running the Xen kernel. Xen architecture is beyond the scope of this article; however, many good articles are available on novell.com that describe it. A server running the Xen kernel is referred to as domain 0 and Xen virtual servers running in domain 0 are referred to as Xen domains.

After installing the Xen packages, ensure that the network is configured properly with a static IP address before rebooting into the Xen kernel. Use the Boot Loader module in YaST to set the Xen kernel image as the default boot entry, so that whenever the server reboots it automatically boots into the Xen kernel; otherwise you won't be able to run Xen virtual servers.

> Creating Xen Virtual Servers
Xen contains many powerful features allowing a variety of complex virtual networking and virtual storage configurations. I'll explain a simple configuration that allows a GroupWise system to be hosted in Xen using either a disk device or iSCSI. Once booted into the Xen kernel, use the YaST modules for Xen virtual machines to create and manage virtual machines.

In SUSE Linux Enterprise Server 10 SP1, these modules are located under the Other page. (see figure 1.) The first time you use the Xen modules, you must install the Xen management tools. Launch the Install Hypervisor and Tools module to do so. Next, use the Create Virtual Machines module to create a virtual server. Use the "I need to install an operating system" option to create a SUSE Linux Enterprise Server 10 server.

The Summary Screen allows you to change default configuration settings, such as the name of the server and the amount of memory and number of processors available to the virtual server. (see figure 2.) The Disks option in the Summary Screen also allows you to specify the location of the virtual hard disk (server image) file and its maximum size. This file contains the virtual server you are installing.

The Network Adapters dialog allows you to specify a MAC address if you want. If you will have a large number of virtual servers on the same network segment, consider manually assigning MAC addresses to avoid conflicts; otherwise, leave the default setting of a randomly generated address.
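If you do assign MAC addresses manually, draw them from the 00:16:3e prefix reserved for Xen guests. A sketch of the resulting vif line in the domain's configuration file (the specific address and bridge name are illustrative):

```shell
# In /etc/xen/vm/<name>: fixed MAC from the Xen-reserved 00:16:3e range
vif = [ 'mac=00:16:3e:01:00:07,bridge=xenbr0' ]
```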

The installation source for the operating system is added in the Operating System Installation options dialog. (see figure 3.) Here you specify either a network URL or a disk image, such as a CD or an ISO image of the installation files. You can add a disk device as the installation source by selecting Virtual Disk and browsing to the CD-ROM or DVD drive, or by specifying the location of an ISO file.

Once this is done, you can start the installation and create the virtual server.

The installation process uses VNC remote administration technology to provide a GUI installation console and experience that is the same as installing SUSE Linux Enterprise Server onto a physical server. You can control and modify installation options as you normally would during a SUSE Linux Enterprise Server install. One minor difference is that the mouse is driven by the VNC process, so sometimes you may experience a double mouse cursor.

 

> Managing and Configuring Xen Virtual Servers
Once the virtual server is created, you can manage it from a graphical console using the YaST Virtual Machine Manager module. (see figure 4.) You can start and stop the virtual server and also modify some basic configuration parameters. Select a virtual machine in the Virtual Machine Manager and click Details to access controls to start, shut down and pause a virtual server.

Additionally, you can dynamically change the memory allocation and number of virtual processors from this control module. (see figure 5.) The Virtual Machine Manager also allows you to open a GUI console to a running virtual server using VNC Remote Administration.

Xen also has powerful command line interface tools to perform all of these basic functions and more advanced functions as well. It is well worth the time to learn these commands. The main command is xm and the most used options include xm create, xm shutdown, xm list, and xm console. Use xm create and xm shutdown to start and stop virtual machines, xm list to display running virtual machines, and xm console to open a text console to a running virtual machine. (See Xen Command Line Tools for details.)

A Xen virtual server consists of two basic parts:

  • a virtual Hard Disk or server image file
  • a text configuration file.

The configuration file contains the parameters Xen needs to launch the virtual server and configure its virtual hardware environment. Additionally, you can modify this file to permit access to disk devices. By default, the server image files are located in /etc/xen/images, which is a symbolic link to /var/lib/xen/images. The configuration files are located in /etc/xen/vm and can be modified manually.

Note that Xen 3.0.4, which comes with SUSE Linux Enterprise Server 10 SP1, stores a copy of the configuration file in the xenstore database. You must import any changes you make manually to a configuration file into the xenstore database, using the command xm delete to remove the original configuration and the command xm new to import the new one.
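A sketch of that re-import workflow, assuming a domain named vm-gw1 whose configuration file lives in /etc/xen/vm (the name and path are illustrative):

```shell
# Run as root in domain 0 after hand-editing /etc/xen/vm/vm-gw1

xm shutdown vm-gw1              # stop the domain cleanly
xm delete vm-gw1                # remove the stale copy from xenstore
xm new -f /etc/xen/vm/vm-gw1    # import the edited configuration
xm start vm-gw1                 # start the domain under the new settings
```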

Back up these files to a separate disk partition for disaster recovery. Don't copy the Xen virtual server image files while the virtual server is running, otherwise it will be in an inconsistent state.
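A minimal backup sketch along those lines, assuming the default image location and a backup partition mounted at /backup (the domain name and backup path are illustrative):

```shell
# Shut the domain down first; never copy a running domain's image
xm shutdown vm-gw1

# Copy the image file, preserving its sparseness, plus its configuration
mkdir -p /backup/vm-gw1
cp --sparse=always /var/lib/xen/images/vm-gw1/disk0 /backup/vm-gw1/
cp /etc/xen/vm/vm-gw1 /backup/vm-gw1/
```

The --sparse=always option keeps the copied image from ballooning to its full allocated size on disk.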

> Configuring Xen for Disk Access
Many servers need access to data storage, as in the case of GroupWise. A virtual machine will need the ability to mount a data partition from an iSCSI partition, a local disk or from a SAN and should not store data in the virtual machine image itself. Access to iSCSI partitions is provided by running an iSCSI initiator in the virtual server and configuring it using YaST. You control access to physical disks using optional parameters configured in the Xen virtual server configuration file.
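For the iSCSI case, the open-iscsi initiator inside the guest can also be driven from the command line instead of YaST. A sketch, with an illustrative target IP address and IQN:

```shell
# Inside the virtual server: discover targets on the iSCSI server
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to the discovered target (IQN and IP are illustrative)
iscsiadm -m node -T iqn.2007-01.com.example:gwdata -p 192.168.1.50 --login
```

After login, the target's LUN appears to the guest as an ordinary SCSI disk that can be partitioned and mounted.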

Virtual servers can only access physical disk devices if they are configured to do so. To grant access to a physical disk device, modify the disk parameter in the Xen configuration file. For example, the default disk entry created by Xen might look like this:

disk = [ 'file:/var/lib/images/vm-gw1/disk0,xvda,w' ]

Adding phy:/dev/sda1,sdb1,w to this parameter passes sda1 through to the Xen virtual server as sdb1:

disk = [ 'file:/var/lib/images/vm-gw1/disk0,xvda,w','phy:/dev/sda1,sdb1,w' ]

If you are using a SAN, there is a problem with using typical device names such as /dev/sda, /dev/sdb, etc. There is no uniform way to ensure that a disk device is presented with the same name on a different physical server accessing the same SAN. For example, a SAN disk device might be named /dev/sdb on one server and /dev/sdc on another.

To work around this problem, reference the disk device by its device ID. Linux maintains a permanent identifier for each disk device and maps it to a device name; these IDs are listed as symbolic links in the /dev/disk/by-id/ directory. An example follows:

lrwxrwxrwx 1 root root  9 2007-01-03 15:26 scsi-3600805f300007f40a510b029e4dc000e -> ../../sda
lrwxrwxrwx 1 root root 10 2007-01-03 16:44 scsi-3600805f300007f40a510b029e4dc000e-part1 -> ../../sda1
The disk partition ID can be used in the Xen virtual machine configuration file to give the virtual machine access to the disk. Modify the configuration file as shown in the following examples: the first grants access to an entire LUN as a whole disk, the second passes a single partition of a LUN. Normally, a whole LUN is passed.

# whole LUN
disk = [ 'file:/var/lib/images/vm/disk0,xvda,w',
         'phy:/dev/disk/by-id/scsi-3600805f300007f40a510b029e4dc000e,sdb,w' ]

# partition of a LUN
disk = [ 'file:/var/lib/images/vm/disk0,xvda,w',
         'phy:/dev/disk/by-id/scsi-3600805f300007f40a510b029e4dc000e-part1,sdb1,w' ]

Once the physical device is accessible from the virtual machine, it can be formatted and mounted inside the virtual server as if it were physically present on the server.
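Continuing the earlier example, where sda1 appears inside the guest as sdb1, a sketch of formatting and mounting it for GroupWise data (the file system choice and mount point are illustrative):

```shell
# Inside the virtual server, run as root
mkfs.ext3 /dev/sdb1                 # one-time format; destroys existing data
mkdir -p /mnt/gwdata
mount /dev/sdb1 /mnt/gwdata

# Mount automatically at boot
echo '/dev/sdb1 /mnt/gwdata ext3 defaults 1 2' >> /etc/fstab
```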

> Now What?
Once you have created a virtual server and have it running, you can connect to it via SSH and manage and configure it as you would any Linux server. You can configure iSCSI to connect to data storage for a GroupWise post office, or you can optionally configure Xen to mount a physical disk device on a SAN or on local disks as described previously. You can then run the GroupWise Server Migration Utility to migrate a GroupWise post office or domain to the virtual server.

You can also deploy Xen in a high availability solution using shared storage and clustering software. As I said, this will be addressed in a future article. So stay tuned.

Xen Command Line Tools

Like most other Linux applications, Xen has a powerful command line interface. For a comprehensive list, use the man page, of course! The most useful commands are as follows:

xm list
This returns a list of running domains with information about their state and resource utilization.

Name        ID   Mem  VCPUs  State    Time(s)
Domain-0     0   473      1  r-----    5158.0
cm-4         4   512      1  -b----      53.9

The name of the domain is specified in the Xen configuration file, as well as memory and number of Virtual CPUs (VCPUs). Never configure a virtual server with more VCPUs than actual physical CPUs.
xm create -c <config file>
This command creates or starts a Xen virtual server and opens a text console to the server. (The -c option opens the console.) To exit the console window, press the CTRL + ] keys.
xm console <domain name>
This command opens a text console to the Xen virtual server, similar to the -c option when starting the domain.
xm shutdown <domain name>
This issues a shutdown command to the virtual server.
xm destroy <domain name>
This command terminates a domain immediately and is useful if a virtual server stops responding for some reason.
xm top
This provides resource monitoring in real time for running domains and is useful for viewing CPU utilization and other statistics.


© 2014 Novell