Novell Open Enterprise Server 2 can run in either a physical or a virtual environment. Part of the Open Enterprise Server 2 product is SUSE Linux Enterprise Server 10 Support Pack 1, which includes Xen virtualization. This article gives you information and guidance about virtualization in Open Enterprise Server 2, so you can deploy a virtualized environment with confidence.

The subject of virtual machine technology is very broad, so we'll focus on the benefits of a virtualized environment for Open Enterprise Server 2 using either Linux or NetWare.

> Virtual Machine technology overview
Virtual machine technology is not a new concept and has been available in the market for a number of years. First available on mainframe systems, it gradually made its way onto Personal Computers.

Virtualization has now made the transition into mainstream network operating systems such as Linux and Windows and is making significant strides into the data center. In Open Enterprise Server 2, the virtual machine technology is provided via the XenSource Xen 3.0 project integrated into SUSE Linux Enterprise Server 10.

Like most virtual machine technology, Xen 3.0 comprises a host (also known as Domain 0, or dom0) and multiple guests (also known as Domain U, or domU). The host governs partitioning of the physical server and assigns the resources requested by each guest. When a guest machine is started and its operating system booted (NetWare, for example), the host assigns physical resources to the guest.
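Xen 3.0 guest definitions are plain Python-syntax configuration files. As a minimal sketch of what a guest definition looks like (the name, paths and sizes here are illustrative assumptions, not taken from a real deployment):

```python
# /etc/xen/vm/oes2-guest -- illustrative Xen 3.0 guest configuration.
# The keys are standard Xen config options; every value is an assumption.
name   = "oes2-guest"    # domU name as shown by the management tools
memory = 1024            # MB of RAM dedicated to this guest
vcpus  = 2               # virtual CPUs assigned by the host
disk   = ["file:/var/lib/xen/images/oes2-guest/disk0,xvda,w"]
vif    = [""]            # one interface on the default network bridge
```

The host reads this file when the guest is created and allocates the listed resources to the new domain.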

Some resources (such as memory) are dedicated to the guest and cannot be shared between guest operating systems. Other resources, such as Network Interface Cards, can be shared between guest operating systems. Or if more than one physical resource is available, they can be dedicated to a specific guest. As more guest operating systems are started, more physical resources are assigned by the host as requested and required.

Xen 3.0 has an advantage over many of its counterparts as it incorporates a “hypervisor” as part of the core virtualization layer. The hypervisor allows a modified guest operating system (such as NetWare or SUSE Linux Enterprise Server) to more efficiently interact with the physical server. This is known as “para-virtualization.” Nonmodified guest operating systems (such as Windows) do not receive this benefit and operate in what is known as “full” virtualization mode.

Despite the efficiencies gained by the Xen hypervisor technology, there is still a performance “tax” that must be paid in a virtual environment. Virtual machines that share a network card will be unable to use the full bandwidth available. Their overall throughput is governed by how many virtual machines are using the card at any one time and the relative network traffic that is generated. The same is also true of CPU resources. Disk throughput likewise depends on whether or not the server is running solely from a virtual disk image, and on how much overall memory the physical server has available.
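As a rough illustration of that bandwidth-sharing tax (the figures are assumptions for the sketch, not benchmarks), the ceiling on each guest's share of one card can be estimated as:

```python
def per_guest_bandwidth(link_mbps, active_guests):
    """Rough upper bound on bandwidth per guest sharing one NIC.

    Real Xen scheduling depends on actual traffic patterns, so this
    even split is only a back-of-the-envelope estimate.
    """
    if active_guests < 1:
        raise ValueError("need at least one active guest")
    return link_mbps / active_guests

# Four busy guests bridged to a single gigabit card:
print(per_guest_bandwidth(1000, 4))  # → 250.0
```

The estimate is deliberately pessimistic: idle guests generate little traffic, so a lightly loaded card behaves much closer to its full rated speed.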

For more information about the Xen virtualization technology in Novell Open Enterprise Server and SUSE Linux Enterprise Server, see the Novell product documentation. For more information about the Xen hypervisor in general, see the Xen project documentation.

> When to Virtualize?
Of course, before embarking on any kind of change to an IT infrastructure, research is highly important. One of the most important things to determine is timing. When does it make the most sense for you to deploy a virtualization solution? If any of the scenarios listed in When is it Time to Virtualize? apply to your environment, there are good reasons for working on a virtualization strategy now.

When Is It Time to Virtualize?
If any of the following scenarios apply to any of your IT environments, you should consider a virtualization strategy. It might significantly decrease your costs if

  • Space is at a premium in a data center and more servers need to be deployed.

  • The overall power and cooling costs are running very high for a data center and need to be reduced.

  • A number of servers are reaching the end of their life and need to be moved to a data center.

  • Rolling out a new server operating system takes a long time from test to production.

  • Server maintenance costs are high and need to be reduced.

When it comes to saving power, space and cooling in a data center, virtualization can provide significant savings. Consider a corporation that has eight physical servers deployed in a data center—four acting as a primary service and four as a backup service. The four backup servers could easily be moved to a virtualized environment on a single physical server. Clearly, the amount of power and space saved could be quite significant, and this is not the end of the savings. The rack space freed by consolidating on a four-to-one basis can allow for the installation of a new SAN or blade rack.

> When Not to Virtualize
When laying out a virtualization strategy, an often overlooked aspect is “what should I not virtualize?”
Consider the following examples:

  • A new generation multicore server is being installed to increase the performance of a database.
  • Current megabit switches are no longer fast enough to handle the network traffic to a server.
  • More storage needs to be purchased as a new application is generating files hundreds of megabytes in size.

The above examples may not be ideal candidates for moving to a virtual machine, particularly because the utilization patterns may be unpredictable and include periods when the server is running at or near maximum capacity.

While these types of servers can be virtualized, the differences in performance and/or scalability between the physical and virtual world means they will probably not perform as well in a virtual server.

> Workload Analysis
To determine a good candidate for virtualization, analyze the actual workload of the server. For a list of factors to help you evaluate the server you’re considering for virtualization, see Factoring in Factors.

The last metric in that list, disk channel utilization, is a composite of data points including the number of reads and writes performed, overall data throughput, the number of files read from and written to the disk, and so on.

Ideally, any analysis should be run over a period of several days or weeks and the data averaged over that period. When averaging the data, it’s useful to show not only the data points collected, but also the times of day the data has been collected.

During data analysis, make sure the statistics for each server are kept separate from the other servers. If you blend the results, it will be difficult to determine which of your servers make good candidates.
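A minimal sketch of that kind of per-server, per-hour averaging (the tuple layout and server names are assumptions for the example):

```python
from collections import defaultdict

def hourly_averages(samples):
    """Average CPU utilization per (server, hour-of-day) pair.

    `samples` is an iterable of (server, hour, cpu_percent) tuples.
    Keeping the server name in the key stops results from different
    servers blending together, which would hide the good candidates.
    """
    totals = defaultdict(lambda: [0.0, 0])
    for server, hour, cpu in samples:
        bucket = totals[(server, hour)]
        bucket[0] += cpu
        bucket[1] += 1
    return {key: total / count for key, (total, count) in totals.items()}

data = [("fs1", 9, 4.0), ("fs1", 9, 6.0), ("fs2", 9, 80.0)]
averages = hourly_averages(data)
print(averages)  # fs1 averages 5.0 at 09:00; fs2 stays separate at 80.0
```

Bucketing by hour of day preserves the time-of-collection detail the averages would otherwise flatten out, so daytime peaks stay visible.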

Factoring in Factors
When you’re deciding which servers to virtualize, analyze the network traffic to see which servers will make the best candidates. Use the following statistics to help you decide:

  • Average CPU utilization (as a percentage of available CPU resources)

  • Maximum CPU utilization (including peak duration)

  • Average Network utilization (as a percentage of available bandwidth)

  • Maximum network utilization (including peak duration)

  • Average memory utilization (as a percentage of available memory)

  • Peak memory utilization (including peak duration)

  • Disk channel utilization

> Choosing the right workload
By the time you’ve finished analyzing your workloads, you will have a solid set of data to work with and a great understanding of exactly what your servers are doing and when. Now all you have to do is make sense of the information, so you can start virtualizing some of your workloads.

The easiest way to start is by excluding those servers that are showing high utilization in one or more of the observed subsystems. A server that consistently shows high CPU utilization or frequent peaks could be discounted, especially if these peaks fall during a working day. Peaks that coincide with backup, antivirus or other types of housekeeping still have to be considered.

A server that averages 2–7 percent utilization of the CPU with an occasional peak would work out well in a Virtual Machine. Of course, if you are migrating from an older system running, for instance, a single-core CPU at around 1.8–2.0 GHz to a newer dual-core or multicore system running at 2.2 GHz or higher, the CPU percentages are somewhat skewed. The new system will likely be able to handle several Virtual Machines running low utilization workloads.
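Using the rules of thumb above (the 2–7 percent band and the daytime-peak exclusion come from the analysis here; the function shape is an assumption), a first-pass screen might look like:

```python
def is_candidate(avg_cpu_percent, peaks_in_work_hours):
    """First-pass screen for virtualization candidates.

    Servers averaging a few percent CPU with only occasional peaks
    outside the working day are the easy wins; consistently busy
    servers or daytime peaks drop a server from this first pass.
    """
    if avg_cpu_percent > 7.0:
        return False   # consistently busy: leave it physical for now
    if peaks_in_work_hours:
        return False   # daytime peaks need closer analysis first
    return True

print(is_candidate(4.5, False))   # → True
print(is_candidate(25.0, True))   # → False
```

Servers rejected here aren't ruled out forever; they simply need the closer look at peak timing and duration described above before being moved.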

If CPU utilization is low, check the network bandwidth to see how it is being used. If your plan is to bridge all of your Virtual Machines to a single Network Interface Card, you will see an overall decrease in available bandwidth to each Virtual Machine. Similarly, if you see that a server is handling a fairly high load over the card, you would do well to consider leaving it as a physical server.

An often overlooked aspect of virtualization is the amount of memory required. Just because you have moved a server to a Virtual Machine it does not mean it needs less memory. The physical host server not only needs enough memory to run all of the Virtual Machines when they are loaded, but it also needs memory for the host operating system. The amount of memory reserved for the host will vary depending on the number of guests that will be running and if you intend to run more than just the Xen hypervisor.
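As a sketch of that sizing arithmetic (the 512 MB dom0 reserve is an assumed starting point to adjust for your own host, not a Xen rule):

```python
def host_memory_needed(guest_mem_mb, dom0_reserve_mb=512):
    """Physical RAM needed to run every guest plus the host itself.

    Guest memory is dedicated, not shared, so the host must hold the
    sum of all guest allocations on top of its own reserve. Increase
    the reserve if dom0 runs more than the hypervisor and its tools.
    """
    return sum(guest_mem_mb) + dom0_reserve_mb

# Four 1 GB guests on one host:
print(host_memory_needed([1024, 1024, 1024, 1024]))  # → 4608
```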

The final element to consider when choosing a workload is the utilization of the disk subsystem. Virtual Machines can use a physical disk or a virtual disk file. Physical disks give, by far, the best performance, but are also the least convenient to configure and maintain when you are hosting multiple Virtual Machines. Virtual Disk files are the most convenient; however, they will be slower.
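In Xen 3.0 configuration terms, the two options differ only in the prefix on the guest's disk line (device names and paths here are illustrative):

```python
# Physical disk or partition passed through to the guest (fastest):
disk = ["phy:/dev/sdb1,xvda,w"]

# Loopback-mounted virtual disk file (most convenient, but slower):
disk = ["file:/var/lib/xen/images/oes2-guest/disk0,xvda,w"]
```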

Clearly, any disks attached over iSCSI or remote mount points will work exactly as they do in a physical environment, but pay particular attention to the utilization of the network interface card. If you have a very busy iSCSI connection, consider some optimizations to your Virtual Machine.

> Scalability considerations
Despite the various reasons for declaring that some workloads do not lend themselves to being virtualized, buying the right server configuration and tuning your Virtual Machine can pay dividends.

One of the simplest things you can do to ensure good performance is to buy a server that has an AMD-V or Intel VT-capable CPU. Xen is designed to take full advantage of the very latest CPUs from both Intel and AMD, and you would be well advised to invest in a server featuring these new CPUs.

If you are running multiple Virtual Machines and are at all concerned about network performance, then buy more network interface cards. The Virtual Machine configuration allows you to dedicate a network card to a single Virtual Machine. This can be of particular use when running an iSCSI connection, because it allows a dedicated card to carry network traffic to a SAN or NAS device.

> Summary
With ever-increasing demands on power and cooling, and the sheer cost of running the modern data center, deciding to adopt a virtualization strategy is an astute move for most IT administrators today. Understanding how virtualization works and the trade-offs inherent in any type of virtualization environment are vital prior to purchasing equipment or moving the first server into a Virtual Machine.

© 2015 Micro Focus