Say goodbye to underutilized hardware platforms, server sprawl and spiraling IT budgets. New virtualization technologies allow you to deploy a few highly scalable, highly reliable, enterprise-class servers to do the same work that used to require a roomful of servers, each running an individual application and sitting idle whenever the application wasn't busy. Virtualization keeps your expensive hardware busy running whatever application needs resources at any given time, so you can do more with less–less hardware to buy, provision and maintain.
And server consolidation is just the beginning. Today's virtualization technologies enable a wide range of usage scenarios, providing benefits such as:
- Rapid application validation and deployment. IT managers can test and qualify software stacks in an isolated "sandbox" that's running in the same environment as the production workload. This enables them to accurately assess the impact on IT resources and network bandwidth, and to roll the validated software out in a matter of minutes or hours–rather than days or weeks. Different target systems can be tested in rapid succession, and since the virtual environment precisely re-creates the target environment, the validated software runs as expected–with no surprises–when it's rolled out to the target machines.
- Application portability. Virtualization can be used to run multiple operating systems on a single workstation. With operating systems abstracted from the hardware layer, the virtualized resources in one environment can be completely insulated from changes or interference caused by activities in the other. This creates an ideal platform for porting an application from one OS to another without maintaining two separate systems or risking corruption of one environment by processes running in the other. In a similar fashion, a complete virtualized environment can even be created on a portable drive, such as a USB drive, to enable users to instantly re-create their personal desktop on different machines–at the office, at home or on the road.
- Dynamic load balancing. Virtual machines can migrate freely between physical servers in a cluster to enable balancing of CPU
utilization, cooling, power consumption, I/O, memory allocation and more. This can be done with emerging technology from Novell that uses policy-based tools to monitor utilization and allow migration. As an example, an online retailer could use dynamic load balancing to shift computing resources to the order
processing and fulfillment systems during the peak holiday season–handling more customers and giving them a more responsive experience without deploying new hardware.
- Failover with minimal service interruption. In the event of a server failure, a virtual machine can quickly be moved to another box to make the OS and applications available again with little or no impact on users. It's even possible to host virtual machines on clustered servers with shared and redundant storage to eliminate all possible single points of failure and achieve near-instantaneous failover.
- Extended life for legacy operating systems and hardware. Virtualization affords a new level of flexibility, allowing you to extend the life of outdated hardware and software. Because you can decouple operating systems from the underlying hardware, you can rehost legacy operating systems and applications on a virtual machine running on the latest available hardware. You can take advantage of the performance advantages of today's hardware, without worrying about OS and hardware compatibility issues. And you can extend the life of your old hardware by redeploying it for another, more suitable purpose.
- Simplified physical infrastructure in a heterogeneous software environment. With Linux, Windows and UNIX, 32-bit and 64-bit operating systems all providing essential business services, virtualization can allow multiple operating systems to reside side-by-side on the same physical servers. The result is higher utilization of hardware resources, and a physical environment that requires less space, power and cooling, and is much easier to manage.
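The dynamic load-balancing scenario above can be illustrated with a toy rebalancer: when a host's utilization crosses a threshold, a policy migrates VMs to the least-loaded host. This is a minimal conceptual sketch; the class names, thresholds and "CPU demand" units are invented for illustration and do not reflect Novell's actual policy-based tools.

```python
# Toy model of policy-based VM load balancing: while any host exceeds a
# utilization threshold, migrate its smallest VM to the least-loaded host.
# All names and numbers are illustrative, not any vendor's actual API.

class Host:
    def __init__(self, name, capacity=100):
        self.name = name
        self.capacity = capacity
        self.vms = {}          # vm name -> CPU demand (arbitrary units)

    def load(self):
        return sum(self.vms.values())

    def utilization(self):
        return self.load() / self.capacity

def rebalance(hosts, threshold=0.8):
    """Migrate one VM at a time off any host above `threshold`."""
    migrations = []
    changed = True
    while changed:
        changed = False
        hot = max(hosts, key=Host.utilization)
        cool = min(hosts, key=Host.utilization)
        if hot.utilization() > threshold and hot is not cool:
            # move the smallest VM first to minimize migration cost
            vm = min(hot.vms, key=hot.vms.get)
            cool.vms[vm] = hot.vms.pop(vm)
            migrations.append((vm, hot.name, cool.name))
            changed = True
    return migrations

a, b = Host("node-a"), Host("node-b")
a.vms = {"web": 50, "orders": 40, "reports": 10}   # node-a at 100%
moves = rebalance([a, b])
print(moves)   # reports and orders shift to node-b
```

A real implementation would of course weigh memory, I/O and power as the article describes, not just a single CPU number, and would trigger live migrations rather than dictionary moves.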
And with new virtualization technologies rapidly emerging, new usage models will arise in addition to these–transforming the entire IT enterprise. Or, to put it more appropriately, virtually the entire enterprise.
> An Old Idea Made Better–A Lot Better
Virtualization is not a new concept. Virtualization techniques were commonly used throughout the 1960s and 1970s to boost performance for shared mainframe systems; however, as microprocessors became ever more powerful and affordable in the 1980s and 1990s, PC servers replaced mainframe and minicomputer systems. The virtualization concept was forgotten as it became practical and affordable to simply deploy a new server whenever IT wanted to deploy a new application or boost the performance of an existing one.
Moreover, as x86-based servers became more affordable and ubiquitous, departments and workgroups became accustomed to "owning" their own servers. When a department needed a new application deployed, they would simply request an additional server and IT managers would oblige. Adding servers became the easy way to manage growth and change, and even today this is still the default choice for many IT managers. But the result for many companies has been an alarming increase in server sprawl, which has grown progressively worse from the 1990s right up to the present day. As a result, many enterprise servers are utilized at an appallingly low rate–as little as 15 percent of capacity–even while data centers are stuffed to the rafters with power-hungry servers. The cost to maintain these large data centers has continued to increase, consuming resources at every level: financial, operational and administrative.
As the costs continue to rise for housing, powering, cooling and maintaining all these servers, many enterprises today are looking to transition to a usage-oriented computing model–that is, an environment where computing resources can be dynamically reassigned to accommodate changing demands. This model requires a holistic approach to infrastructure, with coordinated technologies in both silicon and software that facilitate a dynamic match of resource supply with resource demand.
Virtualization is the keystone in this new usage-oriented computing model. Far beyond the performance gains of first-generation virtualization technologies on mainframes, today's existing and emerging server virtualization technologies promise to bring unprecedented flexibility to the enterprise data center, enabling new levels of cost-efficiency and responsiveness for a wide variety of new usage models.
In short, it's not your father's virtualization. It's way cooler than that–and Novell is helping make it happen, with the Xen open source virtual machine monitor (VMM) integrated within SUSE Linux Enterprise 10.
> Intel Virtualization Technology: Supporting the Next Generation
But before I start tooting the Novell horn, I need to give credit to Intel for opening up a new world of virtualization possibilities with Intel Virtualization Technology.
The model of virtualization that emerged in the late 1990s, driven by VMware and other pioneers of software virtualization, implemented a VMM (also known as a "hypervisor") as a middle layer between the underlying physical server and the multiple operating systems and applications sharing the hardware. In this software-only, "full virtualization" model, the VMM must avoid conflicts by maintaining control of critical platform resources, and handing off control to each guest operating system as appropriate. This requires binary translation–a complex and compute-intensive process of transforming guest OS binaries to handle virtualization-sensitive operations.
A popular technique for increasing virtualization performance is "paravirtualization," in which source-level modifications of guest operating systems are made to create an interface that is easier to virtualize; however, these guest OS modifications require countless hours of work from software developers, integrators and IT managers. In the paravirtualized model, the VMM simply can't run a proprietary or unmodified OS as a guest without encountering conflicts. All possible conflicts must be meticulously programmed out of the equation.
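The contrast between the two software approaches described above can be sketched with a toy model: in full virtualization the VMM must intercept every virtualization-sensitive operation the unmodified guest attempts, while a paravirtualized guest has been rewritten to call the hypervisor directly. This is purely a conceptual sketch; the class names and operations are invented for illustration.

```python
# Toy contrast of full virtualization (trap every sensitive operation)
# versus paravirtualization (explicit, batched hypercalls from a
# modified guest). Illustrative only; real VMMs are far more involved.

class VMM:
    def __init__(self):
        self.traps = 0        # costly intercepted instructions
        self.hypercalls = 0   # explicit guest-to-VMM requests

    def trap(self, op):
        # full virtualization: binary translation / trap-and-emulate
        self.traps += 1
        return f"emulated {op}"

    def hypercall(self, op):
        # paravirtualization: guest source was modified to ask directly
        self.hypercalls += 1
        return f"handled {op}"

class UnmodifiedGuest:
    def run(self, vmm):
        # every sensitive operation must be caught by the VMM
        for op in ("set_page_table", "disable_interrupts", "io_out"):
            vmm.trap(op)

class ParavirtualizedGuest:
    def run(self, vmm):
        # one explicit, cheap request replaces many traps
        vmm.hypercall("batched_page_table_updates")

vmm = VMM()
UnmodifiedGuest().run(vmm)
ParavirtualizedGuest().run(vmm)
print(vmm.traps, vmm.hypercalls)   # prints "3 1"
```

The trap counter stands in for the runtime cost of binary translation, and the hypercall counter for the one-time engineering cost of modifying the guest OS–which is exactly the trade-off the hardware support described next eliminates.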
Intel has addressed these issues, enabling Xen and other VMMs to support a wide range of legacy, future and proprietary operating systems without modification. Intel Virtualization Technology is integrated into all of Intel's latest generation of processors, from 64-bit Intel Xeon MP and DP processors to Intel Itanium 2 and even the Intel desktop (Intel Core 2 Duo) and mobile (Intel Centrino Duo) platforms. The technology reduces the required size and complexity of the VMM, improving efficiency and security while allowing the VMM to support unmodified guest operating systems with minimal potential for software conflicts.
Intel Virtualization Technology provides:
- A new, higher privilege layer for the VMM: Guest operating systems and applications can run unmodified in the rings they were designed for, while the VMM retains privileged control over platform resources.
- Hardware-based transitions: Hand-offs between the VMM and each guest OS are supported in hardware, reducing the need for complex, compute-intensive software transitions. By maintaining logical isolation of virtual partitions, Intel Virtualization Technology also avoids conflicts and strengthens security.
- Hardware-based memory protection: Processor state information is retained for the VMM and for each guest OS in dedicated address spaces. This helps to accelerate transitions and ensure the integrity of each process.
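The hardware-based transitions and memory protection above amount to each guest owning a dedicated state area that is saved on every hand-off to the VMM and restored when the guest resumes, so no guest ever sees another's processor state. The sketch below is a rough conceptual model of that bookkeeping, not Intel's actual VMX interface; all names are invented.

```python
# Rough model of hardware-assisted world switches: each guest has a
# dedicated control-structure area (loosely analogous to Intel's
# per-guest state areas) where its processor state is saved on VM exit
# and restored on VM entry. Conceptual only; real transitions happen
# in hardware, not in VMM software.

class GuestContext:
    def __init__(self, name):
        self.name = name
        self.saved_state = {"ip": 0, "regs": {}}

class SimpleVMM:
    def __init__(self, guests):
        self.contexts = {g: GuestContext(g) for g in guests}
        self.running = None

    def vm_entry(self, guest):
        # restore the guest's state from its dedicated area
        self.running = guest
        return dict(self.contexts[guest].saved_state)

    def vm_exit(self, state):
        # save state back into the running guest's area; control
        # returns to the VMM's own, more privileged context
        self.contexts[self.running].saved_state = state
        self.running = None

vmm = SimpleVMM(["linux", "windows"])
state = vmm.vm_entry("linux")
state["ip"] = 42                  # the Linux guest runs for a while
vmm.vm_exit(state)
other = vmm.vm_entry("windows")   # sees its own state, not Linux's
print(other["ip"])                # prints "0"
```

Because the save/restore is done by the processor in dedicated address spaces, the VMM code for these transitions shrinks–which is the size, complexity and security benefit the article describes.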
These innovations reduce cost and risk by significantly improving interoperability with unmodified guest operating systems. And with the solid foundation of Intel Virtualization Technology in place, SUSE Linux Enterprise 10 with the integrated Xen VMM is ready to change the look–and more important, the operation–of the data center.
> SUSE Linux Enterprise 10 and Xen Virtualization: Remodeling the Data Center
SUSE Linux Enterprise 10 offers virtualization capabilities like no other OS. It can provision, deprovision, install, monitor, and manage multiple guest operating systems. It provides the out-of-the-box ability to create Xen virtual machines running modified, highly tuned, paravirtualized guest operating systems for optimal performance. And on Intel Virtualization Technology-enabled servers, it allows you to fully virtualize legacy, proprietary and even future operating systems–including different flavors of Linux, UNIX and Windows. It even provides the flexibility to run several VMs on a single physical server either in isolation or in a virtual network that improves performance by replacing physical network transfers with much faster in-memory transfers.
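Xen domains of that era were typically described by small Python-syntax configuration files read by the xm toolstack. The fragment below is a hypothetical example of what a paravirtualized guest definition might look like; all names and paths are invented for illustration, not taken from an actual SUSE Linux Enterprise 10 installation.

```
# Hypothetical Xen 3.x domain configuration (Python syntax, as read by
# the xm toolstack); names and paths are illustrative only.
name    = "sles10-guest"
memory  = 512                       # MB of RAM for the guest
vcpus   = 1
kernel  = "/boot/vmlinuz-xen"       # paravirtualized guest kernel
ramdisk = "/boot/initrd-xen"
disk    = [ 'file:/var/lib/xen/images/sles10-guest/disk0,xvda,w' ]
vif     = [ 'bridge=xenbr0' ]       # attach to the Xen network bridge
root    = "/dev/xvda1"
```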
In addition to virtualization capabilities, SUSE Linux Enterprise 10 supports the Oracle Cluster File System (OCFS), and provides outstanding support for clustering. And with the inclusion of Xen, YaST, CIM-based monitoring tools and other built-in, standards-based management solutions, SUSE Linux Enterprise 10 allows the resources in a server cluster to be pooled, allocated and utilized like never before. In effect, VM management becomes synonymous with workload management. The data center becomes an asset manager that is aware of all physical and virtual servers in the environment and their characteristics, and this information is acted upon in real time to allocate resources as appropriately and efficiently as possible.
With platforms featuring Intel Virtualization Technology and running SUSE Linux Enterprise 10, you can configure a clustered environment that incorporates shared storage management with no single point of failure. This design enables high availability for VM hosting, as all VM operating system image files reside in a central location that can be accessed by any server in the cluster. If a physical server should fail, the VMs it hosts can be rapidly failed over to another server. And with future support for live VM state migration–that is, a real-time transfer of live operating system state from one physical server to another–a failover will involve virtually no server downtime. Applications will continue to operate uninterrupted, with little or no noticeable effect on the end user experience.
One of the most important reasons Novell chose to integrate the Xen VMM into SUSE Linux Enterprise 10 is performance. Xen originated as a hypervisor for guest operating systems that have been modified for virtualization. This paravirtualized approach differs from full virtualization products from VMware, Microsoft, Parallels and others, which do not require modification of the OS but do impose a significant performance burden. With Xen, several virtual machines, each containing a different modified OS, can run on a single physical system with performance comparable to native code.
As we noted earlier, one of the drawbacks of paravirtualization solutions has always been the work required to modify operating systems to run concurrently without conflict. But now, with the latest generation of Intel processors, the paravirtualization drawback is a thing of the past. Intel Virtualization Technology extends and optimizes the Xen VMM, allowing it to run unmodified operating systems. The combination of Xen and Intel Virtualization Technology allows you to virtualize legacy operating systems, Windows and other unmodified operating systems on top of SUSE Linux Enterprise 10 without the performance hit of full virtualization solutions. You get the best of both worlds–full virtualization and paravirtualization–without the penalties of either.
And you get it all, standard, with SUSE Linux Enterprise 10–the first Linux distribution to be specifically integrated with and tuned for Xen.