Why Virtualize

Cutting through the hype to understand how virtualization can truly transform your data center

At Novell, we depend on virtualization every day to make our own data centers run more effectively and return more value to our business. That’s why we’re running a four-part series in Novell Connection describing some of the virtualization techniques we use for data center automation—techniques you can use to achieve the same benefits we’ve enjoyed.

This first article in the series discusses some of the decisions you need to make and the processes you need to implement to build an effective virtualized environment. We also give you an overview of the solutions Novell uses to implement those decisions and processes. Then, in each of the next three issues of Novell Connection, we’ll take a deep dive into those solutions, showing you how to put them to work to address specific needs.

Virtualization and the “Fluid” Data Center

Why virtualize? You’ve probably heard all the standard reasons: maximizing your server investment; lowering server refresh costs; and reducing the cooling, electricity and floor space required by the data center.

With a typical enterprise server running at perhaps 15 percent resource utilization, virtualizing and consolidating multiple services on one physical server can yield up to a seven-fold increase in efficiency in all these areas.
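
As a rough back-of-the-envelope check on that figure, the sketch below works through the consolidation arithmetic. The utilization and headroom numbers are illustrative assumptions, not measurements from any particular environment.

    # Back-of-the-envelope consolidation math. Both figures below are
    # illustrative assumptions, not measurements.
    avg_utilization = 0.15      # typical standalone server utilization (~15%)
    target_utilization = 0.80   # headroom we choose to leave on the consolidated host

    consolidation_ratio = target_utilization / avg_utilization
    print(f"Roughly {consolidation_ratio:.1f} workloads per physical server")
    # Prints roughly 5.3; pushing the target toward full utilization is
    # where the "up to seven-fold" figure comes from.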

At Novell, our virtualization initiatives support these straightforward goals. But we also use virtualization to achieve a more fundamental benefit: a “fluid” data center that automates our ability to devote computing resources and network bandwidth precisely to the tasks where they are needed at any given time.

For example, here are a few of the ways we use virtualization to keep data flowing as business and technical conditions change from day to day:

  • Hardware upgrades. When we need to upgrade the hardware on a given server, our virtualized “fluid” data center allows us to move services over to a virtual machine, do the upgrade, and move the services back—all in a matter of minutes and with no disruption to the users who depend on availability of the services.
  • Capacity optimization. When a service is maxing out CPU, memory or storage capacity on one physical server, we can move the workload to a more capable server on the fly—without worrying about compatibility issues (see the migration sketch after this list). Conversely, if a service is using only a fraction of physical capacity, we can place additional services on that box.
  • Image deployment. Once we’ve built a virtual machine to run a configured and tuned operating system and application, we can save it as an image and then deploy it to other boxes without redoing the configuration work for each different system.
  • Protection of legacy investments. There might be times when software or hardware on which we rely is no longer supported or available. We can preserve the value of legacy systems indefinitely by creating an image of the complete legacy environment and running it as a virtual machine on a current platform.
  • Business process management. We use virtualization to help manage cyclical business processes—moving processes on the fly, for example, to accommodate the strain on financial applications at the end of each fiscal quarter. We can even do this automatically by policy.
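
To make these on-the-fly moves concrete, here is a minimal live-migration sketch using the open-source libvirt Python bindings against two Xen hosts. The host URIs and the guest name are hypothetical examples, and this is a generic illustration rather than the PlateSpin or ZENworks tooling discussed later in this article.

    # Minimal live-migration sketch using the libvirt Python bindings.
    # The host URIs and guest name ("finance-app") are hypothetical examples.
    import libvirt

    # Connect to the source and destination Xen hosts.
    src = libvirt.open("xen+ssh://host-a/")
    dst = libvirt.open("xen+ssh://host-b/")

    # Look up the running guest we want to move.
    dom = src.lookupByName("finance-app")

    # Migrate the guest live so users see no interruption.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    src.close()
    dst.close()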

These are just a few of the virtualization scenarios that work for us, and they’re tactics you can apply in your own data center automation projects. But they can’t be applied generically: it’s not just a matter of piling workloads onto a server until it nears full capacity.

Instead, you need to base tactics on a three-pronged strategy. First, you need the ability to discover your technical assets and provide visibility into their operating behavior. Second, you need an efficient way to create virtual machines that work on your choice of physical systems. And third, you need a way to manage and orchestrate physical resources and virtual machines in continuous adaptation to your changing business and technical requirements. Here’s a high-level overview of the solutions we use at Novell to realize this strategy.

Discover

Before you can virtualize effectively, you need to know what you have. That includes the operating systems, applications and services that may be candidates for virtualization, as well as the hardware that might be best suited for hosting virtualized environments. But just knowing what you have isn’t enough. You also need to know how everything behaves—including resource utilization and trends over time, both within each system and in comparison with other systems.
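
To make the discovery idea concrete, the sketch below samples the kind of raw utilization data such a tool collects on a single host. It uses the open-source psutil library as a stand-in; it is a generic illustration, not PowerRecon, and the sampling interval is an arbitrary assumption.

    # Generic utilization-sampling sketch (not PowerRecon): record CPU and
    # memory use over time so trends can be compared across machines.
    import time
    import psutil

    samples = []
    for _ in range(5):                       # five samples, one minute apart
        samples.append({
            "timestamp": time.time(),
            "cpu_percent": psutil.cpu_percent(interval=1),
            "mem_percent": psutil.virtual_memory().percent,
        })
        time.sleep(60)

    avg_cpu = sum(s["cpu_percent"] for s in samples) / len(samples)
    print(f"Average CPU utilization: {avg_cpu:.1f}%")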

At Novell, we use PlateSpin PowerRecon to provide complete and precise details on our available assets and how they are being utilized—and to graph utilization across processes and machines for intelligent analysis. In the December issue of Novell Connection, we’ll take a closer look at the discovery process and how PowerRecon can help you plan an optimum virtualization strategy.

Create

Once you have a virtualization strategy in place, the next step is to create virtual machines. This involves several decisions: whether a given service should be fully virtualized or paravirtualized, which hypervisor to use, how to create the virtualized image, how large the image should be, how to connect to SAN storage, and how to move the service from its current home into the virtual world.

At Novell, we use the Xen hypervisor running on SUSE Linux Enterprise Server, providing a completely flexible, open-source platform that offers reliable performance no matter what OS, applications and drivers are included in the virtual machine. And we use PlateSpin PowerConvert to automatically create virtual machine images that will run on our choice of hardware, under our choice of OS.
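
As a small illustration of what creating a guest on this stack can look like, the sketch below defines and starts a paravirtualized Xen guest through the libvirt Python bindings. The guest name, disk image path and sizing are hypothetical, and PowerConvert automates the image conversion work this sketch glosses over.

    # Minimal sketch: define and start a paravirtualized guest on a Xen host
    # via libvirt. Guest name, disk path and sizing are hypothetical examples.
    import libvirt

    domain_xml = """
    <domain type='xen'>
      <name>web-frontend</name>
      <memory unit='MiB'>1024</memory>
      <vcpu>2</vcpu>
      <bootloader>/usr/bin/pygrub</bootloader>  <!-- paravirtualized boot -->
      <os><type>linux</type></os>
      <devices>
        <disk type='file' device='disk'>
          <source file='/var/lib/xen/images/web-frontend.img'/>
          <target dev='xvda' bus='xen'/>
        </disk>
        <interface type='bridge'><source bridge='br0'/></interface>
      </devices>
    </domain>
    """

    conn = libvirt.open("xen:///")
    dom = conn.defineXML(domain_xml)   # register the guest persistently
    dom.create()                       # and start it
    conn.close()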

We’ll discuss PowerConvert and Xen in more detail in the December issue of Novell Connection, giving you insights into the virtualization decisions and creation process we use here at Novell.

Manage/Orchestrate

With virtual machines in your environment, you need an effective way to orchestrate processes and manage resources. For example, you need solutions for tasks such as moving an OS and applications from an old server to a new server, moving a workload from an over- or under-utilized server to a right-sized server, adding a server to a computing cluster to meet increasing demand, and so on—all without losing information or interrupting users.
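
To show the general shape of this kind of policy-driven management, the sketch below watches a crude proxy for host load (allocated virtual CPUs versus physical CPUs) and live-migrates a guest off an overloaded host. It is a generic illustration built on the libvirt Python bindings, not the ZENworks Orchestrator API, and the host URIs, the threshold and the naive "move the first active guest" policy are all assumptions.

    # Generic policy sketch (not the ZENworks Orchestrator API): when a host
    # looks overloaded, live-migrate one guest to a spare host.
    # URIs, the threshold and the guest-selection policy are assumptions.
    import libvirt

    BUSY_HOST = "xen+ssh://host-a/"
    SPARE_HOST = "xen+ssh://host-b/"
    OVERCOMMIT_THRESHOLD = 0.85   # rebalance when vCPUs exceed 85% of physical CPUs

    def overcommit_ratio(conn):
        """Rough load proxy: vCPUs of running guests versus physical CPUs."""
        physical_cpus = conn.getInfo()[2]   # getInfo() -> [model, memory, cpus, ...]
        vcpus = sum(d.maxVcpus() for d in conn.listAllDomains() if d.isActive())
        return vcpus / physical_cpus

    src = libvirt.open(BUSY_HOST)
    if overcommit_ratio(src) > OVERCOMMIT_THRESHOLD:
        dst = libvirt.open(SPARE_HOST)
        guest = next(d for d in src.listAllDomains() if d.isActive())
        guest.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
        dst.close()
    src.close()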

At Novell, PlateSpin PowerConvert continues to play an important role in this phase, allowing us to create and move virtual machines as needed without affecting user productivity. In addition, ZENworks Orchestrator acts as the “brains” of our data center automation system. We use it to manage virtual machines, identities, physical servers and storage in a coordinated and intelligent way according to workload requirements, hardware health and business policies.

In the January issue of Novell Connection, we’ll give you a more detailed look at the roles of Orchestrator and PowerConvert in automating day-to-day operations in the data center.

Data Center Automation

Data center automation aims at an environment in which physical boundaries no longer apply: resources are automatically assigned to workloads according to dynamically changing needs; physical failures go unnoticed by end users; and identity management, storage management, system management and virtual machine management are all tied together and automated across the IT environment.

We’re well on the way toward achieving these goals in our own IT environment, and we hope you’ll join us in the months ahead on this innovative and promising journey.


