Data centers are being squeezed by a variety of internal and external pressures: power, HVAC, new servers, human error, patching, asset tracking and more. In fact, the average data center consumes enough power in a month to supply 1,000 homes! On top of all this, you have to keep up with dynamically changing business requirements. You need a solution that lets you align IT with your business, control costs and minimize risks. Data center managers are addressing these dilemmas in a variety of ways; one of the key approaches is server consolidation using virtualization.
"Data Center Managers are on the hot seat lately. They not only have to cram more servers per square inch than they ever thought they'd need, they also have to figure out how to do it without sending the electricity bill through the roof."
For a quick history of virtualization, see the section named "An Old Idea Made Better–A Lot Better" in the article "Virtualization: It's Real. It's Here. It's Now. It's Xen."
"Virtualization in and of itself is interesting, and it gives you server efficiency, but without some of the automated tools, it may actually increase your management burden."
Data Center Automation from Novell
Novell has launched a new strategy to build a mixed-source platform that offers value through sophisticated integration of otherwise isolated components. This solution identifies the workloads shown in Figure 1. Consider the evolution of computing from mainframe to mini to client/server. Now modularize, standardize, commoditize and virtualize. Next, add integrated intelligence and you have a modern "computer" comprising virtualized computing and storage, controlled by a distributed operating system realized by grid-inspired resource management software. This new solution enables mainframe-class capabilities for commodity scale-out data center architectures. All workloads are supported by a common modular Linux foundation, SUSE Linux Enterprise, although all major virtualization platforms will be supported.
Commercial high-performance cluster computing, data center and enterprise workgroup workloads will run inside "virtualized" data centers. (See Figure 2.) Users connect to the network from workstations, whether fixed-location desktops or mobile devices. Eventually, parts of the desktop software experience will also be hosted and managed by data center servers through virtualization-enabled provisioning of user machines onto dynamically repurposed servers connected to next-generation thin-client terminals.
"For virtualization to truly work in real-world applications, users must also focus strongly on automation, the policy-based administrative tools used to deploy virtualized instances and manage them."
Novell's first data center automation solution manages compute and storage servers on behalf of applications or services hosted in virtual machines. Figure 3 illustrates the three primary types of servers running in the new data center.
I. Compute Servers
II. Storage Servers
III. Management Servers:
A. Orchestrator
B. Storage Resource Manager
C. Universal Model Facility
D. Image Creation
E. Image Repository
What grids offer is the ability to let compute power flow to wherever it's needed instead of being statically allocated by the capital spending of particular business units. The enterprise data center is well on its way to becoming a supplier of services rather than a custodian of hardware.
Today's confluence of commodity components, burgeoning bandwidth and open source systems software fills in the rest of the picture. Taken together, they make the enterprise case for grid computing, which is the connection of heterogeneous computing nodes using self-administering software that makes the nodes function as a single virtual system.
There are five main management server functions; all of them could be installed on a single physical server, in separate virtual machines or on separate servers. Management servers will be clustered for high availability. The resulting management cluster orchestrates compute and storage servers, allocating units of application-specific memory, compute and storage capacity according to the constraints declared when each virtual machine is instantiated and deployed.
I. Compute Servers
Compute servers are industry-standard (rack-mount and blade) servers with multi-core 64-bit CPUs, multi-GB memory, serial-attached RAID, Ethernet and SAN ports, plus embedded hardware that supports the out-of-band Intelligent Platform Management Interface (IPMI). Next-generation CPUs will provide hardware support to improve upon today's software-based server virtualization. Compute servers run an appropriate OS for the physical hardware architecture, comprising a virtual machine monitor (such as the Xen hypervisor), device drivers, a management kernel and agents. Management agents support remote deployment of virtual machines to be executed by the hypervisor present on every compute server. Compute servers may be grouped together and organized by type (for example, thin blades versus thick SMPs), intended purpose (for example, test or production), owner, physical location and other classifications. They are named with a globally unique identifier. Finally, compute servers can function in isolation, or they can cooperate with other compute servers to create high-availability clusters.
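As an illustration, this kind of inventory record and grouping might look like the following Python sketch (class and field names are hypothetical, not part of any Novell API):

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class ComputeServer:
    """One managed compute server in the data center inventory."""
    hostname: str
    server_type: str   # e.g., "thin-blade" vs. "thick-smp"
    purpose: str       # e.g., "test" or "production"
    owner: str
    location: str
    # Each server is named with a globally unique identifier.
    guid: str = field(default_factory=lambda: str(uuid.uuid4()))

def servers_matching(inventory, **criteria):
    """Return the servers whose attributes match every criterion."""
    return [s for s in inventory
            if all(getattr(s, key) == value for key, value in criteria.items())]

inventory = [
    ComputeServer("blade01", "thin-blade", "production", "finance", "rack-a"),
    ComputeServer("smp01", "thick-smp", "test", "finance", "rack-b"),
]
prod = servers_matching(inventory, purpose="production")   # selects blade01
```

The same selection function works for any of the classifications mentioned above, because grouping is just attribute matching over the inventory.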
II. Storage Servers
Storage servers are industry-standard SAN disk-block storage arrays or file servers. Storage is pooled and protected, and is accessed by compute servers on behalf of virtual machines. This is a dynamic relationship; storage is managed with respect to the life cycle of individual virtual machines. Just like compute servers, storage is organized by type (for example, available RAID5 disks), purpose (for example, temporary, protected or remotely replicated) and owner.
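The dynamic binding of pooled storage to a virtual machine's life cycle can be sketched as a toy model (all names here are illustrative assumptions, not a real interface):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StorageVolume:
    volume_id: str
    storage_type: str   # e.g., "raid5"
    purpose: str        # "temporary", "protected" or "replicated"
    owner: str
    attached_vm: Optional[str] = None   # VM currently holding the volume

class StoragePool:
    """Toy pool that ties volume allocation to a VM's life cycle."""
    def __init__(self, volumes):
        self.volumes = volumes

    def allocate(self, vm_name, storage_type, purpose):
        """Hand a matching free volume to a newly created VM."""
        for vol in self.volumes:
            if (vol.attached_vm is None
                    and vol.storage_type == storage_type
                    and vol.purpose == purpose):
                vol.attached_vm = vm_name
                return vol
        raise RuntimeError("no free volume matches the request")

    def release(self, vm_name):
        """When a VM is destroyed, its volumes return to the pool."""
        for vol in self.volumes:
            if vol.attached_vm == vm_name:
                vol.attached_vm = None

pool = StoragePool([
    StorageVolume("v1", "raid5", "protected", "finance"),
    StorageVolume("v2", "raid5", "temporary", "finance"),
])
vol = pool.allocate("vm-web01", "raid5", "protected")   # binds v1 to vm-web01
```

The point of the sketch is the dynamic relationship: volumes are bound when a VM is created and automatically returned when it is destroyed.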
Many customers have already made a storage infrastructure commitment and want Data Center Automation tools to support that investment. Industry standard SMI-S* enables third-party management of heterogeneous storage. The system will manage whatever enterprise storage has been assigned to it, for example, a portion of an existing SAN or the entire SAN if dedicated to compute and storage orchestration.
(*Note: According to Wikipedia, SMI-S, the Storage Management Initiative Specification, is a storage standard developed and maintained by the Storage Networking Industry Association (SNIA). It is a model, or guide, for building systems using modules that plug together. SMI-S-compliant storage modules interoperate in a system and function in consistent, predictable ways, regardless of which vendor built them, provided that the modules use Common Information Model (CIM) language and adhere to sets of specifications called CIM schema. The main objective of SMI-S is to enable broad interoperability among heterogeneous storage vendor systems.)
III. Management Servers
A configuration management server extends an existing Novell product that uses policy-driven automation to deploy, manage and maintain data center servers. The management server provides centralized control of the life cycle of operating systems with imaging, remote control, inventory and software management. With respect to data center automation, it provides imaging of physical systems onto compute servers plus a global namespace (hardware asset inventory) of all managed compute servers.
This namespace, plus any hierarchical structure created by the data center administrator (for example, organizing servers into groups), will be federated with the Universal Model Facility (UMF; see Universal Model Facility below) to support CIM-based server health monitoring. The management server can track the creation of virtual machines, assuming installation of a virtual machine image that contains the management agents. The management server considers virtual machines to be managed assets in the same way as physical servers. Virtual machines, once created, will also appear in the managed server namespace. Once the systems are in a managed state, you need a way to orchestrate them to align with business needs.
A. Orchestrator
The Orchestrator is the brains behind the data center automation system; it interacts with the configuration and storage resource management servers to manage physical compute and storage resources and the relationships between them. The Orchestrator also manages virtual resources. It is responsible for the entire life cycle of individual virtual machines (control information, OS image and storage resources), from initial creation, through deployment and monitored execution, to final destruction. Physical constraints, dependencies, live performance trends and other real-time execution states monitored by the UMF are considered by the Orchestrator when scheduling virtual machines to compute servers for execution.
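A minimal sketch of this kind of scheduling decision, filtering on hard constraints and then preferring the least-loaded host, can be written in a few lines of Python (the field names are hypothetical stand-ins for the real constraint and monitoring data):

```python
def schedule(vm, servers, load):
    """Choose a compute server for `vm`: filter on hard constraints
    (enough free memory, matching server group), then prefer the
    least-loaded candidate according to live monitoring metrics."""
    candidates = [s for s in servers
                  if s["free_mem_mb"] >= vm["mem_mb"]
                  and s["group"] == vm.get("group", s["group"])]
    if not candidates:
        return None   # no placement possible; the VM stays queued
    return min(candidates, key=lambda s: load[s["name"]])

servers = [
    {"name": "blade01", "group": "production", "free_mem_mb": 4096},
    {"name": "blade02", "group": "production", "free_mem_mb": 8192},
    {"name": "smp01",   "group": "test",       "free_mem_mb": 16384},
]
load = {"blade01": 0.8, "blade02": 0.2, "smp01": 0.1}   # live CPU utilization
target = schedule({"name": "vm-db", "mem_mb": 2048, "group": "production"},
                  servers, load)   # picks blade02: eligible and least loaded
```

A production orchestrator would weigh many more signals (dependencies, trends, affinity rules), but the filter-then-rank shape of the decision is the same.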
B. Storage Resource Manager
The storage resource manager component is responsible for managing SMI-S-enabled storage arrays. The manager acts as an automounter for SAN LUNs. Compute servers will dynamically access SAN storage with respect to the virtual machines that are scheduled to run on them. The manager also supports provisioning of SAN LUNs when creating a new virtual machine.
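The automounter-style behavior can be illustrated with a toy model; a real manager would issue SMI-S calls to the arrays, and every name below is invented for the sketch:

```python
class StorageResourceManager:
    """Toy LUN manager: provisions a LUN when a VM is created and
    maps/unmaps it to compute servers as the VM is (re)scheduled."""
    def __init__(self):
        self.next_lun = 0
        self.lun_for_vm = {}   # VM name -> LUN id
        self.mapped_to = {}    # LUN id -> compute server, or None

    def provision(self, vm_name, size_gb):
        # A real implementation would issue SMI-S calls to the array,
        # passing size_gb; here we just record the assignment.
        lun = "lun-%d" % self.next_lun
        self.next_lun += 1
        self.lun_for_vm[vm_name] = lun
        self.mapped_to[lun] = None
        return lun

    def mount_for(self, vm_name, server):
        """Called when the Orchestrator schedules the VM onto a server."""
        self.mapped_to[self.lun_for_vm[vm_name]] = server

    def unmount_for(self, vm_name):
        """Called when the VM stops or migrates away."""
        self.mapped_to[self.lun_for_vm[vm_name]] = None

srm = StorageResourceManager()
lun = srm.provision("vm-web01", size_gb=20)
srm.mount_for("vm-web01", "blade01")   # LUN follows the VM to its host
```

Because the mapping follows the VM rather than the physical host, storage access stays correct as VMs move between compute servers.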
SUSE Linux Enterprise 10 offers virtualization capabilities like no other OS. It can provision, deprovision, install, monitor and manage multiple guest operating systems. It provides the out-of-the-box ability to create Xen virtual machines running modified, highly tuned, paravirtualized guest operating systems for optimal performance. What's more, SUSE Linux Enterprise Server can play host to several guest OSs operating on a single server at speeds that are generally faster than those obtained when the OSs were operating solo in a 1:1 configuration.
C. Universal Model Facility
The UMF is another new component responsible for aggregating and associating management models and monitoring data from managed devices. Managed devices are either compute servers, virtual machines or SMI-S-enabled storage servers. The UMF collects and records health information in the context of the relationships that exist between managed devices. By consuming status events, applying hysteresis thresholds to monitored devices and exporting a summary view of vital-signs metrics to the Orchestrator, the UMF could be considered the nervous system wired to the Orchestrator's brain. A monitored variable may go above and dip below thresholds, but isn't considered noteworthy until it has stayed above a threshold for a certain period of time.
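The hysteresis-threshold rule described above is easy to sketch in Python (illustrative code, not the UMF's actual implementation):

```python
class HysteresisMonitor:
    """Flag a metric only after it has stayed above `threshold` for at
    least `hold_secs`, so brief spikes are not reported."""
    def __init__(self, threshold, hold_secs):
        self.threshold = threshold
        self.hold_secs = hold_secs
        self.above_since = None   # time the value first exceeded threshold

    def sample(self, t, value):
        """Feed one (timestamp, value) sample; return True once the
        value has been sustained above the threshold long enough."""
        if value > self.threshold:
            if self.above_since is None:
                self.above_since = t
            return t - self.above_since >= self.hold_secs
        self.above_since = None   # any dip resets the timer
        return False

# e.g., report CPU utilization only after 30 sustained seconds above 80%
cpu = HysteresisMonitor(threshold=80, hold_secs=30)
```

This filtering is what lets the UMF hand the Orchestrator a summary of genuinely noteworthy vital signs instead of a stream of transient spikes.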
D. Image Creation
An image-creation server is a special kind of compute server dedicated to the creation and installation of virtual machines. In large environments that depend on frequent virtual machine creation, you might have multiple image-creation servers. In other scenarios, the Orchestrator may decide to define and install a virtual machine "in place," effectively incubating the virtual machine on the compute server that will eventually execute it. The result of providing image-creation services is the automated creation of a new virtual machine comprising control information, an OS image and optional external storage references. Infant virtual machines are ready to execute; they actually run as a result of Orchestrator-driven deployment to an assigned compute server.
Virtualization eliminates physically imposed static boundaries: CPU, memory and disk are allocated dynamically. Services and data gain mobility: the freedom to optimally consume physical resources and the ability to rapidly switch to alternate physical resources while adapting to workload demands. High availability is a natural consequence of virtualized systems.
E. Image Repository
An image-repository server is another special kind of compute server that stores ready-to-run virtual machines. When the Orchestrator instructs a compute server to run a particular virtual machine, the compute server contacts the image repository and downloads the corresponding image. Pushing is an alternative to this pull style of image deployment. For some workloads, it may be optimal for the Orchestrator to instruct the image repository to multicast an image to multiple compute servers to prestage the VM on potential deployment targets. The image repository also provides version control for virtual machines under management to support, for example, offline patching and preproduction testing prior to production staging and rollout, with assured rollback to version-tagged golden images.
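A toy sketch of this pull-style deployment with version tags and golden-image rollback might look like the following (class and method names are illustrative assumptions):

```python
class ImageRepository:
    """Toy version-controlled store of ready-to-run VM images, with
    tagged golden versions for assured rollback."""
    def __init__(self):
        self.versions = {}   # VM name -> list of image payloads
        self.golden = {}     # VM name -> index of the golden version

    def publish(self, vm, image, golden=False):
        self.versions.setdefault(vm, []).append(image)
        if golden:
            self.golden[vm] = len(self.versions[vm]) - 1

    def latest(self, vm):
        """What a compute server downloads in pull-style deployment."""
        return self.versions[vm][-1]

    def rollback_to_golden(self, vm):
        """Discard every version newer than the tagged golden image."""
        self.versions[vm] = self.versions[vm][: self.golden[vm] + 1]
        return self.versions[vm][-1]

repo = ImageRepository()
repo.publish("vm-web", "web-image-v1", golden=True)
repo.publish("vm-web", "web-image-v2-patched")   # preproduction candidate
```

An offline-patched candidate can thus be staged and tested while the golden image remains available as the guaranteed fallback.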
Having recognized the shift toward commodity data center architectures based on Intel architecture servers, storage networking, virtualization and automated resource management, all in an underlying context of identity-based orchestration, Novell is making investments for customers that are consolidating their data centers. The unique Novell approach, linking virtualized storage, virtual machines, resource management, identity management and Service-Oriented Architecture (SOA) applications, puts Novell in a leading position in data center automation. Watch for more developments from Novell in the future, capitalizing on the virtue of the virtual approach.