
Data centers are being squeezed by a variety of internal and external pressures such as power, HVAC, new servers, human errors, patching, asset tracking and more. In fact, the average data center consumes enough power in a month to supply 1,000 homes! On top of all this, you have to keep up with dynamically changing business requirements. You need a solution that will allow you to align IT to your business, control costs and minimize risks. Data center managers are looking for a variety of ways to address these dilemmas. One of the key ways is server consolidation using virtualization.

"Data Center Managers are on the hot seat lately. They not only have to cram more servers per square inch than they ever thought they'd need, they also have to figure out how to do it without sending the electricity bill through the roof."

eWeek–
The Greening of the Data Center
Kevin Fogarty
August 2006

For a quick history of virtualization, see the section named "An Old Idea Made Better–A Lot Better" in the article "Virtualization: It's Real. It's Here. It's Now. It's Xen."

What is Grid?

Just as electricity supply utilities depend on a high-voltage grid, Grid software is nominally considered the foundation for Utility computing. A Grid runs distributed resource management software capable of allocating capacity from virtualized computers and storage devices. Instead of statically installing application software onto a computer, grid software dynamically binds services and data to computers at execution time, which makes individual computers anonymous relative to processing. Grid "jobs," each comprising program logic and its dependent data, are scheduled onto available capacity for processing.
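To make the idea concrete, here is a minimal Python sketch of that dynamic binding; it is an illustration only (the class and node names are invented, not any particular grid product): a job bundles logic with its data, and a scheduler binds it to whichever anonymous node has free capacity at execution time.

    # Minimal sketch of grid-style dynamic binding: a "job" bundles program
    # logic with its dependent data, and the scheduler picks any available
    # node at execution time. Names here are illustrative.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class GridJob:
        name: str
        logic: Callable[[dict], object]   # program logic
        data: dict                        # dependent data shipped with the job

    @dataclass
    class ComputeNode:
        node_id: str                      # nodes are anonymous to the job
        free_slots: int = 2

    class GridScheduler:
        def __init__(self, nodes):
            self.nodes = list(nodes)

        def submit(self, job: GridJob):
            # Bind the job to the first node with free capacity at execution time.
            for node in self.nodes:
                if node.free_slots > 0:
                    node.free_slots -= 1
                    try:
                        return job.logic(job.data)   # "execute" on the chosen node
                    finally:
                        node.free_slots += 1
            raise RuntimeError("no capacity available")

    # Usage: the job never names a specific computer.
    scheduler = GridScheduler([ComputeNode("blade-01"), ComputeNode("blade-02")])
    result = scheduler.submit(GridJob("sum", lambda d: sum(d["values"]), {"values": [1, 2, 3]}))
    print(result)  # 6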

A good way to think about Grid is as the next generation of resource management software. A better way might be to consider it the next distributed operating system: one that manages applications composed of Web services, consumes virtualized storage and computational resources, and ensures optimal use of physical resources on behalf of consumers. The classic definition of an OS hasn't changed; the OS itself has simply been virtualized across multiple computers.

"Virtualization in and of itself is interesting, and it gives you server efficiency, but without some of the automated tools, it may actually increase your management burden."

John Enck–
Gartner

> Data Center Automation from Novell
Novell has launched a new strategy to build a mixed-source platform that offers value through sophisticated integration of otherwise isolated components. This solution addresses the workloads shown in Figure 1. Consider the evolution of computing from mainframe to mini to client/server. Now modularize, standardize, commoditize and virtualize. Next, add integrated intelligence and you have a modern "computer" comprising virtualized computing and storage, controlled by a distributed operating system realized by grid-inspired resource management software. This new solution brings mainframe-class capabilities to commodity scale-out data center architectures. All workloads run on a common modular Linux foundation, SUSE Linux Enterprise, although all major virtualization platforms will be supported.

Commercial high-performance cluster computing, data center and enterprise workgroup workloads will run inside "virtualized" data centers. (See Figure 2.) Users connect to the network from workstations, whether fixed-location desktops or mobile devices. Eventually, parts of the desktop software experience will also be hosted and managed by data center servers through virtualization-enabled provisioning of user machines onto dynamically repurposed servers, connected to next-generation thin-client terminals.

"For virtualization to truly work in real-world applications, users must also focus strongly on automation, the policy-based administrative tools used to deploy virtualized instances and manage them."

John Enck–
Gartner

> Components
Novell's first data center automation solution manages compute and storage servers on behalf of applications or services hosted in virtual machines. Figure 3 illustrates three primary types of servers running in the new data center.

I. Compute servers
II. Storage servers
III. Management servers:
  A. Orchestrator
  B. Storage Resource Manager
  C. Universal Model Facility
  D. Image Creation
  E. Image Repository


"What grids offer is an ease of letting compute power flow to wherever it's needed instead of being statically allocated by the capital spending of particular business units. The enterprise data center is well on its way to becoming a supplier of service rather than a custodian of hardware.

Today's confluence of commodity components, burgeoning bandwidth and open source systems software fills in the rest of the picture. Taken together, they make the enterprise case for grid computing, which is the connection of heterogeneous computing nodes using self-administering software that makes the nodes function as a single virtual system."

Peter Coffee–
Grid Computing in the Enterprise
February 2004

There are five main management server functions; all of them can be installed on a single physical server, in separate virtual machines, or on separate physical servers. Management servers will be clustered for high availability. The resulting management cluster is responsible for orchestrating compute and storage servers with respect to the allocatable units of application-specific memory, compute and storage capacity declared by each virtual machine's instantiation and deployment constraints.


I. Compute Servers
Compute servers are industry-standard (rack-mount and blade) servers with multi-core 64-bit CPUs, multi-GB memory, serial-attached RAID, Ethernet and SAN ports, plus embedded hardware that supports the out-of-band intelligent platform management interface (IPMI). Next-generation CPUs will provide hardware support to improve upon today's software-based server virtualization. Compute servers run an appropriate OS for the physical hardware architecture, comprising a virtual machine monitor (such as the Xen hypervisor), device drivers, a management kernel and agents. Management agents support remote deployment of virtual machines, which are executed by the hypervisor present on every compute server. Compute servers may be grouped together and organized by type (for example, thin blades versus thick SMPs), intended purpose (for example, test or production), owner, physical location and other classifications. They are named with a globally unique identifier. Finally, compute servers can function in isolation, or they can cooperate with other compute servers to create high-availability clusters.
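For illustration only, the field names below are invented rather than Novell's schema; a compute server's inventory record might capture the grouping attributes just described along with its globally unique identifier and IPMI address.

    # Illustrative inventory record for a compute server, grouped by type,
    # purpose, owner and location, and keyed by a globally unique identifier.
    import uuid
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ComputeServerRecord:
        guid: str = field(default_factory=lambda: str(uuid.uuid4()))
        server_type: str = "thin-blade"        # e.g. "thin-blade" vs "thick-smp"
        purpose: str = "production"            # e.g. "test" or "production"
        owner: str = "finance"
        location: str = "rack-7/slot-3"
        cluster: Optional[str] = None          # set when cooperating in an HA cluster
        ipmi_address: str = "10.0.0.17"        # out-of-band management interface

    blade = ComputeServerRecord(server_type="thin-blade", purpose="test",
                                owner="engineering", location="rack-2/slot-9")
    print(blade.guid, blade.purpose)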


II. Storage Servers
Storage servers are industry-standard SAN disk-block storage arrays or file servers. Storage is pooled and protected. Storage is accessed by compute servers on behalf of virtual machines. This is a dynamic relationship; storage is managed with respect to the life cycle of individual virtual machines. Just like compute servers, storage is organized by type (for example, available RAID 5 disks), purpose (for example, temporary, protected or remotely replicated), and owner.



Many customers have already made a storage infrastructure commitment and want Data Center Automation tools to support that investment. The industry-standard SMI-S* enables third-party management of heterogeneous storage. The system will manage whatever enterprise storage has been assigned to it, for example, a portion of an existing SAN, or the entire SAN if it is dedicated to compute and storage orchestration.

(*Note: According to Wikipedia, SMI-S, the Storage Management Initiative Specification, is a storage standard developed and maintained by the Storage Networking Industry Association (SNIA). It is a model, or guide, for building systems from modules that plug together. SMI-S-compliant storage modules interoperate in a system and function in consistent, predictable ways, regardless of which vendor built them, provided that the modules use the Common Information Model (CIM) language and adhere to sets of specifications called CIM schemas. The main objective of SMI-S is to enable broad interoperability among heterogeneous storage vendor systems.)
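To make the CIM/SMI-S relationship concrete, here is a minimal sketch of how a management tool might enumerate storage volumes from an SMI-S provider using the open source pywbem library. The host name, credentials and namespace are placeholder assumptions, not real endpoints, and real arrays layer vendor-specific profiles on top of the standard classes shown here.

    # Hedged sketch: querying an SMI-S (CIM/WBEM) storage provider with pywbem.
    # The URL, credentials and namespace below are placeholders, not real endpoints.
    import pywbem

    conn = pywbem.WBEMConnection(
        "https://smis-provider.example.com:5989",   # hypothetical SMI-S provider
        ("cimuser", "password"),                    # placeholder credentials
        default_namespace="root/cimv2",             # namespace varies by vendor
        no_verification=True)                       # lab use only; verify certificates in production

    # CIM_StorageVolume is the standard CIM class SMI-S block providers expose
    # for logical volumes (LUNs); vendors subclass it, but the base class suffices here.
    for volume in conn.EnumerateInstances("CIM_StorageVolume"):
        name = volume.get("ElementName") or volume.get("DeviceID")
        size = (volume.get("BlockSize") or 0) * (volume.get("NumberOfBlocks") or 0)
        print(f"{name}: {size / 2**30:.1f} GiB")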

III. Management Servers
A configuration management server extends an existing Novell product that uses policy-driven automation to deploy, manage and maintain data center servers. The management server provides centralized control of the life cycle of operating systems with imaging, remote control, inventory and software management. With respect to data center automation, it provides imaging of physical systems onto compute servers plus a global namespace (hardware asset inventory) of all managed compute servers.

This namespace, plus any hierarchical structure created by the data center administrator (for example, organizing servers into groups), will be federated with the Universal Model Facility (UMF; see Universal Model Facility below) to support CIM-based server health monitoring. The management server can track the creation of virtual machines, assuming installation of a virtual machine image that contains the management agents. The management server considers virtual machines to be managed assets in the same way as physical servers. Virtual machines, once created, will also appear in the managed server namespace. Once the systems are in a managed state, you need a way to orchestrate them to align to business needs.

What is Utility Computing?

The word Utility connotes an always-available resource much like that sold by water, gas or electricity supply companies. These utility companies charge consumers for what is used and when it is used. They also offer a guaranteed service level. Consumers have become critically dependent on utilities.

Consumers of information technology want a Utility model for computing. It's no longer possible for society to function without IT, and because demand for capacity is sporadic and unpredictable, consumers want to pay as they go and be guaranteed service when they ask. On-demand is therefore only one attribute of the broader Utility Computing concept.

Virtualized systems do nothing by themselves. They have a latent potential to compute and store data in a very dynamic way, but do nothing unless directed. Virtualized systems are the willing subordinates of demanding consumers. Utility computing is therefore realized through the combination of virtualized systems and sophisticated resource management software. Resource management, by executing policy, is the driving force directing virtualized systems in support of line-of-business applications and processes.

In response to variable workload demand, resource management automates tasks such as creating a virtual machine and assigning it to a physical machine or allocating more storage to an authorized service. And life cycle rules cause resources to be automatically retired when no longer needed. To offer a true Utility model for computing, resource management must also react to unexpected events. Response to server failure or spikes in demand for capacity should not require human intervention. Virtualized systems are therefore required to offer standard mechanisms for introspection, or the ability to monitor and report their own health. Autonomic computing is automated response to monitored health conditions and so therefore also realized by the combination of virtualized systems and (policy-based) resource management.
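As a concrete illustration, and only a minimal sketch rather than the policy engine of any product, the fragment below reacts to monitored events the way such resource management might: a demand spike creates a virtual machine and assigns it to a host, a failed server triggers redeployment without human intervention, and a life-cycle rule retires resources that are no longer needed. All names and the idle-time policy are invented.

    # Minimal sketch of policy-driven, autonomic resource management:
    # events from monitoring drive automated actions without human intervention.
    import time

    class ResourceManager:
        def __init__(self, hosts):
            self.healthy_hosts = set(hosts)
            self.vms = {}            # vm name -> {"host": ..., "last_used": ...}
            self.idle_limit = 3600   # example policy: retire VMs idle for an hour

        def handle_event(self, event):
            if event["type"] == "server_failed":
                self.healthy_hosts.discard(event["host"])
                for name, vm in self.vms.items():
                    if vm["host"] == event["host"]:
                        vm["host"] = self.pick_host()
                        print(f"redeployed {name} to {vm['host']}")
            elif event["type"] == "demand_spike":
                name = f"{event['service']}-{len(self.vms)}"
                self.vms[name] = {"host": self.pick_host(), "last_used": time.time()}
                print(f"created {name} on {self.vms[name]['host']}")

        def enforce_lifecycle(self):
            # Life-cycle rule: automatically retire resources no longer needed.
            now = time.time()
            for name in [n for n, vm in self.vms.items() if now - vm["last_used"] > self.idle_limit]:
                print(f"retired {name}")
                del self.vms[name]

        def pick_host(self):
            return sorted(self.healthy_hosts)[0]   # placeholder for real scheduling

    rm = ResourceManager(["compute-01", "compute-02"])
    rm.handle_event({"type": "demand_spike", "service": "web"})       # created web-0 on compute-01
    rm.handle_event({"type": "server_failed", "host": "compute-01"})  # redeployed web-0 to compute-02
    rm.enforce_lifecycle()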

A. Orchestrator
The Orchestrator is the brains behind the data center automation system; it interacts with the configuration and storage resource management servers to manage physical compute and storage resources and the relationships between them. The Orchestrator also manages virtual resources. It is responsible for the entire life cycle of individual virtual machines (comprising control information, OS image and storage resources), from initial creation, to deployment and monitored execution, to final destruction. Physical constraints, dependencies, live performance trends and other real-time execution states monitored by the UMF are considered by the Orchestrator when scheduling virtual machines to compute servers for execution.
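To illustrate the kind of decision the Orchestrator makes, the simplified sketch below, which is not the actual Novell scheduler, filters compute servers by a virtual machine's declared constraints and then ranks the survivors by the live load metrics a facility like the UMF would supply. The field names are assumptions for the example.

    # Simplified placement sketch: constraints filter candidate compute servers,
    # live metrics (as a UMF-like monitor would report) rank what remains.
    from dataclasses import dataclass

    @dataclass
    class ComputeServer:
        name: str
        arch: str                 # "x86_64" or "i586"
        has_san: bool
        cpu_load: float           # live metric, 0.0 - 1.0
        free_mem_mb: int

    @dataclass
    class VMRequest:
        name: str
        arch: str
        needs_san: bool
        mem_mb: int

    def place(vm: VMRequest, servers: list) -> ComputeServer:
        # 1. Hard constraints declared by the virtual machine.
        candidates = [s for s in servers
                      if s.arch == vm.arch
                      and s.free_mem_mb >= vm.mem_mb
                      and (s.has_san or not vm.needs_san)]
        if not candidates:
            raise RuntimeError(f"no compute server satisfies {vm.name}")
        # 2. Soft ranking by real-time execution state (least loaded wins).
        return min(candidates, key=lambda s: s.cpu_load)

    servers = [
        ComputeServer("blade-01", "x86_64", True,  0.72, 8192),
        ComputeServer("blade-02", "x86_64", True,  0.15, 4096),
        ComputeServer("blade-03", "i586",   False, 0.05, 2048),
    ]
    print(place(VMRequest("erp-app", "x86_64", True, 2048), servers).name)  # blade-02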

B. Storage Resource Manager
The storage resource manager component is responsible for managing SMI-S-enabled storage arrays. The manager is an automounter for SAN LUNs. Compute servers will dynamically access SAN storage with respect to the virtual machines that are scheduled to run on them. The manager also supports provisioning of SAN LUNs when creating a new virtual machine.
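The sketch below is illustrative only; real SMI-S provisioning goes through the array's CIM methods rather than an in-memory dictionary. It shows the automounter idea: a LUN can be carved out when a virtual machine is first created, attached to whichever compute server the machine is scheduled onto, and released when the machine moves or retires.

    # Illustrative automounter sketch: LUNs follow the virtual machines that use them.
    class StorageResourceManager:
        def __init__(self):
            self.luns = {}          # lun_id -> {"size_gb": ..., "attached_to": ...}
            self.next_id = 0

        def provision_lun(self, size_gb):
            # In a real system this would call the array's SMI-S provisioning service.
            lun_id = f"lun-{self.next_id}"
            self.next_id += 1
            self.luns[lun_id] = {"size_gb": size_gb, "attached_to": None}
            return lun_id

        def attach(self, lun_id, compute_server):
            # Mask/map the LUN to the server that is about to run the VM.
            self.luns[lun_id]["attached_to"] = compute_server

        def detach(self, lun_id):
            # Called when the VM migrates away or is retired.
            self.luns[lun_id]["attached_to"] = None

    srm = StorageResourceManager()
    lun = srm.provision_lun(20)       # created along with a new virtual machine
    srm.attach(lun, "blade-02")       # VM scheduled to blade-02, storage follows
    srm.detach(lun)                   # VM retired; storage released for reuse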


C. Universal Model Facility
The UMF is another new component responsible for aggregating and associating management models and monitoring data from managed devices. Managed devices are compute servers, virtual machines or SMI-S-enabled storage servers. The UMF collects and records health information in the context of the relationships that exist between managed devices. By consuming status events, applying hysteresis thresholds to monitored devices and exporting a summary view of vital-signs metrics to the Orchestrator, the UMF could be considered the nervous system wired to the Orchestrator's brain. With hysteresis, a monitored variable may rise above and dip below a threshold, but isn't considered noteworthy until it has stayed above the threshold for a certain period of time.
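A minimal sketch of that hysteresis idea follows; the threshold, dwell time and sample values are invented for illustration. A reading above the limit only becomes an event once it has stayed there long enough.

    # Sketch of a dwell-time (hysteresis) threshold: a metric must stay above the
    # limit for a full dwell period before it is reported as noteworthy.
    class HysteresisMonitor:
        def __init__(self, threshold, dwell_seconds):
            self.threshold = threshold
            self.dwell = dwell_seconds
            self.breach_started = None   # timestamp when the metric first exceeded the limit

        def observe(self, value, timestamp):
            if value <= self.threshold:
                self.breach_started = None            # dipped back below: reset
                return False
            if self.breach_started is None:
                self.breach_started = timestamp       # first sample above the limit
            return timestamp - self.breach_started >= self.dwell

    # Example: a temperature sampled every 10 seconds, 60-second dwell time.
    monitor = HysteresisMonitor(threshold=80.0, dwell_seconds=60)
    samples = [(0, 85.0), (10, 83.0), (20, 79.0), (30, 86.0), (40, 88.0),
               (50, 87.0), (60, 90.0), (70, 91.0), (80, 92.0), (90, 95.0)]
    for t, temp in samples:
        if monitor.observe(temp, t):
            print(f"alert at t={t}s: sustained high temperature")   # fires at t=90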

SUSE Linux Enterprise Server 10 – virtuously virtual

SUSE Linux Enterprise 10 offers virtualization capabilities like no other OS. It can provision, deprovision, install, monitor and manage multiple guest operating systems. It provides the out-of-the-box ability to create Xen virtual machines running modified, highly tuned, paravirtualized guest operating systems for optimal performance. What's more, with the CPU hardware assist plus Xen functionality, SUSE Linux Enterprise Server can play host to several guest OSs operating on a single server at speeds that are generally faster than those obtained when the OSs were operating solo in a 1:1 configuration.

Data center managers can maintain a centralized store of virtual machines (VMs) and deploy them over the network by identifying a physical computer at deployment time, copying the VM image, and making it available to run on that particular physical server. The VM can specify a set of constraints such as 32-bit or 64-bit server or SAN connectivity. The VM might contain a Windows OS version or a legacy OS and can even specify that the hardware must support virtualization technology. The data center can maintain a veritable catalog of available VMs in an offline repository and send images upon the request of an individual, a workgroup or–soon–autonomically, when a business policy, a service level agreement or a server failure necessitates dispensing a new image.

In addition to virtualization capabilities, SUSE Linux Enterprise Server 10 supports the Oracle Cluster File System (OCFS2), and therefore provides outstanding support for clustering. What's more, in a clustered environment, SUSE Linux Enterprise Server 10 (plus the Xen hypervisor, YaST2, CIM-based monitoring tools and other built-in, standards-based management solutions) is the foundation for allowing resources to be pooled, allocated and utilized like never before. In effect, VM management becomes synonymous with workload management. The data center becomes an asset manager that is aware of all physical and virtual servers in the environment and their characteristics. This information is acted upon in real time to allocate resources as appropriately and efficiently as possible.

Data center managers can configure a clustered environment that is based on a standardized platform, runs SUSE Linux Enterprise 10, features centralized, shared storage and is free of single points of failure. (See Figure 4.) This design enables high availability for VM hosting, as all VM OS image files reside in a central location that every server can access. VMs can be failed over if the physical server on which they're running fails. With future support for live VM state migration, or a real-time transfer of a live OS state from one physical server to another, there is virtually no server downtime; applications continue to operate uninterrupted, and end users are unaware that a migration even took place.

D. Image Creation
An image-creation server is a special kind of compute server dedicated to the creation and installation of virtual machines. In large environments that depend on frequent virtual machine creation, you might have multiple image-creation servers. In other scenarios, the Orchestrator may decide to define and install a virtual machine "in place," effectively incubating the virtual machine on the compute server that will eventually also execute it. The result of providing image-creation services is the automated, controlled creation of a new virtual machine comprising control information, OS image and optional external storage references. These infant virtual machines are ready to execute; they actually run only as a result of Orchestrator-driven deployment to an assigned compute server.
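As an illustration, with names and fields invented for the sketch, image creation boils down to assembling the three pieces listed above: control information, an OS image, and optional external storage references. The result is a virtual machine that is ready to run but not yet deployed.

    # Sketch: the product of image creation is a deployable bundle of
    # control information + OS image + optional external storage references.
    from dataclasses import dataclass, field

    @dataclass
    class VirtualMachine:
        name: str
        control: dict                      # e.g. vCPUs, memory, incubation host
        os_image: str                      # repository reference to the image
        storage_refs: list = field(default_factory=list)   # external SAN LUNs, if any
        state: str = "ready"               # "ready" until the Orchestrator deploys it

    def create_vm(name, vcpus, mem_mb, template, luns=None, in_place_host=None):
        """Build a ready-to-run VM, either on a dedicated image-creation server
        or "in place" on the compute server that will eventually execute it."""
        return VirtualMachine(
            name=name,
            control={"vcpus": vcpus, "memory_mb": mem_mb, "incubated_on": in_place_host},
            os_image=f"repo://images/{template}",
            storage_refs=list(luns or []))

    vm = create_vm("crm-test-01", vcpus=2, mem_mb=2048,
                   template="sles10-base", luns=["lun-7"], in_place_host="blade-05")
    print(vm.state)   # "ready": it runs only after Orchestrator-driven deployment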


E. Image Repository
An image-repository server is another special kind of compute server that stores ready-to-run virtual machines. When the Orchestrator instructs a compute server to run a particular virtual machine, the compute server contacts the image repository and downloads the corresponding image. Pushing is an alternative to this pull style of image deployment. For some workloads, it may be optimal for the Orchestrator to instruct the image repository to multicast an image to multiple compute servers to prestage the VM on potential deployment targets. The image repository also provides version control for virtual machines under management to support, for example, offline patching and preproduction testing prior to production staging and rollout, with assured rollback to version-tagged golden images.
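The toy model below, which is not the product's repository, captures the two deployment styles just described: compute servers pull the image they are told to run, or the repository pushes (prestages) an image to several likely targets, and every image is tracked by version so a golden image can be rolled back to. Image names and payloads are placeholders.

    # Toy model of a versioned image repository supporting pull and push (prestage).
    class ImageRepository:
        def __init__(self):
            self.images = {}       # name -> {version: payload}
            self.golden = {}       # name -> version tagged as known-good

        def store(self, name, version, payload, golden=False):
            self.images.setdefault(name, {})[version] = payload
            if golden:
                self.golden[name] = version

        def pull(self, name, version=None):
            # Pull style: a compute server downloads the image it was told to run.
            versions = self.images[name]
            return versions[version or max(versions)]

        def prestage(self, name, targets, version=None):
            # Push style: copy an image to several likely deployment targets ahead
            # of time (a real system might multicast this over the network).
            payload = self.pull(name, version)
            return {t: payload for t in targets}

        def rollback(self, name):
            # Assured rollback to the version-tagged golden image.
            return self.pull(name, self.golden[name])

    repo = ImageRepository()
    repo.store("sles10-web", 1, "imgdata-v1", golden=True)
    repo.store("sles10-web", 2, "imgdata-v2-patched")        # offline-patched, in test
    print(repo.pull("sles10-web"))                           # latest: imgdata-v2-patched
    print(repo.prestage("sles10-web", ["blade-01", "blade-02"]))
    print(repo.rollback("sles10-web"))                       # imgdata-v1 (golden)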

> Summary
Novell, having recognized the shift toward commodity data center architectures based on Intel architecture servers, storage networking, virtualization, automation for resource management and an underlying context of identity-based orchestration, is making investments for customers that are consolidating their data centers. The unique Novell approach, linking virtualized storage, virtual machines, resource management, identity management and Service-Oriented Architecture (SOA) applications, puts Novell in a leading position in data center automation. Watch for more developments from Novell in the future, capitalizing on the virtue of the virtual approach.

What is Virtualization?

The classic computer has CPUs, memory and disk(s) to hold data when the power is turned off. Virtual memory gave computers the ability to present to applications the illusion of more main memory than was physically available. Virtual disks create the illusion of a single disk that is larger or more fault tolerant than the many physical disks it comprises. Virtual machines present the illusion of a whole computer, actually contained by a real computer that shares its physical resources among competing virtual machines. Clusters present the illusion of a single reliable computer by coupling together physical computers and masking their failures.

Today, data center computers (servers) are connected to disks over a storage area network (SAN). By removing and relocating storage from individual servers to a central network location, server form factors have shrunk. Blade servers are now popular. Blades are granted access to virtual disks (named storage containers) located inside SAN disk arrays. When a server fails, processing fails over to another server with access to the same SAN virtual disks. When a service (running on a server) runs out of storage, more space can be allocated from the SAN using standard management APIs. When services themselves are virtualized, by hosting inside a virtual machine, they gain the flexibility to migrate from one physical server to another.

Virtualization eliminates physically imposed static boundaries: CPU, memory and disk are allocated dynamically. Services and data gain mobility: the freedom to optimally consume physical resources and the ability to rapidly switch to alternate physical resources while adapting to workload demands. High availability is a natural consequence of virtualized systems.

Legacy line-of-business applications are also being virtualized. Static, monolithic client/server software is being augmented or replaced with Web services. Web-based Service-Oriented Architecture (SOA) replaces earlier distributed object systems. There are new WS-* protocols for anything that wasn't XML-based before. And line-of-business (LOB) applications now comprise a number of cooperating services. Infrastructure services provide naming, discovery and, via XML, a data integration and exchange format. LOB components execute in virtual machines and communicate using Web services protocols. SOA and WS-* protocols are creating a new platform for distributed computing.

Finally, with so many distributed moving parts, identity management creates the infrastructure necessary to securely name and associate, authenticate and authorize service consumers with producers regardless of service type. Identity is the context that binds a flow of service requests all the way from the end user through multiple processing tiers, to data on disks. Users are granted rights to services and services are granted rights to other services. And if we haven't experienced enough virtualization yet, identity itself has been virtualized by the notion of "role."
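As a small illustration of the role idea, and not the API of any Novell identity product (all names are invented), the sketch below grants rights to roles rather than to individual users or services, and checks a request by resolving the caller's identity to its roles. It shows both a user granted rights to a service and a service granted rights to another service.

    # Sketch of identity virtualized as "role": rights are granted to roles,
    # and both end users and services resolve to roles when a request is checked.
    class IdentityService:
        def __init__(self):
            self.roles = {}          # identity (user or service) -> set of roles
            self.grants = {}         # (role, resource) -> set of allowed operations

        def assign_role(self, identity, role):
            self.roles.setdefault(identity, set()).add(role)

        def grant(self, role, resource, operation):
            self.grants.setdefault((role, resource), set()).add(operation)

        def authorize(self, identity, resource, operation):
            # A request is allowed if any of the caller's roles carries the grant.
            return any(operation in self.grants.get((role, resource), set())
                       for role in self.roles.get(identity, set()))

    ids = IdentityService()
    ids.assign_role("alice", "order-entry-clerk")       # end user
    ids.assign_role("order-service", "order-backend")   # service acting on her behalf
    ids.grant("order-entry-clerk", "order-service", "submit")
    ids.grant("order-backend", "order-database", "write")

    print(ids.authorize("alice", "order-service", "submit"))          # True
    print(ids.authorize("order-service", "order-database", "write"))  # True
    print(ids.authorize("alice", "order-database", "write"))          # False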


