Novell Cool Solutions

rawipfel

Author Archives

DMTF Creates Open Standard for System Virtualization Management

rawipfel

November 27, 2007 8:46 pm

Hi,

As a member of the Distributed Management Task Force (DMTF), Novell has been participating in a working group that has defined an open standard for virtualization management. Details were given today in a DMTF press release and a companion whitepaper.

“Novell strongly believes that open standards are essential for promoting, as well as easing the adoption of virtualization,” said Eric Anderson, vice president of engineering for Systems and Resource Management at Novell. “We are committed to building these open standards into our virtualization management products and our contribution to the SVPC working group has resulted in a model that supports a centralized definition of virtual machines and remote deployment for lifecycle management. In a virtualized environment, the DMTF system virtualization standard delivers a complete view of the resources that need to be managed and is a crucial foundation of the service-oriented, next generation data center.”

Novell and other industry partners helped create an open source implementation of the System Virtualization standard for Linux, and Novell ships this implementation with SUSE Linux Enterprise Server 10. Novell’s ZENworks VM Builder, VM Warehouse and VM Orchestrator products are designed to manage virtualization technologies that support the same standard. ZENworks VM Builder is a DMTF Common Information Model (CIM) based service that provides automation for building VMs to a specification derived from the System Virtualization standard, and ZENworks VM Warehouse is a CIM-based service that provides version control for both Virtual Machine configuration settings and operating system image files. ZENworks Orchestrator deploys VMs to production servers based on a declarative specification of VM requirements, also encoded in CIM format, which allows deterministic matching of service-level (VM) requirements with available infrastructure capabilities. Orchestrator deploys VMs to capable hardware based on a variety of extensible constraints such as CPU type, memory, network or storage accessibility, or even availability considerations such as clustered servers required for production versus single servers used for testing or maintenance.

The following screenshot shows the ZENworks Orchestrator console with details for a Virtual Machine named ApacheVM. The VM Files dialog is open and lists the collection of CIM-encoded Resource Allocation Setting Data (RASD) files for the VM, including, for example, rasdmof_proc1, the RASD that defines this Virtual Machine’s processor requirement.

[Screenshot: ZENworksAndSVPC-V]
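
For readers who haven’t seen RASD before, here is a minimal sketch, using the open source pywbem library, of how a processor RASD instance might be constructed and emitted in MOF form. The InstanceID and values are illustrative assumptions, not the actual contents of rasdmof_proc1, though the property names follow the DMTF CIM_ResourceAllocationSettingData schema.

    # Sketch: building a processor Resource Allocation Setting Data (RASD)
    # instance with pywbem.  Property names follow the DMTF
    # CIM_ResourceAllocationSettingData schema; the InstanceID and values
    # are illustrative, not the actual contents of rasdmof_proc1.
    import pywbem

    proc_rasd = pywbem.CIMInstance(
        'CIM_ResourceAllocationSettingData',
        properties={
            'InstanceID': 'ApacheVM:proc1',
            'ElementName': 'ApacheVM processor allocation',
            'ResourceType': pywbem.Uint16(3),     # 3 = Processor in the RASD value map
            'VirtualQuantity': pywbem.Uint64(1),  # one virtual CPU
            'AllocationUnits': 'count',
        })

    # Emit the instance in MOF form, similar in spirit to a rasdmof_* file.
    print(proc_rasd.tomof())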

We will be sharing more details of Novell’s open source, standards-based management approaches for Virtual Machine, Server, Cluster and Storage infrastructure at the upcoming Management Developers Conference. Please let us know by replying to this blog if this is a topic you would like to read more about as these standards evolve.

Cheers,
Robert

Standards-based open source Storage Resource Management for Fibre Channel SANs

rawipfel

October 29, 2007 9:51 am

Novell, as a member of both the Storage Networking Industry Association (SNIA) and the Eclipse Foundation, has been participating in an open source Storage Resource Management (SRM) project called Aperi.

The goals of Aperi are stated on the project page: “Aperi is a vendor-neutral, open, storage management framework designed to cultivate both an open-source community and an ecosystem for complementary products, capabilities, and services around the framework to promote greater consumer choice and foster competition.”

Aperi discovers and manages storage network devices (arrays, switches and HBAs) and their relationships with host servers, using a variety of protocols and standards, including SNMP and the Storage Management Initiative Specification (SMI-S). SMI-S is based on the Distributed Management Task Force (DMTF) Common Information Model (CIM) and defines a number of now-standard profiles for storage management.
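
As a concrete illustration of the SMI-S/CIM side of this, the sketch below uses the open source pywbem library to list the storage volumes exposed by an SMI-S provider. The provider URL, credentials and namespace are placeholders, since these vary by array vendor.

    # Sketch: querying an SMI-S (CIM-XML) provider for storage volumes with pywbem.
    # The URL, credentials and namespace are placeholders; real values depend on
    # the array vendor's SMI-S provider.
    import pywbem

    conn = pywbem.WBEMConnection(
        'https://smis-provider.example.com:5989',
        ('admin', 'password'),
        default_namespace='root/cimv2')

    for vol in conn.EnumerateInstances('CIM_StorageVolume'):
        # DeviceID, ElementName, BlockSize and NumberOfBlocks are standard
        # CIM_StorageVolume properties used by SMI-S.
        size_bytes = vol['BlockSize'] * vol['NumberOfBlocks']
        print(vol['DeviceID'], vol['ElementName'], size_bytes)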

SNIA has published a series of excellent tutorials on Storage Resource Management and the SMI Specification.

As an Eclipse project committer, I’ve been spending a bit of my spare time contributing to the Aperi project – first to get the Eclipse-based Linux development environment and build working for SUSE Linux Enterprise 10, and most recently, adding code to support Xen-based Virtual Machines:

[Screenshot: AperiXenDisk]

The upcoming 0.4 release of Aperi will support SUSE Linux Enterprise 10 SP1, and therefore layered Novell products such as Open Enterprise Server 2 and the ZENworks Orchestrator-based Virtual Machine Manager, which provide Xen as the platform hypervisor for running guest Virtual Machines.

The Aperi project is a great open source Storage Resource Management tool for many Fibre Channel SAN-based storage solutions, including High Availability Clusters and Virtual Machine Grids.

You can download source code and binaries here. The Aperi project welcomes additional developers; if there is a feature you would like to see added, please let us know or join the project via the aperi-dev mailing list.

Regards,
Robert

Virtually there, at LinuxWorld

rawipfel

August 8, 2007 2:31 am

It’s been a year since our last SUSE Linux Enterprise 10 (SLE10) virtualization demo at LinuxWorld – a high availability cluster of four SLES10 servers, hosting Xen virtual machines as cluster-managed failover resources, with virtual machine OS images stored in Oracle’s cluster file system on shared iSCSI storage, integrated with Heartbeat2 for cluster management. One year later, I’d like to take the occasion of Novell’s SLE10 launch anniversary to blog about some of the recent advances in Linux virtualization and management automation – today’s announcement of ZENworks Orchestrator 1.1.

SLES10 SP1

Service Pack 1 for SLE10 was recently released, with many improvements driven by customer feedback since last year, plus a number of new features including, for example, support for iSNS, the Internet Storage Name Service. iSNS simplifies network storage assignment, especially in a virtual machine environment where each VM is an iSCSI initiator (with an identity and credentials) in its own right. SLES10 includes updates for the iSCSI initiator and target, and the integrated iSCSI target provides a great SAN storage server for VMs and high availability clusters, especially when running the target on modern multicore CPUs with multiple bonded 1/10 Gigabit Ethernet network interfaces. SLES10’s integrated High Availability Storage Infrastructure also improves support for virtual machine availability, with live migration enabled for VMs managed as cluster resources on shared storage.

SLES10 SP1 incorporates many core virtualization advances, including an updated hypervisor, para-virtualized NetWare and fully-virtualized Windows guest support, para-virtualized drivers for improved disk and LAN I/O performance, and new YaST tools for creating and managing the lifecycle of virtual machines. New open source Common Information Model (CIM) providers implement the DMTF System Virtualization, Partitioning and Clustering (SVPC) working group’s profiles for (Xen-based) Virtual System Management. This standards-based API, plus new command-line tools, was designed to enable one-to-many automation of virtual machine creation, deployment, monitoring and management across distributed virtual machine host and network storage servers.
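
To give a flavor of what this standards-based API enables, here is a minimal sketch that uses the open source pywbem library to list the virtual systems a host’s CIM providers expose. The namespace and the Xen_ComputerSystem class name are assumptions about the SLES10 SP1 providers rather than confirmed details, so adjust them to your installation.

    # Sketch: listing virtual systems through the host's CIM virtualization
    # providers.  The namespace ('root/cimv2') and the Xen_ComputerSystem class
    # name are assumptions about the SLES10 SP1 providers.
    import pywbem

    conn = pywbem.WBEMConnection(
        'https://vmhost.example.com:5989',
        ('root', 'password'),
        default_namespace='root/cimv2')

    for vs in conn.EnumerateInstances('Xen_ComputerSystem'):
        # Name and EnabledState come from the CIM_ComputerSystem parent class;
        # EnabledState 2 means "Enabled" (running).
        state = 'running' if vs['EnabledState'] == 2 else 'not running'
        print(vs['Name'], state)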

Thus, SLES10 SP1 provides the necessary universal Linux foundation for Novell’s just-released ZENworks Data Center Automation products and for the Enterprise Workgroup Services soon to be released as Open Enterprise Server 2.

ZENworks Data Center Automation

ZENworks Orchestrator 1.1 is a grid-based distributed resource automation system for physical and virtual machines that supports full virtual machine life cycle management across networks of physical VM hosts. It automates the process of creating and managing virtual machines, providing centralized version control and distributed storage repository management for VM images, constraint-based adaptive deployment of VMs to capability-matched physical servers, and integrated physical and virtual (P&V) performance monitoring. Administrators can create and test VMs, patch and update them under version control, and, by designating a gold master version, schedule automated deployment to suitable and available production servers.

The ZENworks Orchestrator schedules work to managed servers in the form of compiled Python jobs. Jobs are units of work that are assigned to servers by a realtime resource scheduler that continuously evaluates available resources against pending requests. Physical servers, like virtual machines, are considered resources that advertise their capabilities in the form of facts describing the type and capacity of the resource. For a physical server, example facts might include the number and type of CPUs, memory, and direct-attached storage capacity. Static facts are attributes of a resource that don’t change; an example might be a server with VT-capable CPUs. Dynamic facts can change over time, perhaps due to physical hardware hotplug or memory ballooning of a virtual machine. Computed facts are calculated by the scheduler when referenced in a job control policy.

As an example, consider deploying a virtual machine into your data center production server pool. The VM requires two VT-x enabled CPUs, 512 MB of direct-attached OS image storage, 1 GB of memory, Gigabit Ethernet connectivity and access to a Fibre Channel SAN. These requirements are expressed as a set of deployment constraints – references to facts which are matched to available resources by the Orchestrator when scheduling the VM for deployment. Sophisticated resource allocation becomes possible when a number of constraints are combined into policy statements applied to groups of resources, matching supply with demand. The deployment of a virtual machine to a physical server is therefore unified by a general-purpose (grid-based) algorithm for assigning units of work to available resources, in a manner that’s respectful of competing work and shared capacity.
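
To make the fact-and-constraint idea concrete, here is a much-simplified sketch in plain Python. It is not ZENworks Orchestrator’s actual job definition language, and the fact names are invented for illustration; it only shows how a scheduler might filter candidate servers by matching a VM’s constraints against advertised facts.

    # Simplified sketch of constraint-based matching: each resource advertises
    # facts, and a deployment request is a set of constraints over those facts.
    # Plain Python for illustration only; fact names are invented.

    servers = [
        {'name': 'host1', 'cpu.vt': True,  'cpu.count': 4, 'memory.free_mb': 4096,
         'net.gige': True, 'san.fc': True},
        {'name': 'host2', 'cpu.vt': False, 'cpu.count': 2, 'memory.free_mb': 2048,
         'net.gige': True, 'san.fc': False},
    ]

    # Constraints for the example VM: two VT-enabled CPUs, 1 GB of memory,
    # Gigabit Ethernet and Fibre Channel SAN access.
    vm_constraints = [
        lambda f: f['cpu.vt'] is True,
        lambda f: f['cpu.count'] >= 2,
        lambda f: f['memory.free_mb'] >= 1024,
        lambda f: f['net.gige'] and f['san.fc'],
    ]

    def candidates(resources, constraints):
        """Return the resources whose facts satisfy every constraint."""
        return [r for r in resources if all(c(r) for c in constraints)]

    print([s['name'] for s in candidates(servers, vm_constraints)])  # ['host1']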

ZENworks VM Builder provides automation for creating and installing an OS into a VM. It accepts the definition of a VM, formatted according to the SVPC-V model for Virtual System Management, together with an operating system specification: OS type, installation source and response file. A number of different operating system types are supported, including SLES, NetWare, Open Enterprise Server, Red Hat and Windows. The builder creates instances of CIM Job to manage each outstanding build request and submits them to the Orchestrator for processing. The Orchestrator, using resource capability (fact) matching, schedules the build job to an appropriate server. The administrator may configure a separate pool of VM Builder servers dedicated to the purpose of creating VMs to order. It’s also possible to configure the VM Builder to borrow cycles from public test or even production servers.
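
As a rough illustration of the inputs just described, the dictionary below pairs a hypothetical VM definition with an operating-system specification (OS type, installation source, response file). The field names are invented for this sketch and are not VM Builder’s actual request format.

    # Hypothetical sketch of the information a VM build request carries:
    # a VM definition (SVPC-V style resource settings) plus an OS specification
    # giving the OS type, installation source and response (autoinstall) file.
    # Field names are invented for illustration.
    build_request = {
        'vm_definition': {
            'name': 'web01',
            'virtual_cpus': 2,
            'memory_mb': 1024,
            'disk_mb': 8192,
        },
        'os_spec': {
            'os_type': 'sles10',
            'install_source': 'nfs://installsrv/exports/sles10sp1',
            'response_file': 'autoyast-web01.xml',
        },
    }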

ZENworks VM Warehouse is a centralized repository for virtual machine definitions and OS images. VM definitions are stored in a format that extends the standard SVPC-V model for Virtual System Management and allows for version control of VM definitions. The VM Warehouse also manages the image files that are associated (in the CIM sense) with virtual machines. The CIM-based model supports VM personality (forms of identity) that overlays OS images when deployed, allowing multiple VMs to share the same base image file. Change control and patching thus scale with the number of common VMs sharing the same image, with each VM providing unique personalization. Upgrading a common OS image is done once and creates a new version of that image. Rolling out an upgrade to all dependent VMs can be scheduled by the Orchestrator, which uses a scalable and secure multicast-based file distribution protocol to update production servers. Rollback to a previous gold-master image is virtually instantaneous, in case an upgraded VM experiences problems in production. By managing VM definitions, OS image file associations and VM personality, the VM Warehouse (and its resource model) supports VMs as first-class managed IT assets.
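
A toy sketch of the shared-image idea follows, in plain Python; the structure is invented for illustration and is not the VM Warehouse’s actual data model. Many VM records reference one versioned base image, so upgrading the image once creates a new version that every dependent VM can be rolled forward to (or back from), while each VM keeps its own personality overlay.

    # Illustration only: many VMs share one versioned base image, each adding
    # its own personality (identity) overlay.  Not the VM Warehouse's actual
    # schema; it just shows why change control scales with shared images.
    from dataclasses import dataclass, field

    @dataclass
    class BaseImage:
        name: str
        versions: list = field(default_factory=list)   # e.g. ['1.0', '1.1']
        gold_master: str = '1.0'

    @dataclass
    class VirtualMachine:
        name: str
        image: BaseImage
        personality: dict = field(default_factory=dict)  # hostname, IP, identity, ...

    sles_image = BaseImage('sles10sp1-base', versions=['1.0'])
    vms = [VirtualMachine(f'web{i:02}', sles_image,
                          {'hostname': f'web{i:02}', 'ip': f'10.0.0.{10 + i}'})
           for i in range(1, 4)]

    # One upgrade of the shared image creates a new version for all three VMs;
    # rollback is just re-designating the previous version as gold master.
    sles_image.versions.append('1.1')
    sles_image.gold_master = '1.1'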

ZENworks P&V Monitoring extends an open source high performance cluster computing monitoring package called Ganglia. Novell actively participates in the Ganglia project and has contributed code for extensible Python-based probes. In a virtualized data center, with service-oriented workload mobility enabled by virtual machines and storage area network connectivity, it’s becoming increasingly important to correlate physical and virtual machine performance metrics and the relationships that now exist between them. When virtual machines migrate from one physical server to another, overall service-level performance (and ultimately availability) can be affected by other activities on the physical infrastructure involved. ZENworks P&V Monitoring provides tools to capture and chart the performance metrics of virtual versus physical machines.
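
For readers curious what an extensible Python probe looks like, here is a minimal sketch written in the style of a Ganglia Python metric module. The descriptor fields and callback contract shown are typical but may differ between Ganglia versions, and the “xm list” parsing is only an illustration, so treat this as an approximation rather than the code Novell contributed.

    # Sketch of a Ganglia-style Python metric module that reports the number of
    # running Xen guest domains on a VM host.  Descriptor fields are typical of
    # Ganglia's Python module interface but may differ by version.
    import subprocess

    def vm_count(name):
        """Callback invoked by the monitoring daemon to sample the metric."""
        out = subprocess.run(['xm', 'list'], capture_output=True, text=True).stdout
        # Subtract the header line and Domain-0 to count guest VMs only.
        return max(len(out.strip().splitlines()) - 2, 0)

    def metric_init(params):
        """Called once at startup; returns the metric descriptors this module provides."""
        return [{
            'name': 'running_vms',
            'call_back': vm_count,
            'time_max': 60,
            'value_type': 'uint',
            'units': 'VMs',
            'slope': 'both',
            'format': '%u',
            'description': 'Number of running Xen guest domains',
            'groups': 'virtualization',
        }]

    def metric_cleanup():
        """Called at shutdown; nothing to release in this sketch."""
        pass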

If you are physically attending LinuxWorld this week, we invite you to visit Novell’s booth, where we’re showing our latest standards-based data center virtualization and heterogeneous management automation products.

The future of (Network) Operating Systems…

rawipfel

December 21, 2006 10:25 am

Hi,

I’ve been working on clusters (and parallel processing generally) for quite a long time now, and that has shaped a perspective on service-oriented (cluster resource) modeling of distributed applications. I wanted to share a few thoughts on the future of Network Operating Systems from this point of view…

We’ve long considered (and have been treating) server-hosted applications as relocatable cluster resources. Thanks to data persistence in the storage network and secondary network addresses, services have already been somewhat virtualized and are relocatable across the physical servers in a high-availability cluster; service-oriented location transparency is a nice way to describe this. Two generations of Fibre Channel, and now the rise of commodity iSCSI Storage Area Networks, have helped our thinking. And as Brad has been writing, we’ve reached a level of sophistication that allows for automated service-level disaster recovery – cluster resources that can fail over from one data center cluster to another – NetWare or Linux – together with your most important asset – the persistent data that those cluster resources depend on and serve access to – whether it be file, mail or other valuable data.

I invite you to read Brad’s posts on Novell’s latest Business Continuance Cluster release.

Now something _really_ interesting is happening: as a PC-server based industry, we are taking the next step to make compute virtualization an intrinsic component of the network operating system. The integration of clusters, storage networking and virtual machines, plus some ideas, protocols and algorithms from the high performance cluster computing (aka parallel processing, aka Grid) community, is causing a convergence and an opportunity to rethink what network operating systems actually mean and what they will be able to offer in the future. And Novell’s work with open standards, in open source, creates the foundation for practical enterprise interoperability.

Distributed identity and trust creates the security foundation.

Virtual machines are the missing link. The active/active versus active/passive cluster resource deployment construct we’ve been working with over the last few years becomes a much more robust tool for managing services that are entirely self-contained, thanks to compute virtualization. We call this a proper separation of concerns. By wrapping a traditional cluster resource (a network-consumable service) inside a quality-of-service controlled virtual machine, we can dynamically provision the performance and availability of that service relative to available physical resources. Configuration is separate and follows the service. Code follows the service. And individual services gain even more mobility than our cluster-constrained resources – via resource management software that can orchestrate the combination of application service and data at service instantiation – i.e. deployment – rather than installation time.

Clusters of failover resources, and now virtual machines, clusters-of-clusters for automated geographic disaster recovery, and grids of virtual machines hosted by high-availability clusters: these are the things we’ve spent the last few years thinking about, for NetWare and Linux. And we are delivering these capabilities via products like OES, SLES10, ZENworks Orchestrator, BCC 1.1, and soon, SLES10-based OES2.

If you have time this holiday season, I’ve written a few more words on these topics and invite your feedback and thoughts on the future of the Network Operating System:

https://www.novell.com/connectionmagazine/2006/q4/tech_talk_9.html

Here at Novell, the company that offers Software for the Open Enterprise, we are certainly looking forward to 2007. A lot of good stuff is coming together…

Best wishes for a safe and happy holiday season. See you at Brainshare 2007 :-)

Robert

Virtual reality

rawipfel

October 28, 2006 2:27 pm

This kind of thing doesn’t happen (to me) very often, so I figured I’d write about it here before it fades away as a distant memory. I flew back to the UK on Monday, arrived Tuesday, and headed into the City of London for some customer visits. It was awesome to be back in London; I haven’t really spent any time there since the daily commute into the Docklands almost 20 years ago. And woah, how Docklands has changed… what was once the home of “EastEnders” is now a stainless steel metroplex, with 30th floor wine bars, underground shopping and buildings intersecting light railway and tube lines. Approximately 23 hours after arriving at Heathrow, I was headed back to Utah. The only issue was a lack of laptop power on the airplane, and generally in airports, where I met Jon “maddog” Hall while sharing a power outlet to get a boost before boarding one flight. He was running SuSE 10.1 on his laptop. And my ThinkPad T42p runs about 4 hours when playing with SLES10, Xen, Linux-HA (single-node cluster), and the Python WBEM client bindings to the DMTF CIM profiles for virtualization.

Aperi for Clusters

rawipfel

October 16, 2006 9:10 pm

Hi,

Some recent news: Novell joined Aperi – http://www.eclipse.org/aperi/

From http://wiki.eclipse.org/index.php/Aperi_Storage_Management_Project

“Aperi is a vendor-neutral, open, storage management framework designed to cultivate both an open-source community and an ecosystem for complementary products, capabilities, and services around the framework to promote greater consumer choice and foster competition.”

We think it’s beneficial for server software to know a bit more about storage (network) resources, especially for servers configured in clusters and for clusters hosting virtual machines as cluster resources with dependencies on SAN logical units.

For example, would it be useful to you to understand the relationship between your cluster, its SAN fabric and its logical units – perhaps querying the cluster to report SAN device-specific information corresponding to a specific cluster resource? Would it be useful to be able to diagram the SAN topology automatically from your cluster? Do you have ideas for improving the integration between clusters and SAN hardware?

Thanks, Robert

LinuxWorld demo

rawipfel

September 14, 2006 5:44 pm

Hi,

Thought it might be interesting to share some details of a cluster I helped build and demo for the Novell SLE 10 launch at LinuxWorld.

A couple of commenters on Jeff Jaffe’s blog pointed out a potential problem with running many virtual machines on one physical server: it creates a much larger outage should the server fail, because you lose all those VMs. We agree, and designed a solution into SLES10.
With SLES10, you can cluster physical servers and fail over VMs from one to another, using traditional cluster resources to manage each VM. SLES10 supports clustering of Xen VMs, and the following slides illustrate the LinuxWorld demo – a four node cluster sharing storage over iSCSI and running two virtual machines as relocatable cluster resources. The VM OS images are accessible to all nodes thanks to the Oracle cluster file system, and the cluster software monitors the virtual machines to enable local restart and failover between nodes.
https://wiki.novell.com/index.php/Image:LinuxWorld06HASFDemo.pdf

Robert

New Novell Cluster Services book

rawipfel

July 11, 2006 11:11 pm

Hi,

I recently received and am now the proud owner of a new Novell Press book: “Novell Cluster Services for NetWare and Linux”.

http://www.amazon.co.uk/gp/product/0672328453/026-8544332-6987645?v=glance&n=266239

Rob and Sander have done a great job, and I can certainly recommend their book to everyone interested in the latest version of Novell Cluster Services for Open Enterprise Server NetWare and Linux. The iSCSI and Cluster Upgrade chapters contain lots of good details, and there’s also coverage of Business Continuity Clustering. Thanks a lot, Rob and Sander.

Cheers,

Robert

 

Novell Cluster Services

rawipfel

June 14, 2006 5:33 pm

Hi everyone,
Hopefully those of you who own clusters appreciate the value of “no news is good news.”
Working clusters don’t create news. By design, services are available, users are content, administrators enjoy uptime (and their free time), and life is brilliant.
And so is the life of a cluster software developer; no news (uhm, blogs) means good news :-) But we’re all working hard laying interesting foundations for our highly available futures…
So here’s some recent news:
BCC 1.1 is in beta – the next version of Novell’s Business Continuance Cluster product for NetWare and now also OES Linux, with a number of great new features including health-based auto-site failover, support for mixed (NetWare and Linux) BCCs, and better integration with (CIM/SMI-S based) storage hardware.
(We think SMI-S is an important standard for storage management; and there will be more SMI-S related news in the future).
See https://www.novell.com/coolsolutions/feature/17291.html for more details and how to join the BCC 1.1 beta program.
From the NCS development lab – the 1.8.2 cluster codebase is now running natively on 64-bit (Linux) servers. We’ve got mixed 32-bit / 64-bit clusters working too, and are interested in feedback on how you plan (or might expect us to support) 32-bit to 64-bit migrations.
We’ve been experimenting with Serial Attached SCSI (SAS). IMHO, it’s a really interesting technology for building shared-disk clusters. Check out SNIA’s tutorial: SAS_and_SATA_Revolutionizing_Storage_Architectures.pdf
If anyone is interested in cluster resource health monitoring and automated recovery, we’d love to hear from you; we have some stuff running in the lab that leverages the Python process monitors built into ncs-resourced for NCS Linux.
We’ve also been releasing some of our code into open source. For example, the integration between NCS and EVMS available in OES Linux evolved into some code contributed upstream at evms.sourceforge.net. And we’ve created an EVMS plugin for SLES10’s iSCSI target software. This is kinda cool because it lets you use EVMS tools to create and manage logical (disk) volumes that are automatically exported to iSCSI as whole virtual disks that can be shared by your clustered servers running either OES NetWare or Linux.
Bye for now, Robert

OES NetWare and Linux Mixed-OS Clusters

rawipfel

March 19, 2006 8:24 am

Hi everyone! This is a brilliant place and time to be; I wanted to share some of the cool things happening at BrainShare 06.
My name is Robert Wipfel and I work on clustering for Novell. The last couple of years have been tremendously exciting for engineering. We’ll be demonstrating the latest version of Novell Cluster Services (NCS) in Open Enterprise Server Support Pack 2. Stop by the technology lab to see a five-node mixed NetWare/Linux cluster. Here is a screenshot of another big cluster from our engineering test lab:
[Screenshot: Sixteen Node Mixed OES Cluster]
The engineering team will be there, and we’d love to meet you to learn more about your OES Cluster experiences and expectations for the future. If you’d like to learn more about the future of OES clustering, we’ll be detailing the roadmap during session IO216 – Novell Open Enterprise Server Cluster and Disaster Recovery Roadmap and Futures.
There are many other great OES Cluster sessions too; see Jason Taylor’s post for a listing. In particular, Rob Bastiaansen and Sander van Vugt’s session TUT220 features a live demo showing all the details of how to migrate an OES NetWare cluster to Linux. Check out their forthcoming new Novell Press book too: Novell Cluster Services for Linux and NetWare.
Looking forward to seeing you at Brainshare!
