Cool Solutions

Novell Cluster Services


June 14, 2006 5:33 pm





Hi everyone,
Hopefully those of you who run clusters appreciate the value of “no news is good news”.
Working clusters don’t create news. That’s by design: services are available, users are content, administrators enjoy uptime (and their free time), and life is brilliant.
And so is the life of a cluster software developer; no news (uhm, blogs) means good news 🙂 but we’re all working hard laying interesting foundations for our highly available futures…
So here’s some recent news:
BCC 1.1 is in beta – the next version of Novell’s Business Continuance Cluster product for NetWare, and now also OES Linux, with a number of great new features, including health-based automatic site failover, support for mixed (NetWare and Linux) BCCs, and better integration with (CIM/SMI-S-based) storage hardware.
(We think SMI-S is an important standard for storage management; and there will be more SMI-S related news in the future).
See the BCC 1.1 beta program page for more details and how to join.
From the NCS development lab – the 1.8.2 cluster codebase is now running natively on 64-bit (Linux) servers. We’ve got mixed 32-bit / 64-bit clusters working too, and are interested in feedback on how you plan (or might expect us to support) 32-bit to 64-bit migrations.
We’ve been experimenting with Serial Attached SCSI (SAS). IMHO, it’s a really interesting technology for building shared-disk clusters. Check out SNIA’s tutorial: SAS_and_SATA_Revolutionizing_Storage_Architectures.pdf
If anyone is interested in cluster resource health monitoring and automated recovery, we’d love to hear from you; we have some stuff running in the lab that leverages the Python process monitors built into ncs-resourced for NCS Linux.
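To illustrate the general idea (this is a hypothetical sketch, not the actual ncs-resourced interface — the function names and restart policy here are my own assumptions), a process-level health monitor can be as simple as polling a child process and restarting it when it dies:

```python
# Hypothetical sketch of process health monitoring with automated
# recovery -- illustrative only, not the real ncs-resourced API.
import subprocess
import time

def is_alive(proc: subprocess.Popen) -> bool:
    """True while the monitored process is still running."""
    return proc.poll() is None

def monitor(cmd, checks=3, interval=1.0):
    """Run cmd, poll its health, and restart it if it dies.

    Returns the number of restarts performed during the monitoring window.
    """
    proc = subprocess.Popen(cmd)
    restarts = 0
    for _ in range(checks):
        time.sleep(interval)
        if not is_alive(proc):
            # Automated recovery: relaunch the failed process.
            proc = subprocess.Popen(cmd)
            restarts += 1
    if is_alive(proc):
        proc.terminate()
    proc.wait()
    return restarts
```

A real resource monitor would of course do more than blindly restart — for example, escalating to a resource failover to another node after repeated restart failures.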
We’ve also been releasing some of our code into open source. For example, the integration between NCS and EVMS available in OES Linux evolved into code that we’ve contributed upstream. And we’ve created an EVMS plugin for SLES10’s iSCSI target software. This is kinda cool because it lets you use EVMS tools to create and manage logical (disk) volumes that are automatically exported via iSCSI as whole virtual disks that can be shared by your clustered servers running either OES NetWare or Linux.
Bye for now, Robert






  1. The article on SAS was an interesting read. Clearly part of a verbal presentation, but still quite good. It really is a cross between SCSI and FC. The lack of zoning will be an issue at first, but they said it’s under consideration for SAS2. Thanks for pointing it out!

  2. By: Henok Ephraim

    Do you see a need to integrate the failover capabilities that virtual machines have for hardware failure (e.g., VMware has an HA cluster for VMware) with the application and hardware failover that clustering software such as Novell Clustering provides? Is the Xen team currently working with the BCS team on an integrated software design?

    What do you think is the future of BCS (Novell Clustering) in light of the growing need to consolidate servers using virtual servers? In one way, the multi-machine hardware design assumed by a clustering solution seems to point in the opposite direction from the virtualized world?

    Any light you can shed on this would be appreciated.

    I hope this topic is not out of context for this blog entry. If so, I deeply apologize.

  3. Hi,

    Yes, there is certainly an opportunity and a necessity to integrate HA clusters and virtual machines. With multiple virtual machines running per physical server, it’s critical to be able to fail over virtual machines to alternate physical servers should a server fail. We’ve been working on this for SLES10, e.g. using the Oracle Cluster File System (OCFS2) for virtual machine images, and providing a resource agent for Xen in Heartbeat2. The idea is to treat virtual machines as traditional cluster resources (~ relocatable services) that can be assigned to available clustered servers, with dependencies on shared storage for e.g. the per-VM OS image and persistent data. There’s a short video demo at:
    Select “View the clustering demo” to watch a demo that creates a cluster resource used to manage a Xen VM, plus failover of that VM to an alternate server in the cluster.
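Treating a Xen VM as a cluster resource in Heartbeat2 might look roughly like the following CIB fragment (a hedged sketch: the resource ID, config-file path, and timeout values are illustrative assumptions, not taken from the demo):

```xml
<!-- Hypothetical Heartbeat2 CIB fragment: a Xen VM managed as an
     ordinary cluster resource via the ocf:heartbeat:Xen agent.
     IDs, paths, and timings below are illustrative only. -->
<primitive id="vm1" class="ocf" provider="heartbeat" type="Xen">
  <instance_attributes id="vm1-attrs">
    <attributes>
      <!-- Xen domain config file for this VM (assumed path) -->
      <nvpair id="vm1-xmfile" name="xmfile" value="/etc/xen/vm/vm1"/>
    </attributes>
  </instance_attributes>
  <operations>
    <!-- Periodic health check; on failure the cluster can restart
         or relocate the VM to another node -->
    <op id="vm1-monitor" name="monitor" interval="10s" timeout="30s"/>
  </operations>
</primitive>
```

The point of the design is that once the VM is expressed as a resource like any other, the existing cluster machinery (monitoring, ordering, failover to alternate nodes) applies to it unchanged — typically with an added dependency on the shared storage holding the VM image.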