B.2 BCC Deployment Considerations

The following sections discuss the software and hardware requirements of the BCC Virtual File System management entry points and the modules that enhance BCC:

B.2.1 Software

The BCCVFS.NLM module provides the BCC Virtual File System management entry points. This NLM depends on other BCC NLMs; as such, loading BCCVFS.NLM automatically loads all of the other required BCC-related NLMs.
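
Conceptually, the auto-load behavior means that loading BCCVFS.NLM first loads any required modules that are not already loaded. The short Python sketch below models only that idea; the dependency names (other than BCCVFS.NLM) are placeholders, and the actual loading is performed by the NetWare loader, not by code like this.

  # Conceptual model of NLM auto-loading: loading a module first loads its
  # dependencies. Dependency names other than BCCVFS.NLM are hypothetical.
  dependencies = {
      "BCCVFS.NLM": ["BCC_DEP_A.NLM", "BCC_DEP_B.NLM"],  # placeholder names
      "BCC_DEP_A.NLM": [],
      "BCC_DEP_B.NLM": [],
  }
  loaded: list[str] = []

  def load(module: str) -> None:
      """Load `module`, auto-loading any not-yet-loaded dependencies first."""
      if module in loaded:
          return
      for dep in dependencies.get(module, []):
          load(dep)
      loaded.append(module)
      print(f"loaded {module}")

  load("BCCVFS.NLM")   # dependencies load first, then BCCVFS.NLM itself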

For more information on installing and setting up BCC components, see the BCC product documentation.

B.2.2 Hardware

The hardware requirements for the BCC Virtual File System management entry points are the same as those for the Business Continuity Cluster. For further details, see the Business Continuity Cluster Services requirements.

After installation, BCC becomes an integral part of the NCS product. Therefore, each individual cluster in BCC has the same hardware requirements as a standard NCS cluster. For the current hardware requirements, see the Novell Cluster Services System product page.

The following sections discuss some of the hardware design considerations as you deploy your BCC system:

SAN Requirements

A Storage Area Network (SAN) is required for BCC as a whole. The SAN must support some form of cluster access control through host initiators that connect to the storage target.

For example, in a Fibre Channel SAN, this can be accomplished with primary and secondary logical unit numbers (LUNs) and LUN masking. It can also be accomplished by using Novell’s iSCSI product with LDAP Access Control Lists (ACLs).
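
LUN masking itself is configured through vendor-specific SAN management tools, but conceptually it is an access-control table that maps each host initiator to the LUNs it is allowed to see. The following Python sketch illustrates only that concept; the WWPN strings, LUN numbers, and helper function are hypothetical and do not correspond to any particular vendor’s interface.

  # Conceptual illustration of LUN masking: each host initiator (identified
  # here by a hypothetical WWPN string) is allowed to see only the LUNs
  # assigned to its cluster. Real SANs enforce this in the storage array or
  # switch fabric, not in host-side Python code.

  # Hypothetical masking table: initiator WWPN -> set of visible LUNs
  lun_masks = {
      "50:06:01:60:10:20:30:41": {0, 1},   # node in the primary cluster
      "50:06:01:60:10:20:30:42": {0, 1},   # node in the primary cluster
      "50:06:01:61:10:20:30:51": {2, 3},   # node in the secondary cluster
  }

  def visible_luns(initiator_wwpn: str) -> set[int]:
      """Return the LUNs a given initiator is permitted to access."""
      return lun_masks.get(initiator_wwpn, set())

  if __name__ == "__main__":
      for wwpn, luns in lun_masks.items():
          print(f"{wwpn} -> LUNs {sorted(luns)}")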

iSCSI is a standard for running the SCSI block storage protocol over high-speed TCP/IP networks. iSCSI support in NetWare 6.5 lets you create low-cost storage area networks (SANs) using standard Ethernet hardware, the same hardware used in a traditional LAN. It consists of software that you add to your existing NetWare servers to create a SAN and a NetWare cluster.

The iSCSI “initiator” software is installed and configured on the servers in the network that access the shared storage; initiators can be cluster servers. They use the iSCSI protocol to communicate with an iSCSI storage server, or “target,” over a TCP/IP network. The “target” software is installed on a NetWare server and provides access to shared disks through the iSCSI protocol, enabling that server to function as a disk controller for the shared disk system. iSCSI is configured and managed through Novell Remote Manager.
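
The initiator/target split described above can be illustrated with a small, purely conceptual sketch: a toy “target” process that owns an in-memory disk image and serves block reads over TCP, and a toy “initiator” that requests a block by offset and length. This is not the iSCSI protocol (a real iSCSI target listens on TCP port 3260 and is configured on NetWare through Novell Remote Manager); the port, framing, and names below are hypothetical.

  # Toy illustration of the initiator/target roles: a "target" process owns
  # the storage and serves block reads over TCP; an "initiator" asks for a
  # block by offset and length. This is NOT the iSCSI protocol, only a
  # sketch of the division of labor described above.
  import socket
  import struct
  import threading
  import time

  BLOCK_DEVICE = bytearray(b"shared cluster data " * 64)   # stand-in for a shared disk
  HOST, PORT = "127.0.0.1", 9999                            # placeholder address, not the
                                                            # standard iSCSI port (TCP 3260)

  def target_server():
      """Toy "target": accept one connection and answer a single block-read request."""
      with socket.create_server((HOST, PORT)) as srv:
          conn, _ = srv.accept()
          with conn:
              offset, length = struct.unpack("!II", conn.recv(8))
              conn.sendall(bytes(BLOCK_DEVICE[offset:offset + length]))

  def initiator_read(offset: int, length: int) -> bytes:
      """Toy "initiator": request `length` bytes starting at `offset` from the target."""
      with socket.create_connection((HOST, PORT)) as conn:
          conn.sendall(struct.pack("!II", offset, length))
          return conn.recv(length)

  if __name__ == "__main__":
      threading.Thread(target=target_server, daemon=True).start()
      time.sleep(0.2)                       # give the toy target a moment to start listening
      print(initiator_read(0, 20))          # b'shared cluster data '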

To achieve acceptable levels of fault tolerance or business continuity, one SAN is needed for each standard cluster in BCC. Therefore, a minimum of two SANs is required.

For additional details, refer to the related iSCSI documentation for NetWare.

Network Requirements

BCC has the following network requirements:

High Speed Storage Area Network (SAN): Use a SAN (for example, a Fibre Channel SAN or a Gigabit Ethernet SAN) for the disk I/O channel to the individual cluster nodes. To achieve acceptable levels of fault tolerance or business continuity, one SAN is needed for each individual cluster in BCC; therefore, a minimum of two SANs is required.

It is recommended that the SANs in a BCC be of the same model and manufacturer, and that they be YES tested and approved. For a list of YES tested and approved hardware, see the YES CERTIFIED Bulletin Search.

The selected SAN hardware should also support the Novell SAN Management Interface (NSMI). This is required so the BCC can automate the promotion (or masking) of the LUNs on the secondary device. For more information, see Section B.4, Novell SAN Management Interface.

File Replication: One of two forms of file replication is required between SAN islands (a conceptual sketch of the difference follows this list):

  • SAN-based replication. This can be synchronous or asynchronous replication at the block level using the SAN hardware. It requires a Fibre Channel or Gigabit Ethernet network with a low bit error rate between the individual SAN islands, which is necessary to mirror disk I/O between the two distinct SANs in a BCC.

  • Host-based replication. This can also be synchronous or asynchronous. Real-time mirroring requires a Fibre Channel or Gigabit Ethernet network with a low bit error rate between the individual SAN islands, which is necessary to mirror disk I/O between the two distinct SANs in a BCC.
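
The practical difference between the two modes is when a write is acknowledged: synchronous replication completes a write only after the remote copy is updated, while asynchronous replication acknowledges the write locally and ships the change to the remote SAN later. The following Python sketch is a conceptual model of that distinction only; the names are hypothetical and do not represent any SAN or host-based replication product.

  # Conceptual contrast between synchronous and asynchronous replication.
  # "local" and "remote" stand in for the two SAN islands; the queue models
  # the replication link. Hypothetical names, not a real replication API.
  from collections import deque

  local_disk: dict[int, bytes] = {}
  remote_disk: dict[int, bytes] = {}
  replication_queue: deque[tuple[int, bytes]] = deque()

  def write_synchronous(block: int, data: bytes) -> None:
      """Write completes only after BOTH copies are updated (higher latency,
      but no data loss if the primary site is destroyed immediately afterward)."""
      local_disk[block] = data
      remote_disk[block] = data        # wait for the remote update before returning

  def write_asynchronous(block: int, data: bytes) -> None:
      """Write completes as soon as the local copy is updated; the remote copy
      lags behind, so the most recent writes can be lost in a site failure."""
      local_disk[block] = data
      replication_queue.append((block, data))   # shipped to the remote SAN later

  def drain_replication_queue() -> None:
      """Background task that applies queued writes to the remote SAN."""
      while replication_queue:
          block, data = replication_queue.popleft()
          remote_disk[block] = data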

IP Connectivity: Regardless of the number of eDirectory trees used for the BCC, IP connectivity is required between the servers that host the BCC. IP (NCP on port 524) is used between clusters for internal BCC communication. Furthermore, IP is used by DirXML to replicate the various eDirectory objects between the two clusters and trees in a single- or multiple-tree setup.
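
Because inter-cluster BCC communication uses NCP on TCP port 524, one simple pre-deployment check is to verify that each BCC server in one cluster can open a TCP connection to port 524 on the servers in the peer cluster. The following Python sketch is a hypothetical check of that kind, not a Novell tool; the host names are placeholders for your own cluster nodes.

  # Hypothetical connectivity check: verify TCP port 524 (NCP) is reachable
  # from this host to the BCC servers in the peer cluster. Host names below
  # are placeholders for your own cluster nodes.
  import socket

  PEER_CLUSTER_NODES = ["node1.cluster2.example.com", "node2.cluster2.example.com"]
  NCP_PORT = 524

  def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return True
      except OSError:
          return False

  if __name__ == "__main__":
      for node in PEER_CLUSTER_NODES:
          status = "ok" if can_reach(node, NCP_PORT) else "UNREACHABLE"
          print(f"{node}:{NCP_PORT} {status}")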

Virtual IP Addresses (VIPA) (Optional): A dedicated subnet is required for each cluster’s virtual IP addresses. VIPA requires that all virtual IP addresses bound to a particular server or cluster node be contained within a unique and otherwise unused subnet. If Variable Length Subnet Masks (VLSM) are used, the cost imposed by this requirement can be reduced. For more information, see Virtual IP Addresses in the Novell TCP/IP Administration Guide for NetWare 6.5.

NOTE: The VIPA subnet is independent of and separate from the subnet required for NCS heartbeats. For more information, see Heartbeat Settings in “Cluster Container” on page 36.
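
To make the subnet requirement concrete, the following Python sketch uses the standard ipaddress module to verify that every virtual IP address planned for a cluster falls inside that cluster’s dedicated subnet, and shows how a small VLSM prefix (for example, a /29 rather than a /24) keeps the dedicated subnet from wasting addresses. The subnets and addresses are placeholders.

  # Hypothetical VIPA subnet check: every virtual IP address bound to a
  # cluster must fall inside that cluster's dedicated, otherwise-unused
  # subnet. With VLSM, a small prefix such as a /29 can serve a cluster's
  # handful of virtual IPs instead of consuming a whole /24.
  import ipaddress

  # Placeholder plan: one dedicated VIPA subnet per cluster.
  vipa_plan = {
      "cluster1": ("10.1.1.0/29", ["10.1.1.1", "10.1.1.2"]),
      "cluster2": ("10.1.2.0/29", ["10.1.2.1", "10.1.2.2"]),
  }

  for cluster, (subnet, vips) in vipa_plan.items():
      net = ipaddress.ip_network(subnet)
      for vip in vips:
          addr = ipaddress.ip_address(vip)
          print(f"{cluster}: {vip} in {subnet}? {addr in net}")
      # A /29 provides 6 usable host addresses, usually plenty for VIPAs.
      print(f"{cluster}: {net.num_addresses - 2} usable addresses in {subnet}")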

VIPA also requires Routing Information Protocol support (RIP-1 or RIP-2) to propagate routing information to routers on the network. The Open Shortest Path First (OSPF) routing protocol is not currently supported by the Novell implementation of VIPA. For more information about RIP, see Novell Documentation: NetWare 6.5 RIP.

NOTE: BCC supports VIPA as an optional network technology. VIPA has many benefits, but the requirement for RIP support on the network, as well as the requirement for a dedicated subnet for each cluster, might be too restrictive for some environments.

Network Performance Considerations

After a failover has been initiated, BCC fails over all cluster resources from a presumably destroyed primary cluster to the secondary cluster. Because specific BCC failover times depend upon the network hardware configuration and cluster resources that are running, they are difficult to predict.

For example, consider a cluster resource that takes several minutes to initialize and load. By definition, a BCC failover is not complete until all cluster resources are running and able to accept traffic or requests. In this situation, the BCC failover takes several minutes to complete because of the slow or startup-intensive cluster resource.

In this case, you can modify the failover priorities of the cluster resources or load the problematic cluster resources on a designated node to minimize this potential delay. Furthermore, if VIPA is used in a BCC, additional time is required for network route convergence to occur.

For example, the RIP protocol specifies a 180-second time to live (TTL) value for all route information. During this convergence period, it is possible that some clients cannot connect to the host cluster nodes using the virtual IP address.

Finally, there are many factors outside the control of BCC that can affect end-user performance. Two of these factors are:

  • A Resolving Agent (outside of your network): One example is a DNS cache located at some third-party domain on the Internet. The end user might have to wait for DNS information to propagate before being able to resolve cluster resources (such as a highly available Web server). If you face this issue, consider VIPA as a possible solution.

  • SAN Hardware: Making the shared data on the secondary SAN available to the nodes in the secondary cluster is one of the required steps in the BCC failover process. This is done in a Fibre Channel environment by promoting the LUNs on the SAN from secondary to primary. Depending on the hardware, this could take some time to complete. For more information, see Section B.4, Novell SAN Management Interface.

IMPORTANT: Given these parameters, the goal of the BCC software is to complete the failover of all cluster resources, from the primary cluster to the secondary cluster, in less than 5 minutes.
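
One way to sanity-check that goal against your own environment is to add up the worst-case contributors described in this section: the load and initialization time of the slowest cluster resource, the LUN promotion time on the secondary SAN, and, if VIPA is used, RIP route convergence. The figures in the following Python sketch are illustrative placeholders, not measurements.

  # Illustrative failover-time budget. The numbers are placeholders; measure
  # your own resource load times and SAN LUN promotion times.
  FAILOVER_GOAL_SECONDS = 5 * 60          # stated BCC goal: under 5 minutes

  budget = {
      "slowest cluster resource load": 120,      # placeholder measurement
      "LUN promotion on secondary SAN": 60,      # placeholder measurement
      "RIP route convergence (VIPA only)": 180,  # RIP 180-second value cited above
  }

  total = sum(budget.values())
  for step, seconds in budget.items():
      print(f"{step}: {seconds} s")
  print(f"estimated worst case: {total} s "
        f"({'within' if total <= FAILOVER_GOAL_SECONDS else 'exceeds'} the "
        f"{FAILOVER_GOAL_SECONDS // 60}-minute goal)")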