If you use shared devices with multipath I/O capability, ensure that your setup meets the requirements in this section.
When you use Device Mapper Multipath (DM-MP) with Novell Cluster Services, ensure that you set the path failover settings so that the paths fail when path I/O errors occur.
The default setting in DM-MP is to queue I/O if one or more HBA paths are lost. Novell Cluster Services does not migrate resources from a node that is set to Queue mode, because of the data corruption that double mounts can cause if an HBA path recovers before the node is rebooted.
IMPORTANT: The HBAs must be set to Failed mode so that Novell Cluster Services can automatically fail over storage resources if a disk path goes down.
Change the Retry setting in the /etc/modprobe.conf.local and /etc/multipath.conf files so that Novell Cluster Services works correctly with DM-MP. For information, see Section 4.9.2, Modifying the Retry Setting for the modprobe.conf.local File and Section 4.9.3, Modifying the Polling Interval, No Path Retry, and Failback Settings in the multipath.conf File.
Also consider changes as needed for the retry settings in the HBA BIOS. For information, see Section 4.9.4, Modifying the Port Down Retry and Link Down Retry Settings for an HBA BIOS.
The port_down_retry setting specifies the number of times to attempt to reconnect to a port if it is down when using multipath I/O in a cluster. Ensure that you have installed the latest HBA drivers from your HBA vendor. Refer to the HBA vendor’s documentation to understand the preferred settings for the device, then make any changes in the /etc/modprobe.conf.local file.
Use the following setting in the /etc/modprobe.conf.local file:
options qla2xxx qlport_down_retry=1
A change in the latest kernel and qla-driver affects how the time-out value is calculated. Without the patch, an extra five seconds is automatically added to the port_down_retry value to determine the time-out (dev_loss_tmo=port_down_retry_count+5), and option 1 (qlport_down_retry=1) is the best choice. With the patch, the extra five seconds is no longer added (dev_loss_tmo=port_down_retry_count). If you have installed the latest qla-driver, option 2 (qlport_down_retry=2) is the best choice.
For OES 2 SP2 and later, or if you have installed the latest kernel and qla-driver, use the following setting in the /etc/modprobe.conf.local file:
options qla2xxx qlport_down_retry=2
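After you edit /etc/modprobe.conf.local, the option takes effect when the driver is reloaded (typically after rebuilding the initrd with mkinitrd and rebooting the node). As a minimal sketch, you can confirm the value the running driver is actually using via sysfs, assuming the standard /sys/module location for module parameters:

```shell
# Print the qlport_down_retry value in use by the loaded qla2xxx module.
# /sys/module/<name>/parameters/ is the standard sysfs location for module
# parameters; the fallback message covers systems where the driver is not loaded.
param=/sys/module/qla2xxx/parameters/qlport_down_retry
if [ -r "$param" ]; then
    cat "$param"
else
    echo "qla2xxx module not loaded"
fi
```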
The goal of multipath I/O is to provide connectivity fault tolerance between the storage system and the server. When you configure multipath I/O for a stand-alone server, the retry setting protects the server operating system from receiving I/O errors for as long as possible: messages are queued until a multipath failover occurs and a healthy connection is available. However, when connectivity errors occur for a cluster node, you want the I/O failure to be reported so that a resource failover is triggered, instead of waiting for the multipath failover to be resolved. In cluster environments, you must modify the retry setting so that the cluster node receives an I/O error in relation to the cluster SBD verification process (recommended to be 50% of the heartbeat tolerance) if the connection to the storage system is lost.
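The 50% guideline above can be sketched numerically. The heartbeat tolerance value used here is a placeholder; substitute your cluster's actual setting:

```shell
# Hypothetical example: if the cluster's heartbeat tolerance is 8 seconds,
# the node should receive an I/O error within about half that window so that
# the SBD verification and resource failover can proceed in time.
heartbeat_tolerance=8                         # seconds (assumed value)
io_error_window=$((heartbeat_tolerance / 2))  # 50% of the heartbeat tolerance
echo "Report I/O errors within ${io_error_window} seconds"
```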
The polling interval for multipath I/O defines the time, in seconds, between the end of one path-checking cycle and the beginning of the next. The default interval is 5 seconds. An SBD partition has I/O every 4 seconds by default. A multipath check for the SBD partition is more useful if the multipath polling interval value is 4 seconds or less.
IMPORTANT: Ensure that you verify the polling_interval setting with your storage system vendor. Different storage systems can require different settings.
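As a sketch, lowering the polling interval to match the SBD I/O cadence would look like this in the defaults section of the /etc/multipath.conf file (the value 4 follows the guideline above; verify it with your vendor):

```
defaults {
  polling_interval 4   # check paths at least as often as SBD I/O (every 4 seconds)
}
```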
We recommend a retry setting of “fail” or “0” in the /etc/multipath.conf file when working in a cluster. This causes the resources to fail over when the connection to storage is lost. Otherwise, messages queue and the resource failover cannot occur.
IMPORTANT: Ensure that you verify the retry settings with your storage system vendor. Different storage systems can require different settings.
features "0" no_path_retry fail
The value fail is equivalent to a setting value of 0.
We recommend a failback setting of “manual” for multipath I/O in cluster environments in order to prevent multipath failover ping-pong.
failback "manual"
IMPORTANT: Ensure that you verify the failback setting with your storage system vendor. Different storage systems can require different settings.
For example, the following code shows the default polling_interval, no_path_retry, and failback commands as they appear in the /etc/multipath.conf file for EMC storage:
defaults {
  polling_interval     5
# no_path_retry        0
  user_friendly_names  yes
  features             0
}
devices {
  device {
    vendor               "DGC"
    product              ".*"
    product_blacklist    "LUNZ"
    path_grouping_policy "group_by_prio"
    path_checker         "emc_clariion"
    features             "0"
    hardware_handler     "1 emc"
    prio                 "emc"
    failback             "manual"
    no_path_retry        fail  # Set MP for failed I/O mode; any other non-zero value sets the HBAs for blocked I/O mode
  }
}
For information about configuring the multipath.conf file, see Managing Multipath I/O for Devices in the SLES 10 SP4 Storage Administration Guide.
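After you edit /etc/multipath.conf, the multipath daemon must re-read the file before the new settings apply. The following sketch uses standard multipath-tools commands (run as root; the guard covers systems where the tools are not installed):

```shell
# Reload and inspect the active multipath configuration.
if command -v multipathd >/dev/null 2>&1; then
    multipathd -k"reconfigure"   # re-read /etc/multipath.conf
    multipathd -k"show config"   # display the merged, active settings
    multipath -ll                # list multipath devices and current path states
else
    echo "multipath-tools not installed"
fi
```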
In the HBA BIOS, the default settings for the Port Down Retry and Link Down Retry values are typically set too high for a cluster environment. For example, there might be a delay of more than 30 seconds after a fault occurs before I/O resumes on the remaining HBAs. Reduce the delay time for the HBA retry so that its timing is compatible with the other timeout settings in your cluster.

For example, you can change the Port Down Retry and Link Down Retry settings to 5 seconds in the QLogic HBA BIOS:

Port Down Retry=5
Link Down Retry=5