
Using LVM on Multipath (DM MPIO) Devices

This document (7007498) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise Server 12
SUSE Linux Enterprise Server 11
SUSE Linux Enterprise Server 10
SUSE Linux Enterprise Server 9
Logical Volume Management (LVM / LVM2)
Multipath IO (MPIO)

Situation

After implementing multipath (MPIO) on a LUN configured for use with LVM, iostat showed that only one of the active paths was in use. The LVM configuration had not been modified since the original server installation, and the default LVM filter (found in /etc/lvm/lvm.conf) was as follows:

filter = [ "r|/dev/.*/by-path/.*|", "r|/dev/.*/by-id/.*|", "a/.*/" ]

This filter rejects all block devices found under /dev/.*/by-path and /dev/.*/by-id, and accepts all other block devices. Because multipath device nodes are stored in /dev/disk/by-id on SLES11 (and /dev/disk/by-name on SLES9 and SLES10), these nodes are never scanned with this default configuration; LVM instead activates on the raw /dev/sd* devices, which bypasses the multipath layer and forfeits fault tolerance and load balancing.

Messages similar to the following are one indication of an incorrect configuration:

Found duplicate PV pooyRaGTG0cDRqMJ2pesPpwhJdor78xQ: using /dev/sdr not /dev/sdb
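Another quick check is `pvs`: with the filter misconfigured as above, the physical volume is reported on a raw path device rather than on a device-mapper node. Illustrative output only, assuming the LUN carries a PV in a volume group named system_vg:

# pvs
  PV         VG        Fmt  Attr PSize  PFree
  /dev/sdb   system_vg lvm2 a--  50.00g     0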

Resolution

To resolve this issue, LVM must be configured to scan only the multipath device nodes - and specifically not the raw device nodes that represent the individual paths to the SAN (/dev/sd*). The LVM filter (found in /etc/lvm/lvm.conf) determines which devices are scanned.

Before determining the best LVM filter for a specific configuration, confirm the location and naming convention for the MPIO device nodes:

# multipath -l
36006016088d014007e0d0d2213ecdf11 dm-1 DGC,RAID 5
size=50G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
|-+- policy='round-robin 0' prio=-1 status=active
| `- 5:0:1:0 sdd 8:48 active undef  running
`-+- policy='round-robin 0' prio=-1 status=enabled
  `- 5:0:0:0 sdc 8:32 active undef  running

The `multipath -l` output provides the name (in this case, the WWID) of the LUN. This name is used in the creation of the device nodes found under /dev/disk/by-id:

# ls -l /dev/disk/by-id/
lrwxrwxrwx 1 root root 10 2011-01-06 11:42 dm-name-36006016088d014007e0d0d2213ecdf11 -> ../../dm-1
lrwxrwxrwx 1 root root 10 2011-01-06 11:42 dm-uuid-mpath-36006016088d014007e0d0d2213ecdf11 -> ../../dm-1
lrwxrwxrwx 1 root root  9 2011-01-05 14:40 edd-int13_dev80 -> ../../sda
lrwxrwxrwx 1 root root 10 2011-01-05 14:40 edd-int13_dev80-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 2011-01-05 14:40 edd-int13_dev80-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 2011-01-05 14:40 edd-int13_dev80-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 2011-01-05 14:40 scsi-36001c230ce31b9000eb0fa1e1cf986e7 -> ../../sda
lrwxrwxrwx 1 root root 10 2011-01-05 14:40 scsi-36001c230ce31b9000eb0fa1e1cf986e7-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 2011-01-05 14:40 scsi-36001c230ce31b9000eb0fa1e1cf986e7-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 2011-01-05 14:40 scsi-36001c230ce31b9000eb0fa1e1cf986e7-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 2011-01-06 11:42 scsi-36006016088d014007e0d0d2213ecdf11 -> ../../dm-1

The above output shows three names for the LUN (dm-name-<WWID>, dm-uuid-mpath-<WWID>, and scsi-<WWID>), all of which are symlinks to the /dev/dm-1 device. Any of these names can be used in the LVM configuration, but as some naming schemes can also produce links to individual path devices (such as the edd-* and scsi-<ID> names, which symlink to /dev/sda*), using the dm-name-* or dm-uuid-mpath-* names is recommended.
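To confirm which node a given name resolves to, follow the symlink chain with readlink. For example, using the WWID from the output above:

# readlink -f /dev/disk/by-id/dm-name-36006016088d014007e0d0d2213ecdf11
/dev/dm-1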

Given the above configuration, either of the following LVM filters is recommended:

filter = [ "a|/dev/disk/by-id/dm-uuid-.*mpath-.*|", "r/.*/" ]
filter = [ "a|/dev/disk/by-id/dm-name-.*|", "r/.*/" ]

Each of these filters accepts only the matching multipath device nodes (/dev/disk/by-id/dm-uuid-.*mpath-.* or /dev/disk/by-id/dm-name-.*, respectively) and rejects everything else. (Note - the dm-uuid-.*mpath-.* syntax accepts both entire devices and partitions on those devices.) As the filter is processed in order, ending it with a rejection of all remaining devices ensures that unexpected devices are not activated improperly.
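The partition matching works because udev names a partition on a multipath map with an additional part<N> component, which the dm-uuid-.*mpath-.* pattern still matches. An illustrative listing, assuming the example LUN carried one partition (the dm-2 target is hypothetical):

# ls -l /dev/disk/by-id/dm-uuid-*
lrwxrwxrwx 1 root root 10 2011-01-06 11:42 /dev/disk/by-id/dm-uuid-mpath-36006016088d014007e0d0d2213ecdf11 -> ../../dm-1
lrwxrwxrwx 1 root root 10 2011-01-06 11:42 /dev/disk/by-id/dm-uuid-part1-mpath-36006016088d014007e0d0d2213ecdf11 -> ../../dm-2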

After modifying the filter, rescan for LVM physical volumes (PVs) using `pvscan`. (On SLES12, first restart the lvm2-lvmetad service, e.g. with `systemctl restart lvm2-lvmetad`, so the cached device information is rebuilt.) If the configuration is correct, the output should appear similar to the following:
# pvscan
  PV /dev/disk/by-id/dm-name-36006016088d014007e0d0d2213ecdf11   VG system_vg   lvm2 [50.00 GB / 0    free]
  Total: 1 [50.00 GB] / in use: 1 [50.00 GB] / in no VG: 0 [0   ]

If the PV is not found, or another error is returned, check the filter line in /etc/lvm/lvm.conf for typos and re-run the scan. (Further information can be obtained using `pvscan -vvv`.) If the PV is found on the correct /dev/disk/by-id device, refresh the rest of the LVM configuration by performing a `vgscan`, followed by `lvscan`. The configuration can then be further confirmed using dmsetup as follows:
# dmsetup ls --tree
system_vg-data_lv (253:1)
 └─36006016088d014007e0d0d2213ecdf11 (253:0)
    ├─ (8:32)
    └─ (8:48)

The above dmsetup output confirms that the 'system_vg' volume group is being activated through the device-mapper device node, which is in turn accessible through two distinct paths (8:32 and 8:48, i.e. sdc and sdd in the earlier multipath output). (Note - the distinction between active and passive paths is not visible through this tool.)
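To inspect the state and priority of each path, `multipath -ll` can be used instead. Illustrative output modeled on the example LUN above - the prio values and path states depend on the array and its priority checker:

# multipath -ll 36006016088d014007e0d0d2213ecdf11
36006016088d014007e0d0d2213ecdf11 dm-1 DGC,RAID 5
size=50G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 5:0:1:0 sdd 8:48 active ready  running
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 5:0:0:0 sdc 8:32 active ready  running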

The instructions for SLES9 and SLES10 are almost identical to those for SLES11. The only difference is that the MPIO device nodes are found in the /dev/disk/by-name directory. Confirm the device names using the instructions listed above, and update the LVM filter accordingly; a sketch follows.
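For example, assuming that only device-mapper multipath nodes appear under /dev/disk/by-name (the usual case on SLES9 and SLES10), a filter along these lines would work:

filter = [ "a|/dev/disk/by-name/.*|", "r/.*/" ]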

Additional Information

When LVM is in use on local (non-SAN) devices, the filter must be modified to also scan those local devices, as sketched below. Further information on this topic can be found in TID 3617600: Using LVM on local and SAN attached devices.
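For instance, assuming the local system disk is /dev/sda (hypothetical - adjust to the actual hardware) and all SAN LUNs are multipathed, a combined filter might look like:

filter = [ "a|/dev/disk/by-id/dm-name-.*|", "a|^/dev/sda[0-9]*$|", "r/.*/" ]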

Note: After modifying the LVM filter, run `mkinitrd` (which on SLES12 is a wrapper around `dracut -f`) to rebuild the initrd with the new configuration, so the filter also takes effect at boot time.

Information on configuring MPIO can be found in the SLES 11 Storage Administration Guide.

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 7007498
  • Creation Date: 06-JAN-11
  • Modified Date: 07-NOV-17
  • Product: SUSE Linux Enterprise Server
