The wise administrator builds redundancy into the systems he manages, and using a redundant array of independent disks (RAID) is one way to do it. A redundancy strategy, however, is not a backup strategy, and I would like to be clear that this article is not a replacement for a properly implemented backup strategy. I personally believe the best RAID solution is hardware RAID; when budgets are constrained, however, software RAID is an alternative. This article focuses on the process of installing the operating system onto a software RAID1 mirror.
The scenario is based on SUSE Linux Enterprise Server 10 Service Pack 1, and includes only the Base and Web and LAMP patterns. The server should have two disks of the same size, 2 GB in this case; the specific size is for clarity rather than an actual requirement. You will create two partitions of type Linux RAID on each of the disks, with each partition the same size on both disks. You will then create the software RAID devices using the Linux RAID partitions you created earlier. Finally, you will format each software RAID device: one as swap, the other as the root file system. The scenario has been tested on the i386 and x86_64 architectures.
- Start the SLES10 SP1 install as you usually do, until you get to the Installation Settings screen, then click Change, Partitions.
- Click Create Custom Partition Setup and Next.
- Select Custom Partitioning and Next.
- Create a 500MB Linux RAID (type 0xFD) partition for swap, and use the rest of the space for a Linux RAID (type 0xFD) partition for root.
- Create the corresponding partitions on the other disk.
- When you have finished creating all the Linux RAID partitions, your Expert Partitioner should look like Figure 5 below. Notice there is a partition of equal size on each disk that serves as a placeholder for swap and for root. Now you will assemble a mirror from each of these partition pairs.
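For reference, the same partition layout can be created from a shell (for example, a rescue system) instead of the YaST partitioner. This is a sketch only: it assumes the disks are /dev/sda and /dev/sdb, uses sfdisk with sizes in megabytes, and will destroy any existing data on both disks.

raid1:~ # sfdisk -uM /dev/sda << EOF
,500,fd
,,fd
EOF
raid1:~ # sfdisk -d /dev/sda | sfdisk /dev/sdb

The second command copies the partition table from the first disk to the second, which guarantees the partition pairs are identical in size and type (0xFD, Linux RAID autodetect).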
- Select RAID and Create RAID.
- Select RAID 1 (Mirroring), and Next.
- The current RAID device should be /dev/md0. Add both 500MB Linux RAID partitions to the md0 RAID.
- Once you have added both 500MB partitions to md0, click Next.
- Format the /dev/md0 device with a swap file system, and click Finish.
- Now repeat the previous four steps for the root partition. First, select RAID and Create RAID.
- Select RAID 1 (Mirroring) and Next.
- The current RAID device should be /dev/md1. Add both remaining Linux RAID partitions to the md1 RAID, and click Next.
- Format the /dev/md1 device with a Reiser file system, mounted on /, and click Finish.
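The same two arrays can also be assembled by hand with mdadm on a running or rescue system. A sketch, assuming the partition layout from the previous steps (/dev/sda1 and /dev/sdb1 for swap, /dev/sda2 and /dev/sdb2 for root):

raid1:~ # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
raid1:~ # mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
raid1:~ # mkswap /dev/md0
raid1:~ # mkreiserfs /dev/md1

Note that during the install you let YaST do all of this for you; the commands are useful mainly for understanding what YaST creates, or for rebuilding the arrays later.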
- When you're done, the partitioner screen should look something like this:
I created only two system partitions with a Linux RAID partition mirrored for each. If you have more system partitions, like /var, then you will need to have a pair of Linux RAID partitions and a /dev/md? device for each additional system partition.
- Click Finish.
- Notice that the boot loader section references the /dev/md? devices.
- Complete the installation as usual.
- Once the installation is complete, you need to finish the GRUB install. Since GRUB does not understand MD devices, it is only installed on the first disk. I like to make sure it is installed the same way on both disks.
- Login as root, and type "grub". Follow the steps in Figure 16 below.
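The GRUB shell session looks roughly like the following. This is a sketch that assumes the two disks are /dev/sda and /dev/sdb, and that the root array (/dev/md1) lives on the second partition of each disk, i.e. (hd0,1). Mapping (hd0) to each disk in turn installs an identical boot loader on both, so either disk can boot the system if the other fails.

raid1:~ # grub
grub> device (hd0) /dev/sda
grub> root (hd0,1)
grub> setup (hd0)
grub> device (hd0) /dev/sdb
grub> root (hd0,1)
grub> setup (hd0)
grub> quit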
Did it Work?
This section is intended to show what the system should look like using various commands after a successful install. Troubleshooting failed installs or partial installs is outside the scope of this article.
- Check the /etc/fstab file to ensure the software RAID devices are used to mount root and swap.
raid1:~ # cat /etc/fstab
/dev/md1 / reiserfs acl,user_xattr 1 1
/dev/md0 swap swap defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
debugfs /sys/kernel/debug debugfs noauto 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0
/dev/fd0 /media/floppy auto noauto,user,sync 0 0
The /proc/mdstat file shows the current status of the arrays; it is a brief version of the mdadm --detail output.
raid1:~ # cat /proc/mdstat
Personalities : [raid1] [raid0] [raid5] [raid4] [linear]
md1 : active raid1 sda2[0] sdb2[1]
1582336 blocks [2/2] [UU]
md0 : active (auto-read-only) raid1 sda1[0] sdb1[1]
513984 blocks [2/2] [UU]
unused devices: <none>
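While a mirror is still synchronizing (for example, right after the install finishes, or after a disk has been replaced), /proc/mdstat adds a progress line for the affected array. The output below is illustrative rather than captured from the test system; the percentages and speeds will vary:

md1 : active raid1 sdb2[2] sda2[0]
      1582336 blocks [2/1] [U_]
      [==>..................]  recovery = 12.6% (200064/1582336) finish=0.5min speed=40012K/sec

The [2/1] [U_] markers show that only one of the two devices is currently up; once the recovery reaches 100%, the array returns to [2/2] [UU].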
The /proc/cmdline file shows the kernel command line configured at boot time. The source for these parameters is /boot/grub/menu.lst, plus whatever is typed on the "Boot Options" line in the GRUB menu at boot time. The point here is to verify that the root= parameter points to the mirrored RAID array.
raid1:~ # cat /proc/cmdline
root=/dev/md1 vga=0x332 resume=/dev/md0 splash=silent showopts
The mdadm command provides the most complete view of the software RAID array. The state should be "clean" and "active sync" for both devices in the mirror.
raid1:~ # mdadm --detail /dev/md1
Version : 00.90.03
Creation Time : Tue Mar 4 01:22:48 2008
Raid Level : raid1
Array Size : 1582336 (1545.51 MiB 1620.31 MB)
Used Dev Size : 1582336 (1545.51 MiB 1620.31 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Thu Mar 6 18:56:46 2008
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 3858d3c1:b6e37b6d:8e2de91f:260fb55b
Events : 0.5205
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
Installing the SLES operating system this way is rather straightforward. The only real catch is to make sure you install GRUB onto both disks once the install is complete. The mirrored array provides redundancy for the system disk: if one of the disks becomes damaged or fails, you can recover quickly using the other mirrored disk. However, always make sure you have a current working backup.
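If a disk does fail, the broken half of each mirror can be removed and a replacement added back with mdadm. A sketch, assuming it is /dev/sdb that failed:

raid1:~ # mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
raid1:~ # mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2

After physically replacing the disk and recreating the same partition table on it (for example with sfdisk -d /dev/sda | sfdisk /dev/sdb), add the new halves back and watch /proc/mdstat while the mirrors rebuild:

raid1:~ # mdadm /dev/md0 --add /dev/sdb1
raid1:~ # mdadm /dev/md1 --add /dev/sdb2

Remember to rerun the GRUB steps above on the replacement disk so that it remains bootable on its own.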
Disclaimer: As with everything else at Cool Solutions, this content is definitely not supported by Novell (so don't even think of calling Support if you try something and it blows up).
It was contributed by a community member and is published "as is." It seems to have worked for at least one person, and might work for you. But please be sure to test, test, test before you do anything drastic with it.