Avoid Bootloader Trouble When Using RAID 1
Novell Cool Solutions: Trench
By Ken Chase
Posted: 4 Apr 2005
If you are trying to do a RAID 1 (mirrored) root install with grub or even lilo, this will really help you:
- Choose your RAID disks for root (a smaller root partition syncs faster).
- Go through your install.
- When it's time to install grub/lilo, switch to the console shell and run cat /proc/mdstat to make sure the RAID 1 array is synced before proceeding.
- Continue with the grub install - now that the disk is synced, grub (or lilo) can find the relevant files.
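The sync check in the third step can be reduced to a one-liner. A minimal sketch (the mdstat text below is a sample, since a real /proc/mdstat needs a running array - on a live system, read the file itself):

```shell
# Sketch: decide whether an md array has finished its initial sync by
# looking for a resync/recovery progress line in /proc/mdstat output.
# Sample output used here; on a real system: cat /proc/mdstat
mdstat_sample='md0 : active raid1 sdp1[1] sda1[0]
      1048512 blocks [2/2] [UU]
      [==>..................]  resync = 12.1% (127232/1048512) finish=0.7min'

if printf '%s\n' "$mdstat_sample" | grep -qE 'resync|recovery'; then
  echo "still syncing - wait before installing grub"
else
  echo "in sync - safe to install grub"
fi
```

With the sample above it reports "still syncing"; once the progress line disappears and the status shows [UU], it is safe to proceed.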
Alternately, if YaST allows you to proceed with one disk in a RAID 1:
- Choose your RAID disk for the RAID 1 root (only one disk; add the other later).
- Go through your install as normal to the end.
- Once you have a working system, add the other disk to the degraded RAID 1 array and let it sync.
Now you have root on a proper RAID 1 array.
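The degraded-then-add approach above looks roughly like this with mdadm - a sketch only, using the device names from my setup (adjust to yours; these commands need root and will destroy data on the named partitions):

```
# Create the mirror with only sda1; "missing" reserves the second slot,
# so the array comes up degraded but fully usable.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing

# Later, once the system is installed and booted, hot-add the second half:
mdadm --manage /dev/md0 --add /dev/sdp1
cat /proc/mdstat    # watch the rebuild progress
```

Because sda1 is the only member at install time, grub finds its files there with no resync to wait for.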
You could also convert a single regular root disk install to RAID 1, but I've not used the SUSE Linux tools for this 'conversion'. It would probably be the easiest route in the end, if it is possible. Manually it's possible: change the partition type to 0xfd, then run "mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/root_partition /dev/other_partition_of_type_0xfd".
----------------------------------------- Details of the situation:
My RAID 1 root install didn't work at first - as the YaST installer warns, "you might not be able to access your boot device with software raid root partition". No kidding. :)
With a RAID 1 root, which I built from sda1 and sdp1 (drives at opposite ends of the array and on different controllers, in case one dies), the problem is that the grub installer wants to find the stage1/stage2 files and a number of other things on the physical disk you pick for boot. (The BIOS can't boot grub off a RAID disk - it doesn't grok RAID. A hardware RAID controller can, but that's a different situation.) I picked sda1, the first booting disk on the system.
When you make a RAID with sda1/sdp1, no matter what you do (I even tried adding sdp1 first in the YaST RAID partitioning tool to reverse the order, but it didn't help :), sdp1 becomes the 'master' of the RAID 1 image, and sda1 is the one marked faulty. I am not sure if this is mdadm's fault (if YaST even uses it behind the scenes) or what, but only a manual degraded build of a RAID 1 with sda1 first, then a hot-add of sdp1, would control which disk ends up 'master' and which 'faulty'.
So when grub (or lilo) goes looking for its files on the physical disk you chose ((hd0,0) for me), there are of course "No Such Files": sda1 doesn't have a real filesystem on it yet, because the kernel RAID is still syncing it in the background.
What's worse, if you TRY to install grub before sda1 has synced into the array, the rebuild stops as soon as the grub installer touches the disk, and sda1 is marked failed. You must open the console shell, run mdadm --manage /dev/md0 --remove /dev/sda1 && mdadm --manage /dev/md0 --add /dev/sda1, and sit twiddling your thumbs while it rebuilds again from the start. *sigh* (This is where a small root partition and fast disks are nice.)
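That recovery dance can be scripted. A sketch - DRY_RUN=1 makes it print the mdadm commands instead of executing them, so it runs without root; on a real system set DRY_RUN=0 (device names are from my setup):

```shell
# Sketch: re-add a mirror half that was marked failed mid-sync.
# DRY_RUN=1 prints each command instead of running it.
DRY_RUN=1
run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

run mdadm --manage /dev/md0 --remove /dev/sda1
run mdadm --manage /dev/md0 --add /dev/sda1
run cat /proc/mdstat   # then wait for the resync to finish
```

The remove must come first: the kernel won't re-add a member that is still listed as failed in the array.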
Once it's built, though, grub installs fine - it finds the files at the right sector/location/whatever, and everything is go. It reboots fine, and you're set.
Technically, you COULD just add ONE disk yourself to the RAID config and leave it in degraded mode, then add the other manually later (perhaps via YaST; I'm not sure what it offers for RAID management once installed) - but I'm not sure the installer will let you continue with just one disk in a RAID 1 mirror. If it is allowed, choose your boot partition (sda1 for me), and everything should work without waiting for the resync, because sda1 is then the 'master' disk of the RAID 1 image and the grub installer will find the files it needs in the right place.
IMHO, the installer should be aware of this situation and build a degraded RAID 1 to start with, on sda1 only (the chosen root drive). Then, on the second reboot, when the machine is likely to stay up longer, it could add the other partition and let the RAID rebuild. The coup de grace would be YaST reminding you that you might like to install a second grub image on the other half of the root RAID, so you have a bootable system even with the first drive completely dead - but only, of course, once the mirror is built. A progress bar of some sort in YaST would be great; even just piping /proc/mdstat in would work fine. :)
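Until YaST does that for you, a second grub image can be put on the other mirror half by hand from the grub shell. A sketch for legacy grub, assuming sdp is the second disk as in my setup (adjust the device to yours):

```
grub> device (hd0) /dev/sdp
grub> root (hd0,0)
grub> setup (hd0)
```

The device line tells grub to pretend sdp is the first BIOS drive, so the boot code it writes there will work when sda is dead and sdp really is the first drive the BIOS sees.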
In my case, this is all in aid of setting up an EVMS-ified system, but the EVMS configurator under YaST is totally stripped of the features you'd want EVMS for. Get a basic system working and convert it to EVMS later. EVMS can easily absorb any LVM/LVM2/RAID setup you have and convert it to something else (or even leave it usable as a compatibility volume under EVMS, at the cost of a few EVMS features).
Novell Cool Solutions (corporate web communities) are produced by WebWise Solutions. www.webwiseone.com