Root Partition with IDE RAID HOWTO
This is biased towards Suse 9.1, which is the distro I used for this.
A lot of the text was taken from:
http://www.linuxsa.org.au/mailing-list/2003-07/1270.html
Configuration:
- /dev/hda (Pri. Master) 200GB
- /dev/hdb (Pri. Slave) 120GB
- /dev/hdc (Sec. Master) 120GB
- /dev/hdd (Sec. Slave) CDROM Drive
Setup Goals:
- /boot as /dev/md0: RAID1 of /dev/hdb1 & /dev/hdc1 for redundancy
- / as /dev/md1: RAID1 of /dev/hdb2 & /dev/hdc2 for redundancy
- swap*2 with equal priority: /dev/hdb3 & /dev/hdc3
- GRUB installed in the boot records of both /dev/hdb and /dev/hdc, so that
either drive can fail and the system will still boot.
Tools:
- mdadm (http://www.cse.unsw.edu.au/~neilb/source/mdadm/)
I used the Suse 9.1 Pro rpm of mdadm
1. Boot up off a rescue/installation CD/disk/HDD/whatever that has the mdadm
tools installed. I booted from hdc, which had a basic Suse 9.1
install on it.
2. Partitioning of hard drives:
(I won't show you how to do this. See: # man fdisk ; man sfdisk )
Here is the disk partition config:
------------------------------------------------------------------
# sfdisk -l /dev/hdb
Disk /dev/hdb: 238216 cylinders, 16 heads, 63 sectors/track
Units = cylinders of 516096 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start End #cyls #blocks Id System
/dev/hdb1 0+ 194 195- 98248+ fd Linux raid autodetect
/dev/hdb2 195 19571 19377 9766008 fd Linux raid autodetect
/dev/hdb3 19572 21665 2094 1055376 82 Linux swap
/dev/hdb4 21666 238215 216550 109141200 5 Extended
/dev/hdb5 21666+ 176676 155011- 78125512+ fd Linux raid autodetect
To make /dev/hdc the same:
------------------------------------------------------------------
# sfdisk -d /dev/hdb | sfdisk /dev/hdc
------------------------------------------------------------------
/dev/hd[bc]1 for /dev/md0 for /boot
/dev/hd[bc]2 for /dev/md1 for /
/dev/hd[bc]3 for 2*swap
/dev/hd[bc]5 for /dev/md2 for /samba (referenced in the fstab in step 5)
It is important to give the md-to-be partitions the partition type 0xfd
(Linux raid autodetect), not 0x83 (plain Linux), otherwise the kernel will
not autostart the arrays at boot.
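If a partition came out as 0x83, the sfdisk of this era can print and change
the type in place; a quick sketch (the partition numbers are just examples):
------------------------------------------------------------------
# sfdisk --id /dev/hdb 1      # print the current type of /dev/hdb1
# sfdisk --id /dev/hdb 1 fd   # change it to fd (Linux raid autodetect)
# sfdisk -l /dev/hdc          # verify the copied table matches /dev/hdb
------------------------------------------------------------------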
3. Set up md devices: (both are RAID1 [mirrors])
------------------------------------------------------------------
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdb1 /dev/hdc1
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdb2 /dev/hdc2
------------------------------------------------------------------
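The mirrors begin syncing as soon as they are created. You can watch the
progress in /proc/mdstat or query an array directly (and if you also want
the /samba array that appears in the fstab below, create /dev/md2 from
/dev/hd[bc]5 the same way):
------------------------------------------------------------------
# cat /proc/mdstat                 # both arrays and their resync progress
# mdadm --detail /dev/md0          # state and members of md0
# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hdb5 /dev/hdc5
------------------------------------------------------------------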
4. Make filesystems:
------------------------------------------------------------------
# mke2fs -j /dev/md0
# mke2fs -j /dev/md1
# mkswap /dev/hdb3
# mkswap /dev/hdc3
------------------------------------------------------------------
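If you want the swap areas active right away, rather than waiting for the
fstab (step 5) to do it at boot, swapon can take the same priority; e.g.:
------------------------------------------------------------------
# swapon -p 42 /dev/hdb3
# swapon -p 42 /dev/hdc3
# swapon -s               # list active swap areas and their priorities
------------------------------------------------------------------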
5. Install your distribution:
Simply treat /dev/md0 and /dev/md1 as the partitions to install on,
and install the way you normally do.
E.g. for Suse 9.1, I ran the install from the net install CD and
told it to use the partitions we already created.
example fstab:
/dev/md1        /               ext3      defaults                  1 1
/dev/md0        /boot           ext3      defaults                  1 2
/dev/md2        /samba          ext3      defaults                  0 2
/dev/hdb3       swap            swap      pri=42                    0 0
/dev/hdc3       swap            swap      pri=42                    0 0
devpts          /dev/pts        devpts    mode=0620,gid=5           0 0
proc            /proc           proc      defaults                  0 0
usbfs           /proc/bus/usb   usbfs     noauto                    0 0
sysfs           /sys            sysfs     noauto                    0 0
/dev/fd0        /floppy         auto      noauto,owner,user         0 0
/dev/cdrom      /cdrom          iso9660   noauto,owner,user,ro,exec 0 0
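It is also worth recording the arrays in /etc/mdadm.conf so the tools know
how to assemble them; mdadm can generate the ARRAY lines itself. A sketch
(the DEVICE line is an assumption; adjust it to your disks):
------------------------------------------------------------------
# echo 'DEVICE /dev/hdb* /dev/hdc*' >> /etc/mdadm.conf
# mdadm --detail --scan >> /etc/mdadm.conf
------------------------------------------------------------------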
6. Setting up GRUB: (assuming you've already installed it)
I used the Suse 9.1 Pro grub rpm.
Boot from hda (the extra disk), otherwise this will not work; booting from
a CD works too. You can even do this from the GRUB command line itself.
------------------------------------------------------------------
# grub
grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... yes
Checking if "/boot/grub/stage2" exists... yes
Checking if "/boot/grub/e2fs_stage1_5" exists... yes
Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 16 sectors are
embedded.
succeeded
Running "install /boot/grub/stage1 (hd0) (hd0)1+16 p
(hd0,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded
Done.
grub> root (hd2,0)
Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd2)
Checking if "/boot/grub/stage1" exists... yes
Checking if "/boot/grub/stage2" exists... yes
Checking if "/boot/grub/e2fs_stage1_5" exists... yes
Running "embed /boot/grub/e2fs_stage1_5 (hd2)"... 16 sectors are
embedded.
succeeded
Running "install /boot/grub/stage1 (hd2) (hd2)1+16 p
(hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded
Done.
grub> quit
Note: to find out what disks GRUB thinks you've got and how they are
labelled, run:
geometry (hd0)
geometry (hd1)
geometry (hd2)
and so on at the grub> prompt.
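The same mapping is written down in GRUB's device.map. On this box it would
presumably look like the following, but check your own file, since the BIOS
boot order decides it:
------------------------------------------------------------------
# cat /boot/grub/device.map
(hd0)   /dev/hda
(hd1)   /dev/hdb
(hd2)   /dev/hdc
------------------------------------------------------------------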
Here is my /boot/grub/menu.lst:
========================================
timeout 8
# By default, boot the first entry.
default 0
# Fallback to the second entry.
fallback 1
# Fallback to the third entry.
fallback 2
title Linux (hd0)
kernel (hd0,0)/vmlinuz root=/dev/md1 splash=silent acpi=off desktop hdd=ide-cd
initrd (hd0,0)/initrd
title Linux (hd1)
kernel (hd1,0)/vmlinuz root=/dev/md1 splash=silent acpi=off desktop hdd=ide-cd
initrd (hd1,0)/initrd
title Linux (hd2)
kernel (hd2,0)/vmlinuz root=/dev/md1 splash=silent acpi=off desktop hdd=ide-cd
initrd (hd2,0)/initrd
title Failsafe (hdb)
kernel (hd0,0)/vmlinuz root=/dev/md1 ide=nodma apm=off acpi=off vga=normal noresume nosmp noapic maxcpus=0 3
initrd (hd0,0)/initrd
title Failsafe (hdc if hdb unreadable)
kernel (hd1,0)/vmlinuz root=/dev/md1 ide=nodma apm=off acpi=off vga=normal noresume nosmp noapic maxcpus=0 3
initrd (hd1,0)/initrd
title Memory_Test
kernel (hd0,0)/memtest.bin
title Memory_Test
kernel (hd1,0)/memtest.bin
========================================
Do test the computer afterwards. I just removed an IDE cable from one of the
disks while the system was running, then rebooted the computer. It came up
fine, and after reconnecting the drive:
mdadm /dev/md1 -a /dev/hdc2
mdadm /dev/md0 -a /dev/hdc1
performed a hot add to each RAID device.
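If you would rather not pull cables, mdadm can simulate the same failure
with its manage mode; for example:
------------------------------------------------------------------
# mdadm /dev/md1 --fail /dev/hdc2      # mark the member faulty
# mdadm /dev/md1 --remove /dev/hdc2    # take it out of the array
# mdadm /dev/md1 --add /dev/hdc2       # hot add it back; resync starts
------------------------------------------------------------------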
I could not get the mdadm monitor functionality to work very well under
testing, so instead I just use a bit of Perl to examine /proc/mdstat for
what I think should be there, and to send an email if it does not look
good.
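A minimal shell equivalent of that check, suitable for cron (a sketch: a
failed member shows up as an underscore in the [UU] field of /proc/mdstat,
and the mail recipient is an assumption):
------------------------------------------------------------------
#!/bin/sh
# check-md.sh - complain by mail if any md array has a failed member
if egrep -q '\[.*_.*\]' /proc/mdstat ; then
    mail -s "RAID degraded on `hostname`" root < /proc/mdstat
fi
------------------------------------------------------------------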