Partitions used: (these are small test partitions)
/dev/hdc5 14725 14726 16033+ 83 Linux
/dev/hdc6 14727 14728 16033+ 83 Linux
/dev/hdc7 14729 14730 16033+ 83 Linux
/dev/hdc8 14731 14732 16033+ 83 Linux
( the real array I built uses /dev/hdc1 /dev/hdd1 .... and each
partition takes up the entire 160GB drive. )
(create array)
# mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/hdc5 /dev/hdc6 /dev/hdc7 missing
mdadm: array /dev/md1 started.
"missing" leaves a drive missing. Raid 6 lets me do as many as 2
missing. Raid 5 lets me set one drive missing. I found this useful for
building an array when some of the drives I intended to use are part of
the old array I intend to replace.
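For example, a raid 6 array can be created with two slots left empty and
filled in later:
# mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/hdc5 /dev/hdc6 missing missing
( the array runs degraded until the missing slots are filled in with -a,
see "hot add" below )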
what is /dev/md1 ?
/dev/md# is a device node that represents the multiple device (md) drive
(a logical device composed of many other devices).
You create one for each raid array you build. I am using /dev/md1
because /dev/md0 was already in use.
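To see which md devices are already active (so you can pick an unused
number), check /proc/mdstat:
# cat /proc/mdstat
( lists each running array along with its member devices and state )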
to restart (assemble) an existing array:
#/sbin/mdadm -A /dev/md1
(this reads /etc/mdadm.conf to figure out which devices are used to
assemble the array )
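If /etc/mdadm.conf does not describe the array yet, mdadm can generate the
ARRAY lines itself (on some distributions the file is /etc/mdadm/mdadm.conf
instead):
# echo 'DEVICE partitions' >> /etc/mdadm.conf
# mdadm --detail --scan >> /etc/mdadm.conf
( --detail --scan prints an ARRAY line, including the array's UUID, for
each running array )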
(check status)
# mdadm --query /dev/md1
/dev/md1: 31.13MiB raid6 4 devices, 0 spares. Use mdadm --detail for more detail.
/dev/md1: No md super block found, not an md component.
The "No md super block found.." message is not an error.
The raid device itself is not a member of a raid device, so
it does not have such a block. mdadm will find this block if you
call mdadm on a device within the array.
# mdadm --query /dev/hdc5
/dev/hdc5: is not an md array
/dev/hdc5: device 0 in 4 device inactive raid6 md1. Use mdadm --examine for more detail.
(check status verbose)
# mdadm --detail /dev/md1
to create a file system on the raid drive I used:
# mke2fs -b 4096 -R stride=64 -J size=400 /dev/md1
( this journal is actually too big for this test array, I used 4 here)
-R stride=64 is supposed to make the filesystem blocks line up evenly with
the raid array.
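The idea is that stride equals the raid chunk size divided by the
filesystem block size; for example a 256KB chunk with 4KB blocks gives
256 / 4 = 64. Newer versions of mke2fs spell this option -E stride=64.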
to "hot add" a drive to the Array.
# mdadm /dev/md1 -a /dev/hdc8
( this fills in for a missing drive or becomes a spare )
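After a drive is added into a missing slot the array rebuilds onto it; the
progress shows up in /proc/mdstat, e.g.:
# watch -n 5 cat /proc/mdstat
( shows the recovery percentage, speed, and estimated finish time )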
to fail a drive (useful for testing):
# mdadm /dev/md1 -f /dev/hdc8
mdadm: set /dev/hdc8 faulty in /dev/md1
to remove the failed drive
# mdadm /dev/md1 -r /dev/hdc8
mdadm: hot removed /dev/hdc8
to start a monitoring process ( sends email to root if errors occur )
# mdadm --monitor /dev/md1 &
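If your mdadm supports it, the monitor can also run as a daemon watching
every array listed in /etc/mdadm.conf, with the alert address given on the
command line:
# mdadm --monitor --scan --mail root --daemonise
( --scan takes the arrays from mdadm.conf, --mail sets where alerts go )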
to mount the drive: ( you need to have created the filesystem previously)
# mount /dev/md1 /backup
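To mount it automatically at boot, a line like this in /etc/fstab should
work (assuming the filesystem created above is ext2/ext3; adjust the type
to match):
/dev/md1   /backup   ext3   defaults   0   2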
to shut down (stop) a raid array:
# mdadm /dev/md1 -S
( it must be unmounted first )