In my previous post I listed the 10 steps I needed to work through before I'd feel comfortable building a Linux file server. This post covers Challenges 2 and 3 of 10: Breaking the RAID, and Adding the Spare and Rebuilding.
Breaking the RAID
In VirtualBox this is quite easy. I’ll shut down the VM and “detach” one of the volumes, as if the drive were dead at boot. Before I do this, I need to confirm that everything is okay with the array as it stands.
# mdadm --misc --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Jan 22 22:35:54 2018
     Raid Level : raid5
     Array Size : 520192 (508.00 MiB 532.68 MB)
  Used Dev Size : 260096 (254.00 MiB 266.34 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Tue Jan 23 21:43:59 2018
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : eye-scrunchie:0  (local to host eye-scrunchie)
           UUID : e47e9e3a:8b2d2d70:430fa6dc:babf2503
         Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1
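Alongside --detail, a quicker glance I like (a general mdadm habit, not something I captured output for here) is the kernel’s own status file:

# cat /proc/mdstat

A healthy three-member array ends its md0 line with [UUU]; a failed or missing member shows up as an underscore (e.g. [UU_]), and during a rebuild you get a progress bar. It’s a much faster check than reading the full --detail dump.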
So, we know we can see what we’ve got. Let’s break this! I shut down the VM, “remove” the second disk in the RAID, and boot it back up.
# mdadm --misc --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Jan 22 22:35:54 2018
     Raid Level : raid5
     Array Size : 520192 (508.00 MiB 532.68 MB)
  Used Dev Size : 260096 (254.00 MiB 266.34 MB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Jan 23 21:56:28 2018
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : eye-scrunchie:0  (local to host eye-scrunchie)
           UUID : e47e9e3a:8b2d2d70:430fa6dc:babf2503
         Events : 22

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       2       0        0        2      removed
       3       8       33        2      active sync   /dev/sdc1
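Detaching the virtual disk is the closest analogue to a drive physically dying, but for reference, mdadm can stage the same degraded state in software, no reboot needed. A sketch using this array’s device names (these are standard manage-mode commands):

# mdadm /dev/md0 --fail /dev/sdc1
# mdadm /dev/md0 --remove /dev/sdc1

--fail marks the member as faulty, and --remove pulls it out of the array, leaving the same “clean, degraded” state shown above.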
Adding and Rebuilding
Now I happily have a degraded RAID5. I can still access the data on /data, which is the main intention. Let’s see if we can add our /dev/sdd1 (the extra “drive”) and rebuild our array.
# mdadm --add /dev/md0 /dev/sdd1
mdadm: added /dev/sdd1
# mdadm --misc --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Jan 22 22:35:54 2018
     Raid Level : raid5
     Array Size : 520192 (508.00 MiB 532.68 MB)
  Used Dev Size : 260096 (254.00 MiB 266.34 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Tue Jan 23 22:12:20 2018
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : eye-scrunchie:0  (local to host eye-scrunchie)
           UUID : e47e9e3a:8b2d2d70:430fa6dc:babf2503
         Events : 41

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       4       8       49        1      active sync   /dev/sdd1
       3       8       33        2      active sync   /dev/sdc1
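That --detail output is from after the resync had already finished. While it’s running, /proc/mdstat shows the recovery progress, and you can watch it tick along:

# watch -n 1 cat /proc/mdstat

mdadm also has a --wait that simply blocks until the resync/recovery completes, which is handy in scripts:

# mdadm --wait /dev/md0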
RMA’d Drive, Adding a Hot Spare
Now I’m going to re-attach the previously “dead” drive and boot back up. This should not break the array; it’s the same as adding a new drive to the system on the same controller. This is the scenario where I’ve replaced the originally dead drive through an RMA, stuck it back in the system, and am now going to make it a spare.
# mdadm --misc --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Jan 22 22:35:54 2018
     Raid Level : raid5
     Array Size : 520192 (508.00 MiB 532.68 MB)
  Used Dev Size : 260096 (254.00 MiB 266.34 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Tue Jan 23 22:18:28 2018
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : eye-scrunchie:0  (local to host eye-scrunchie)
           UUID : e47e9e3a:8b2d2d70:430fa6dc:babf2503
         Events : 41

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       4       8       65        1      active sync   /dev/sde1
       3       8       49        2      active sync   /dev/sdd1
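Note that the device names shuffled again after the reboot: the re-attached disk came back and the kernel handed out fresh letters. Before partitioning anything, it’s worth double-checking which disk is actually the blank one; lsblk makes it obvious which device has no partitions and isn’t part of md0 (the column selection is just my preference):

# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

In this case the blank drive came up as /dev/sdc.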
Since it’s a fresh drive, I’ll need to partition it before I can add it into the array.
# fdisk /dev/sdc
...
c          (turn off old DOS compatibility mode)
u          (display units in sectors)
n          (new partition)
p          (primary)
1          (partition number 1)
<enter>    (accept the default first sector)
<enter>    (accept the default last sector)
t          (change the partition type)
fd         (Linux raid autodetect)
p          (print the table to double-check)
w          (write it out and quit)
...
# mdadm --add /dev/md0 /dev/sdc1
mdadm: added /dev/sdc1
# mdadm --misc --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Jan 22 22:35:54 2018
     Raid Level : raid5
     Array Size : 520192 (508.00 MiB 532.68 MB)
  Used Dev Size : 260096 (254.00 MiB 266.34 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jan 23 22:22:07 2018
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : eye-scrunchie:0  (local to host eye-scrunchie)
           UUID : e47e9e3a:8b2d2d70:430fa6dc:babf2503
         Events : 42

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       4       8       65        1      active sync   /dev/sde1
       3       8       49        2      active sync   /dev/sdd1

       5       8       33        -      spare   /dev/sdc1
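Two follow-up notes. First, instead of stepping through fdisk interactively, the partition table can be cloned from an existing member; a sketch assuming MBR tables like the ones used here:

# sfdisk -d /dev/sdb | sfdisk /dev/sdc

Second, to make sure the array assembles under the same name at boot, record it in mdadm’s config file (the path is /etc/mdadm.conf or /etc/mdadm/mdadm.conf depending on distro):

# mdadm --detail --scan >> /etc/mdadm.conf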
Seems this was easy enough 🙂
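One last thought: a hot spare only helps if you notice the failure that consumed it. mdadm’s monitor mode can run as a daemon and send mail when a member fails or the spare gets activated; a minimal sketch, assuming working local mail delivery:

# mdadm --monitor --scan --daemonise --mail=root

That, plus the spare, means the array quietly rebuilds itself and tells me to go start the next RMA.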