Yays! Fun stuff en route!
Part of my 2018 goals is to have a universally-accessible resource where I can store all the data that I need, and more, without worry of fault or loss. One way to approach this is to build a file server that I can mount from a majority of operating systems and reach from anywhere in the world. So, I’ve decided to build a Linux file server, complete with a software RAID5 (as opposed to a hardware RAID5), and set it up so that we can use iSCSI to mount the LUNs.
We need to do this on a budget, too.
Our Wants and Needs
We need to lay out exactly what our beast has to do. We’re gonna need some scenarios where things go down and we have to kick it back into shape:
- Create the RAID
- Break the RAID (the mdadm commands for breaking and rebuilding are sketched just after this list)
- Add the Spare and Rebuild
- Setup and configure iSCSI Target
- Mount the LUN from a separate computer
- Break the RAID, again
- Verify that we can still create/make files
- Rebuild the RAID with the Spare
- Reinstall the core operating system, configure it up, and mount and use /dev/md0
- Install a different OS, configure it up, and mount and use /dev/md0
So, we’ve got a good plan for keeping data safe for years to come 🙂
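For reference, the break/rebuild items above come down to a handful of mdadm operations: mark a member as failed, remove it, and add the spare so md rebuilds onto it. Here’s a rough sketch of those for later, assuming the array ends up at /dev/md0 and /dev/sde1 is the spare (which is how things get set up below):

# mdadm --manage /dev/md0 --fail /dev/sdb1    # "break" the RAID by failing a member
# mdadm --manage /dev/md0 --remove /dev/sdb1  # pull the failed member out of the array
# mdadm --manage /dev/md0 --add /dev/sde1     # add the spare; md starts rebuilding onto it

We’ll actually exercise these later; the point here is just that every scenario on the list maps to a small, scriptable command.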
Virtualize to validate
It’s best, before making any purchases, to go the free and open route. Hence, we’ll virtualize in Oracle VirtualBox.
We’ll start with a base CentOS 6.9 setup in VirtualBox that I’ve affectionately called eye-scrunchie. I’ve given the OS 8GB of storage for / and /boot. I’ve created an additional 4 SATA devices, each with 256MB of space. I don’t need to create a TON of space on them, but I do need enough to create a base RAID5 with 1 spare. To keep configuration simple, I’ve provided 1 Ethernet port bridged to my onboard NIC.
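If you’d rather script the extra disks than click through the GUI, VBoxManage on the host can create and attach them. This is only a sketch: the VM name matches eye-scrunchie above, but the controller name "SATA", the .vdi filenames, and the port numbers are assumptions you’d adjust to your own VM:

$ for i in 1 2 3 4; do
    VBoxManage createhd --filename raid-disk$i.vdi --size 256
    VBoxManage storageattach "eye-scrunchie" --storagectl "SATA" \
      --port $i --device 0 --type hdd --medium raid-disk$i.vdi
  done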
Setup Base System
After installing and yum update‘ing the base system, I’ve done the typical iptables and SELinux “configurations” (aka shutting them all down):
# yum update
# service iptables save
# service iptables stop
# chkconfig iptables off
# cat /etc/selinux/config | sed s/=enforcing/=disabled/ > /etc/selinux/config.new && rm /etc/selinux/config && mv /etc/selinux/config.new /etc/selinux/config
# shutdown -r now
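Side note: the SELinux change can also be done in place with GNU sed’s -i flag (which the sed on CentOS 6 supports), skipping the temp-file shuffle:

# sed -i 's/=enforcing/=disabled/' /etc/selinux/config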
We’re gonna use a guide to help us out, but not follow it 100% since we have an idea of what we want to use from it: [archlinux:Software RAID and LVM]
Setup Disks
Check if we have the kernel module for this:
# modprobe raid5
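Just as a sanity check that the module actually loaded (on CentOS 6 the module that shows up may be raid456, which provides the raid5 personality):

# lsmod | grep raid
# cat /proc/mdstat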
Now, we need to verify that the OS sees the disks, and create partitions on them all. We’re going to use fdisk for now, but for larger volumes later on I might use a different program. For now, I just need to be able to make the necessary partitions. We’re working with /dev/sdb, /dev/sdc, and /dev/sdd; /dev/sde will be our “recovery spare”.
# fdisk /dev/sdb

Command (m for help): c
DOS Compatibility flag is not set

Command (m for help): u
Changing display/entry units to sectors

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-524287, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-524287, default 524287):
Using default value 524287

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sdb: 268 MB, 268435456 bytes
255 heads, 63 sectors/track, 32 cylinders, total 524288 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005d307

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048      524287      261120   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: If you have created or modified any DOS 6.x
partitions, please see the fdisk manual page for additional
information.
Syncing disks.
In Short: c u n p 1 enter enter t fd p w
And performed the same with the other 3 disks.
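If typing the same fdisk answers four times gets old, sfdisk can clone the partition table from the first disk to the others, assuming the disks are identical in size:

# sfdisk -d /dev/sdb | sfdisk /dev/sdc
# sfdisk -d /dev/sdb | sfdisk /dev/sdd
# sfdisk -d /dev/sdb | sfdisk /dev/sde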
Setup RAID
This seems to be cut-and-dried:
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[bcd]1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
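The array kicks off its initial sync right away; /proc/mdstat is the quickest way to keep an eye on that:

# cat /proc/mdstat
# watch -n1 cat /proc/mdstat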
Checking on the status:
# mdadm --misc --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Jan 22 22:35:54 2018
     Raid Level : raid5
     Array Size : 520192 (508.00 MiB 532.68 MB)
  Used Dev Size : 260096 (254.00 MiB 266.34 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Jan 22 22:35:58 2018
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : eye-scrunchie:0  (local to host eye-scrunchie)
           UUID : e47e9e3a:8b2d2d70:430fa6dc:babf2503
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1
and make it persistent:
# mdadm --examine --scan > /etc/mdadm.conf
Format + Mount
Format with ext4:
# mkfs.ext4 /dev/md0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=512 blocks, Stripe width=1024 blocks
130048 inodes, 520192 blocks
26009 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
64 block groups
8192 blocks per group, 8192 fragments per group
2032 inodes per group
Superblock backups stored on blocks:
	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
…create the mount point and mount it at /data:

# mkdir /data
# mount -t ext4 /dev/md0 /data
and add it to /etc/fstab
...data...
/dev/md0        /data           ext4    defaults        0 0
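Before trusting that fstab line, it’s worth unmounting and letting mount -a pick it back up, then checking the space:

# umount /data
# mount -a
# df -h /data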
And now we’ve completed #1: “Create the RAID”