=Introduction=
There are several reasons to want a [http://en.wikipedia.org/wiki/RAID#RAID_5 RAID 5] configuration for your Amahi server, if you have the money to buy the 3 or more drives needed. The main reasons are performance and redundancy. A RAID 5 array splits each file, along with its parity information, across all the disks. The bad news is that if you have N disks, each with capacity C, the total array capacity is (N - 1) * C: one disk's worth of space is lost to parity. On the other hand, a RAID 5 array has almost RAID 0 performance, and can survive the loss of one disk without loss of data. One thing that must be kept in mind with any RAID array (and one thing I didn't know about until it happened to me) is that RAID arrays are susceptible to data loss if not shut down correctly. A sudden power outage can result in data loss, so if a RAID array is used, it is important to use a [http://en.wikipedia.org/wiki/Uninterruptible_power_supply UPS].
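As a quick sanity check of the (N - 1) * C formula above, here is the arithmetic for the 3 × 2 TB drives used in this guide:

```shell
# usable capacity of a RAID 5 array: (N - 1) * C
N=3   # number of disks in the array
C=2   # capacity of each disk, in TB
echo $(( (N - 1) * C ))   # prints 4 (TB usable; one disk's worth goes to parity)
```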
 
Note: Depending on what you want to use your RAID 5 array for, you might want to consider using Greyhole over RAID 5. For some details about advantages and disadvantages of Greyhole vs RAID 5, see the [http://code.google.com/p/greyhole/#Greyhole_is_like_RAID-5,_but_better Greyhole project page].
Greyhole has been [[Greyhole|integrated in Amahi]] since version 5.3.
This wiki entry covers the installation of a software RAID 5 array using 3 disks of the same size, 2 TB. At the end, the RAID array will be used to host all the shares on the server. My server has a separate hard drive on which I installed Fedora 12 and the Amahi software. The Amahi software should be installed and working correctly before the RAID array is created. Technically, it doesn't matter whether Amahi is running first, but it makes the setup slightly easier.
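The steps below go straight from identifying the disks to saving the array configuration; the array itself is created and formatted with mdadm and mkfs. As a sketch of those commands (the device names /dev/md0, /dev/sdb, /dev/sdc and /dev/sdd match my setup and are assumptions for yours; note these commands destroy any data on the member disks):

```shell
# create a 3-disk RAID 5 array from the three data disks (assumed device names)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# watch the initial build progress
cat /proc/mdstat

# put an ext4 filesystem on the new array (this is the step that takes a long time)
mkfs.ext4 /dev/md0
```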
#Open a terminal and become root:
#*<blockquote>su -</blockquote>
#Find the names of the disks you will use in the array:
#*<blockquote>fdisk -l</blockquote>
#*What you are looking for is probably going to be something like /dev/sd**some letter**. In my case, since I installed Fedora on one hard drive before installing my 3 RAID disks, the above command shows that I have disks labelled /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd. I, of course, won't be using the hard drive on which I installed Fedora (/dev/sda), so my array will be created using the /dev/sdb, /dev/sdc and /dev/sdd disks.
#Unmount the partitions, if any are mounted (repeat for each disk you will use):
#*<blockquote>umount /dev/sdb</blockquote>
#*This can also take a really long time, though not nearly as long as creating the array itself.
#Unfortunately, Linux does not automatically remember your RAID settings, so you need to create a configuration file so that Linux knows what to do. Fortunately, you don't need to know (or type) the specifics, which mainly consist of a really long alphanumeric ID string:
#*<blockquote>mdadm --detail --scan --verbose > /etc/mdadm.conf</blockquote>
#Create a mount point for the array and mount it:
#*<blockquote>mkdir /mnt/raid</blockquote>
#*<blockquote>mount /dev/md0 /mnt/raid</blockquote>
#Now, move all your shares to the RAID array:
#*<blockquote>mv /var/hda/files/* /mnt/raid</blockquote>
#Unmount the array:
#*<blockquote>umount /mnt/raid</blockquote>
#Edit the /etc/fstab file, so that the shares get mounted to the array on startup:
#*<blockquote>nano /etc/fstab</blockquote>
#*Edit the line containing '/var/hda/files' to read '/dev/md0 /var/hda/files ext4 defaults 1 2'.
#*NOTE: If you do not see such a line, add the following on a new line:
#**<blockquote>/dev/md0 /mnt/raid ext4 defaults 1 2</blockquote>
#**If you used a filesystem other than ext4, put that filesystem name in place of 'ext4' above.
#Now, finally, mount your array (this mounts everything listed in /etc/fstab):
#*<blockquote>mount -a</blockquote>
That is it! Enjoy your new large storage array!
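Once everything is mounted, it is worth a quick check that the array is healthy and mounted where you expect. For example (assuming the array is /dev/md0 and the shares live in /var/hda/files, as above):

```shell
# show array state, member disks and any rebuild progress
cat /proc/mdstat

# detailed status of the array
mdadm --detail /dev/md0

# confirm the filesystem is mounted and shows the expected size
df -h /var/hda/files
```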
 
= How to Add New Drives to an Existing RAID 5 =
 
As I had to find out, adding a drive to a RAID 5 array is easy.
 
Adding the drive to the RAID:
 mdadm --add /dev/md# /dev/sd#
(where /dev/md# is your array and /dev/sd# is the drive you are adding)

Growing the RAID:
 mdadm --grow --raid-devices=# /dev/md$
(where # is the total number of drives now in the array and $ is the number of your RAID device)
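As a concrete example, here is what adding a fourth disk to the 3-disk array from the first section would look like (/dev/md0 and /dev/sde are assumed device names; substitute your own):

```shell
# add the new disk to the array (it joins as a spare)
mdadm --add /dev/md0 /dev/sde

# grow the array so the new disk becomes an active member (3 -> 4 devices)
mdadm --grow --raid-devices=4 /dev/md0
```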
 
Now we let the RAID rebuild; we can watch it reshape by typing:
 watch cat /proc/mdstat
It will take a couple of hours for the array to reshape (PLEASE do not reboot the machine during the reshaping process).
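One step worth noting: growing the array makes the md device bigger, but the filesystem on it stays its old size. Assuming the ext4 filesystem from the first section on /dev/md0, it can be grown to fill the new space once the reshape has finished (resize2fs can do this online, with the filesystem still mounted):

```shell
# grow the ext4 filesystem to fill the enlarged array
resize2fs /dev/md0
```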
 
After the reshape has finished successfully, we need to make sure that /etc/mdadm.conf has the correct number of drives in it.
 
nano /etc/mdadm.conf
 
Here we change the total number of drives in the array.
It should look similar to this:

 # mdadm.conf written out by anaconda
 MAILADDR root
 ARRAY /dev/md0 level=raid5 num-devices=XX UUID=4cc02637:e3832bed:bfe78010:bc810f
(where XX needs to be changed to the actual number of drives you have in the array)
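Instead of editing the file by hand, you can also regenerate it the same way it was created in the first section, which avoids typos in the long ID string:

```shell
# rewrite mdadm.conf from the kernel's current view of the array
mdadm --detail --scan --verbose > /etc/mdadm.conf
```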
 
Save and close nano: [ctrl+x]
 
After the reshape has finished and you have made the changes, reboot the machine to make sure it boots correctly.