RAID 1

Update Needed: The contents of this page have become outdated or irrelevant. Please consider updating it.


Installing Amahi with RAID 1 in a two-HDD setup. See RAID5 for the equivalent RAID 5 setup.

Also see another user's setup in #RAID 1 with 3 HDDs.

If you perform either of these setups and find that one member of the RAID 1 array does not rejoin on reboot, try the possible solution in #Losing drive on reboot.

________________________________________________________________________

So, to set the scene. I'm pretty much totally new to Linux and I'm just an enthusiastic amateur when it comes to computers in general.

I'm setting up Amahi to use as a server for home and a home office for a new business I'm starting. So it'll mainly store music and film files for home use, and documents for work, which will be accessed through a VPN. I may also use it to host a CRM system later.

I wanted to record this partly for myself, so when I realise I've gone wrong and have to do it all again I can see where, but also as my way of contributing. I've relied so much on others putting info up on fora, wikis etc. I hope this will be of use to someone else.

What I'm using:

  • Router with internet connection providing DHCP, connected to
  • Server (in my case an HP Proliant ML110 G5; Intel Xeon Dual Core 3065 @ 2.33 GHz; 3GB RAM)
  • 2 x 250GB Hard disk drives (in the server)
  • Keyboard and mouse attached to server
  • Fedora 10 DVD
  • Laptop to read instructions from (and type out these notes)
  • Registration details from the Amahi site.

Sites I'm using for info:

N.B. This installation is to install a completely fresh system with no dual-boot. All data on the drives will be wiped.

Installing Fedora 10

  1. Insert the Fedora disk into the server's DVD drive.
  2. Turn the server on.
  3. Select 'install or upgrade an existing system'.
  4. When asked, test the media (DVD) - just to make sure you won't get any surprises with it hanging on a dodgy disk.
  5. When the Screen comes up, click 'Next'.
  6. Choose your language.
  7. Choose your keyboard.
  8. Set a name for your computer and domain, or leave this as the default; click 'Next'.
  9. Select your time zone; click 'Next'.
  10. Set a root password (make a note of it); click 'Next'.
  11. Creating RAID partitions
  12. Select 'create custom layout'; click 'Next'.
  13. If your drives are already partitioned, delete all the various partitions, by selecting them and clicking 'delete'.
  14. Click 'RAID'.
  15. Select 'Create a software RAID partition.'; click 'OK'.
  16. Set the following options:
    1. File system type: Software RAID
    2. Allowable drives: tick the first (in my case 'sda')
    3. Size: 100 MB (300MB for Fedora 12)
    4. Additional size options: Fixed size
    5. Force to be primary partition: tick
    6. Click 'OK'.
  17. Repeat step 15, but in 15.2 tick the second drive (sdb)
  18. Click 'RAID'.
  19. Select 'Create a RAID device'; click 'OK'.
  20. Set the following options:
    1. Mount point: /boot
    2. File system type: ext3
    3. RAID device: md0
    4. RAID level: Level 1
    5. Raid members: tick both drives
    6. Click 'OK'.
  21. Select the 'Free space' on your first hard drive.
  22. Click 'New'.
  23. Set the following options:
    1. Mount point: [just leave this]
    2. File System Type: swap
    3. Allowable drives: tick the first
    4. Size: 2 x RAM (in my case 2 x 3GB = 6,000MB)
    5. Additional size options: Fixed size
    6. Click 'OK'
  24. Repeat step 22, but ticking the second drive in 22.3
  25. To set up each of the extra partitions on the drive repeat steps 15 to 19 (once for each new partition). These are the changes to make each time:
    1. Change sizes in 15.3 The sizes I have used are:
      1. /boot 100MB (300MB for Fedora 12)
      2. /usr 20000MB
      3. / 8000MB
      4. /var 5000MB
      5. /tmp 5000MB
      6. /home 10000MB
      7. /var/hda/files (what was left - this is where Amahi stores all your shared folders)
    2. Do not force to be primary partition in 15.5
    3. Use whatever RAID device is the default in 19.3
  26. Click 'Next'
  27. Click 'write changes to disk'. THIS WILL WIPE ALL YOUR DATA ON YOUR DISKS.
  28. Completing Fedora install
  29. When asked, tick 'Install boot loader on /dev/md0'; Click 'Next'.
  30. Deselect Office and Productivity, unless you want it (e.g. to simultaneously use your HDA as a desktop)
  31. Click on the "Add additional software repositories" button. If asked, enable your network interface.
  32. Add the Amahi repository with the following information:
    1. Repository name: amahi
    2. Repository URL: http://f10.amahi.org (http://f12.amahi.org for Fedora 12)
  33. I've also added the additional repository shown to me:
    1. Installation Repo
  34. Proceed by clicking "Next" to the completion of the Install.
  35. Wait while all that downloads and installs.
  36. Take the Fedora DVD out of the drive.
  37. For Fedora 12 we need to make our /boot partitions bootable in fdisk
    1. Press Ctrl+Alt+Shift+F2 on your keyboard to get a shell prompt
    2. Type fdisk /dev/sda followed by the Enter key
    3. Type p followed by the Enter key
    4. Type a followed by the Enter key
    5. Type the number of your first linux raid autodetect partition followed by the Enter key
    6. Type w followed by the Enter key
    7. Repeat for /dev/sdb
  38. Click 'Reboot'.
  39. Follow the on screen instructions.
  40. Create a new (non-root) user, giving a password (write it down); Click 'Forward'.
  41. Set the date and time; Click 'Forward'.
  42. Select 'send profile'; Click 'Finish'.
  43. Log in using the username and password you created in step 37.
  44. Install Amahi
  45. Open a terminal (Applications > System Tools > Terminal).
  46. Type the following commands (press [Enter] after each):
    1. su -
    2. [your root password]
    3. hda-install [the code Amahi gave you]
  47. If SELinux throws up any errors (shows what looks like a sheriff's star in the top bar, double-click it) read each one and follow the instructions.
  48. Reboot (may not be necessary - but I was having trouble so decided to reboot after each stage to figure out what the problem was).

  49. Make both HDDs bootable (so if one fails you have an exact copy - that's the point of RAID isn't it?)
  50. Log in.
  51. Open a terminal window.
  52. Type the following commands (press [Enter] after each):
    1. su -
    2. [your root password]
    3. grub
    4. device (hd0) /dev/sda
    5. root (hd0,0)
    6. setup (hd0)
    7. device (hd1) /dev/sdb
    8. root (hd1,0)
    9. setup (hd1)
  53. Close the terminal window.
  54. Check RAID is working
  55. You can test whether RAID is working by shutting down the server, disconnecting one of the drives and seeing whether it boots when you turn it back on. If it does, shut it down again, disconnect the other drive, reconnect the first one and try to boot from that drive. If it boots from both, everything is OK.
  56. Log back in to your system and check for and install any other updates (go to System > Administration > Update System then follow the instructions to install the updates you want).
  57. Once everything has updated, reboot the server.
  58. Once you've logged back in to Fedora, open another terminal.
  59. Become root, by typing:
    1. su -
    2. [enter your root password]
  60. You can now check your RAID drives by typing:
    1. mdadm --detail /dev/md0 [then add another /dev/md... for each extra RAID you set up - so for me I went from md0 to md5]

    Set up monitoring of RAID

  61. You can then set up monitoring so that it will email you if anything goes wrong by typing (a quick way to test the e-mail alert is sketched just after this list):
    1. mdadm --monitor --scan --mail=you@domain.com --delay=3600 --daemonise /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5
  62. THIS STEP (56) HAS BEEN CAUSING MY COMPUTER TO HANG DURING BOOT, SO I HAVEN'T IMPLEMENTED IT IN THE END. You may choose to use it (hopefully someone with more understanding than me will see what I've got wrong and change this page to correct it). To ensure that this is running whenever you start the server, the command apparently should be put into /etc/rc.local. You can do this by entering:
    1. nano /etc/rc.local
    2. In the window that opens add the following line at the end: mdadm --monitor --scan --mail=you@domain.com --delay=3600 --daemonise /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5
    3. Save and Exit nano
  63. Reboot
  64. Now you can go to http://hda to set up your Amahi server.

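Before relying on those e-mail alerts, you may want to send a one-off test message. This is a small sketch using mdadm's --oneshot and --test monitor options (substitute your own address); run as root, it generates a TestMessage alert for each array it finds and then exits:

mdadm --monitor --scan --oneshot --test --mail=you@domain.com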

I hope this helps, but please feel free to correct anything I've got wrong. I've really been feeling my way here, so I can't guarantee that any of the above will work or work seamlessly, but I hope it will.

RAID 1 with 3 HDDs

I'm another user who's also using RAID 1 and wanted to give back a bit to the community, so I thought I'd describe my setup.

I have 3 HDDs: sda is 40GB; sdb and sdc are 1.5 TB each.

I reinstalled everything to create the RAID 1 array and to update my system to Fedora 12 (so make a backup of everything first!). With all three drives installed, during the partitioning stage of the Fedora install I did the following (see steps 11-19 above for more details):

sda - formatted with a / partition and swap

sdb - click on RAID, create a software RAID partition, made it the maximum size of the disk

sdc - same as for sdb

I then created a single RAID 1 volume using sdb and sdc and mounted it as /var/hda/files.
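
For reference, a rough command-line equivalent of what the installer does in this step. This is only a sketch under assumptions: that the software-RAID partitions are /dev/sdb1 and /dev/sdc1, that you want ext3, and that the mount point is /var/hda/files (adjust for your layout, and note that these commands destroy existing data on those partitions):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext3 /dev/md0
mkdir -p /var/hda/files
mount /dev/md0 /var/hda/files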

With this done, you proceed with the rest of the Fedora/Amahi installation as per the Amahi documentation.

With this setup, all my personal files and media will be stored on my RAID 1 array, while the Fedora and Amahi filesystems and programs will be on the sda primary drive. This allows you to update Fedora and/or Amahi with much less risk to your personal files.

Please add any other comments/corrections/suggestions as you see fit, and thanks to the first user who started this section and inspired me to contribute as well.

Losing drive on reboot

If you perform either of the above setups and find that on reboot one member of the array is not present and the command "mdadm --detail /dev/md0" shows that the array is degraded, the following may help.
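
For a quick overview of all arrays (alongside mdadm --detail), the kernel's md status file is handy; in a degraded two-disk RAID 1 it shows [U_] or [_U] instead of [UU]:

cat /proc/mdstat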

In a Fedora forum thread (http://www.fedoraforum.org/forum/showthread.php?t=255048) it was found that some makes of hard drive can cause this problem: they take a while to spin up, so the system gives up on one of them and carries on with a degraded array.

To fix this, add the lines recommended in that thread to your /etc/rc.d/rc.local file. Do all of this as root, and back up the file first just in case.
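
The exact lines are given in the linked thread, but the general idea is to re-add the member that was not ready at boot once the system is up. The following is only a hypothetical sketch, assuming /dev/md0 is the degraded array and /dev/sdb1 is the skipped member (substitute your own devices and the thread's recommended lines):

cp /etc/rc.d/rc.local /etc/rc.d/rc.local.bak
# hypothetical example only -- use the lines from the forum thread for your drives
echo "mdadm /dev/md0 --re-add /dev/sdb1" >> /etc/rc.d/rc.local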


This topic describes LVM (Logical Volume Management) mirroring in Linux in easy-to-follow steps.

difficulty: medium/hard

The starting point of this information is that you already have a working install of a Linux distribution. You already have LVM set up on 1 disk, and you're using the full disk capacity in a single logical volume.

Now let's say you want a little more redundancy, and you don't have the option of creating the logical volume from scratch with Linux software RAID (which is preferable over LVM mirroring, I'll explain later). You buy a new disk with at least the same capacity as your old disk.

In this tutorial I will use the volume group name "vgdata" and the logical volume name "lvdata".
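
Before changing anything, it is worth confirming that the starting point matches. These standard LVM reporting commands (run as root) should show one physical volume, the volume group vgdata and the logical volume lvdata:

pvs   # physical volumes
vgs   # volume groups - vgdata should be listed
lvs   # logical volumes - lvdata should be listed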

Adding a new disk to an existing Volume Group

Warning: I will explain adding a third hard drive (/dev/sdc), and we will have the following setup:

  • /dev/sda is partitioned for the Linux installation
  • /dev/sdb is partitioned and used for LVM as a single data volume (/dev/vgdata/lvdata)
  • /dev/sdc is to be added to volume group vgdata so there is enough space for mirroring

After physically adding the disk, we need to add it to the volume group. Log in as root and list your current partitions with:

fdisk -l

Now let's create a partition on the third hard drive:

fdisk /dev/sdc => you will enter an interactive menu
n => create a new partition
p => make the partition primary
1 => this is the partition number; press <enter> twice to create a partition that fills the whole drive
t => change the partition type; enter 8e for Linux LVM
w => write the changes to disk
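
If you prefer to script this, the same key sequence can be piped into fdisk non-interactively. This is just a sketch of the dialogue above; double-check the device name first, as it rewrites the partition table on /dev/sdc:

printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk /dev/sdc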

Now we need to create a physical volume:

pvcreate /dev/sdc1

Next we need to add this physical volume to the volume group vgdata:

vgextend vgdata /dev/sdc1

With the vgdisplay command you can see the details of your volume group:

vgdisplay -v vgdata

You should see that you have the added disk space in the volume group. Now to get to the mirroring:

lvconvert -m1 --mirrorlog core /dev/vgdata/lvdata

=> this converts the logical volume into a mirror with one additional copy (-m1), with the mirror log kept in memory (--mirrorlog core). The process will (depending on your distro) display its progress on your terminal.

If you have a large volume this will take a while (my 1.6TB on 2 primary SATA2 disks with 2 SATA2 spares took around 8 hours to complete).

If you still want to monitor the progress (or state) of the LVM mirror you can issue the following command:

lvs -ao +devices

which should give you a similar output to this:

LV                VG     Attr   LSize Origin Snap%  Move Log Copy%  Convert Devices                              
lvdata            vgdata mwi-ao 1.0T                        100.00           lvdata_mimage_0(0),lvdata_mimage_1(0)
 [lvdata_mimage_0] vgdata iwi-ao 1.0T                                        /dev/sdb1(0)                         
 [lvdata_mimage_1] vgdata iwi-ao 1.0T                                       /dev/sdc1(0)

The Copy% column displays the percentage of data that is mirrored; in the above case this is 100%.

If one of the disks fails, LVM will silently convert the mirrored volume to a so-called linear (normal) volume, so your data will still be accessible.
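
To get the redundancy back after such a failure, the mirror has to be re-established on a replacement disk. A minimal sketch, assuming the replacement partition is /dev/sdd1 (the device names here are assumptions; check pvs and lvs for your actual layout):

vgreduce --removemissing vgdata                      # drop the failed physical volume from the group
pvcreate /dev/sdd1                                   # prepare the replacement partition
vgextend vgdata /dev/sdd1                            # add it to the volume group
lvconvert -m1 --mirrorlog core /dev/vgdata/lvdata    # re-create the mirror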