RAID 1
Installing Amahi with RAID 1 in a 2-HDD setup. See this other page for RAID 5.
Also see another user's setup: #RAID 1 with 3 HDDs.
If you perform either of these setups and one member of the RAID 1 array fails to rejoin on reboot, see the possible solution at #Losing drive on reboot.
________________________________________________________________________
So, to set the scene. I'm pretty much totally new to Linux and I'm just an enthusiastic amateur when it comes to computers in general.
I'm setting up Amahi as a server for home and for the home office of a new business I'm starting. It will mainly store music and film files for home use, and documents for work which will be accessed through a VPN. I may also use it to host a CRM system later.
I wanted to record this partly for myself, so when I realise I've gone wrong and have to do it all again I can see where, but also as my way of contributing. I've relied so much on others putting info up on fora, wikis etc. I hope this will be of use to someone else.
What I'm using:
- Router with internet connection providing DHCP, connected to
- Server (in my case an HP ProLiant ML110 G5; Intel Xeon Dual Core 3065 @ 2.33 GHz; 3GB RAM)
- 2 x 250GB Hard disk drives (in the server)
- Keyboard and mouse attached to server
- Fedora 10 DVD
- Laptop to read instructions from (and type out these notes)
- Registration details from the Amahi site.
Sites I'm using for info:
- http://www.ping.co.il/node/1/ - how to set up RAID with Disk Druid
- http://docs.fedoraproject.org/install-guide/f10/en_US/ch-disk-partitioning.html#sn-partitioning-raid - guide for installing Fedora 10
- http://forums.fedoraforum.org/archive/index.php/t-323.html - advice on partition sizes
- http://www.amahi.org/support/instructions - Instructions for installing Amahi
- http://www.linuxsa.org.au/tips/disk-partitioning.html - explains which partitions are used for what.
- http://orangespike.ca/?q=node/34 - more help with Fedora RAID
N.B. This installation is to install a completely fresh system with no dual-boot. All data on the drives will be wiped.
Installing Fedora 10
- Insert the Fedora disk into the server's DVD drive.
- Turn the server on.
- Select 'install or upgrade an existing system'.
- When asked, test the media (DVD) - just to make sure you won't get any surprises with it hanging on a dodgy disk.
- When the graphical installer screen comes up, click 'Next'.
- Choose your language.
- Choose your keyboard.
- Set a name for your computer and domain, or leave this as the default; click 'Next'.
- Select your time zone; click 'Next'.
- Set a root password (make a note of it); click 'Next'.
Creating RAID partitions
- Select 'create custom layout'; click 'Next'.
- If your drives are already partitioned, delete all the various partitions, by selecting them and clicking 'delete'.
- Click 'RAID'.
- Select 'Create a software RAID partition'; click 'OK'.
- Set the following options:
- File system type: Software RAID
- Allowable drives: tick the first (in my case 'sda')
- Size: 100 MB (300MB for Fedora 12)
- Additional size options: Fixed size
- Force to be primary partition: tick
- Click 'OK'.
- Repeat the 'Create a software RAID partition' step above, but under 'Allowable drives' tick the second drive (sdb).
- Click 'RAID'.
- Select 'Create a RAID device'; click 'OK'.
- Set the following options:
- Mount point: /boot
- File system type: ext3
- RAID device: md0
- RAID level: Level 1
- Raid members: tick both drives
- Click 'OK'.
- Select the 'Free space' on your first hard drive.
- Click 'New'.
- Set the following options:
- Mount point: [just leave this]
- File System Type: swap
- Allowable drives: tick the first
- Size: 2 x RAM (in my case 2 x 3GB = 6,000MB)
- Additional size options: Fixed size
- Click 'OK'
- Repeat the swap-partition step above, but under 'Allowable drives' tick the second drive (sdb).
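The 2 x RAM rule above can be computed rather than guessed. A small sketch (my addition, not part of the original guide) that reads total RAM from /proc/meminfo and prints the number of megabytes to type into the Size box:

```shell
# Swap sizing rule of thumb from the guide: swap = 2 x RAM.
# MemTotal in /proc/meminfo is reported in kB; convert to MB.
ram_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
swap_mb=$(( ram_kb * 2 / 1024 ))
echo "suggested swap size: ${swap_mb} MB"
```

On the author's 3GB machine this comes out around the 6,000MB figure used above.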
- To set up each of the extra partitions on the drives, repeat the 'Create a software RAID partition' and 'Create a RAID device' steps above (once for each new partition). These are the changes to make each time:
- Change the partition sizes. The sizes I have used are:
- /boot 100MB (300MB for Fedora 12)
- /usr 20000MB
- / 8000MB
- /var 5000MB
- /tmp 5000MB
- /home 10000MB
- /var/hda/files (what was left - this is where Amahi stores all your shared folders)
- Do not tick 'Force to be primary partition'.
- Use whatever RAID device is offered as the default (md1, md2 and so on).
- Click 'Next'
- Click 'Write changes to disk'. THIS WILL WIPE ALL DATA ON YOUR DISKS.
Completing Fedora install
- When asked, tick 'Install boot loader on /dev/md0'; Click 'Next'.
- Deselect Office and Productivity, unless you want it (e.g. to simultaneously use your HDA as a desktop)
- Click on the "Add additional software repositories" button. If asked, enable your network interface.
- Add the Amahi repository with the following information:
- Repository name: amahi
- Repository URL: http://f10.amahi.org (http://f12.amahi.org for Fedora 12)
- I've also added the additional repository shown to me:
- Installation Repo
- Click 'Next' and proceed through to the completion of the install.
- Wait while all that downloads and installs.
- Take the Fedora DVD out of the drive.
- For Fedora 12 we need to mark our /boot partitions as bootable in fdisk:
- Press Ctrl+Alt+Shift+F2 on your keyboard to get a shell prompt.
- Type fdisk /dev/sda and press Enter.
- Type p and press Enter (this prints the partition table so you can see the partition numbers).
- Type a and press Enter (this toggles the bootable flag).
- Type the number of your first 'Linux raid autodetect' partition and press Enter.
- Type w and press Enter (this writes the changes and exits).
- Repeat for /dev/sdb.
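Those keystrokes can be rehearsed safely before touching a real drive. A sketch (my addition, not from the guide) that runs the same fdisk session against a scratch image file; /tmp/raidtest.img is a made-up path standing in for /dev/sda:

```shell
# Create a small scratch image standing in for /dev/sda.
truncate -s 16M /tmp/raidtest.img

# Pipe the keystrokes in: n=new partition, p=primary, 1=partition
# number, two blank lines=default start/end sectors, a=toggle the
# bootable flag, 1=partition to flag (older fdisk prompts for it;
# newer fdisk auto-selects and treats the stray 1 as a harmless
# unknown command), w=write the table.
printf 'n\np\n1\n\n\na\n1\nw\n' | fdisk /tmp/raidtest.img

# The listing should now show partition 1 with the boot flag (*).
fdisk -l /tmp/raidtest.img
```

On the real server you would run fdisk against /dev/sda and /dev/sdb instead of the image file.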
- Click 'Reboot'.
- Follow the on screen instructions.
- Create a new (non-root) user, giving a password (write it down); Click 'Forward'.
- Set the date and time; Click 'Forward'.
- Select 'send profile'; Click 'Finish'.
- Log in using the username and password you created above.
Install Amahi
- Open a terminal (Applications > System Tools > Terminal).
- Type the following commands (press [Enter] after each):
- su -
- [your root password]
- hda-install [the code Amahi gave you]
- If SELinux throws up any errors (it shows what looks like a sheriff's star in the top bar; double-click it), read each one and follow the instructions.
- Reboot (this may not be necessary, but I was having trouble so decided to reboot after each stage to figure out where the problem was).
- Log in.
- Open a terminal window.
- Type the following commands (press [Enter] after each):
- su -
- [your root password]
- grub
- device (hd0) /dev/sda
- root (hd0,0)
- setup (hd0)
- device (hd1) /dev/sdb
- root (hd1,0)
- setup (hd1)
- Close the terminal window.
Check RAID is working
- You can test whether RAID is working by shutting down the server, disconnecting one of the drives and seeing whether it boots when you turn it back on. If it does, shut it down again, reconnect the first drive, disconnect the other, and try to boot again. If it boots from each drive alone, everything is OK.
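As a complement to the unplug test, the array state can also be checked in software via /proc/mdstat. A sketch (my addition; the sample text mimics /proc/mdstat so the check can be tried without a real array):

```shell
# /proc/mdstat shows [UU] when both RAID 1 members are active;
# [U_] or [_U] means the array is running degraded on one disk.
sample='md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]'

# On the real server you would read the live file instead:
#   grep -A1 '^md0' /proc/mdstat
if printf '%s\n' "$sample" | grep -q '\[UU\]'; then
  echo "md0: both mirror members active"
else
  echo "md0: DEGRADED - check mdadm --detail /dev/md0"
fi
```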
- Log back in to your system and check for and install any other updates (go to System > Administration > Update System then follow the instructions to install the updates you want).
- Once everything has updated, reboot the server.
- Once you've logged back in to Fedora, open another terminal.
- Become root, by typing:
- su -
- [enter your root password]
- You can now check your RAID drives by typing:
- mdadm --detail /dev/md0 [then repeat with another /dev/md... for each extra RAID device you set up; in my case md0 through md5]
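Rather than reading each mdadm --detail report by eye, the State line can be extracted. A sketch (my addition, not from the guide), with sample output inlined so the parsing can be tried anywhere:

```shell
# Pull the 'State :' line out of mdadm --detail style output.
# On the real server you would capture the live report instead:
#   detail=$(mdadm --detail /dev/md0)
detail='/dev/md0:
        Version : 0.90
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0'

# Split each line on "colon plus spaces" and print the value
# of the State field ("clean" means the mirror is healthy).
state=$(printf '%s\n' "$detail" | awk -F': *' '/State :/ {print $2}')
echo "md0 state: $state"
```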
- You can then set up monitoring so that mdadm will email you if anything goes wrong by typing:
- mdadm --monitor --scan --mail=you@domain.com --delay=3600 --daemonise /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5
- THE MONITORING STEP ABOVE HAS BEEN CAUSING MY COMPUTER TO HANG DURING BOOT, SO I HAVEN'T IMPLEMENTED IT IN THE END. You may choose to use it (hopefully someone with more understanding than me will see what I've got wrong and correct this page). To ensure that monitoring runs whenever you start the server, this command should apparently be put into /etc/rc.local. You can do this by entering:
- nano /etc/rc.local
- In the window that opens, add the following line at the end: mdadm --monitor --scan --mail=you@domain.com --delay=3600 --daemonise /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5
- Save and Exit nano
- Reboot
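Since the rc.local approach caused the author's machine to hang, an alternative worth noting (my suggestion, not from the original guide) is to put the mail address in /etc/mdadm.conf and let the distribution's stock mdmonitor service do the monitoring. A sketch, writing to a scratch path so it is safe to try:

```shell
# On the real server this would be /etc/mdadm.conf; a scratch copy
# is used here so the sketch doesn't touch the live config.
# MAILADDR is read by 'mdadm --monitor --scan', which the mdmonitor
# init service runs at boot.
conf=/tmp/mdadm.conf.example
echo 'MAILADDR you@domain.com' >> "$conf"

# Verify the line landed:
grep '^MAILADDR' "$conf"
```

With the address in place, `mdadm --monitor --scan --test --oneshot` can be used as root to send a one-off test mail for each array.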
- Now you can go to http://hda to set up your Amahi server.
I hope this helps, but please feel free to correct anything I've got wrong. I've really been feeling my way here, so I can't guarantee that any of the above will work or work seamlessly, but I hope it will.
RAID 1 with 3 HDDs
I'm another user who's also using RAID 1 and wanted to give a bit back to the community, so I thought I'd describe my setup.
I have 3 HDDs: sda is 40GB; sdb and sdc are 1.5TB each.
I reinstalled everything to create the RAID 1 array and to update my system to Fedora 12 (so make a backup of everything first!). With all three drives installed, during the partitioning stage of the Fedora install I did the following (see steps 11-19 above for more details):
sda - formatted with a / partition and swap
sdb - clicked 'RAID', created a software RAID partition and made it the maximum size of the disk
sdc - same as for sdb
I then created a single RAID 1 volume using sdb and sdc and mounted it as /var/hda/files
With this done, proceed with the rest of the Fedora/Amahi installation as per the Amahi documentation.
With this setup, all my personal files and media are stored on the RAID 1 array, while the Fedora and Amahi filesystems and programs live on the primary drive, sda. This lets you update Fedora and/or Amahi with much less risk to your personal files.
Please add any other comments/corrections/suggestions as you see fit, and thanks to the first user, who started this section and inspired me to contribute as well.
Losing drive on reboot
If you perform either of the above setups and find that on reboot one member of the array is not present and the command "mdadm --detail /dev/md0" shows that the array is degraded, the following may help.
In a Fedora forum thread (http://www.fedoraforum.org/forum/showthread.php?t=255048) it was found that some makes of hard drive can cause this problem: they take a while to spin up, so the system gives up on one of them and carries on with a degraded array.
This topic describes LVM (Logical Volume Management) mirroring in Linux in easy-to-follow steps.
difficulty: medium/hard
The starting point is that you already have a working install of a Linux distribution, with LVM set up on one disk and the full disk capacity used in a single logical volume.
Now let's say you want a little more redundancy, and you don't have the option of recreating the logical volume from scratch with Linux software RAID (which is preferable to LVM mirroring). You buy a new disk with at least the same capacity as your old disk.
In this tutorial I will use the volume group name "vgdata" and logical volume name "lvdata"
Adding a new disk to an existing Volume Group
warning: I will explain adding a __3rd__ hard drive (/dev/sdc), so we will have the following setup:
- /dev/sda is partitioned for the linux installation
- /dev/sdb is partitioned and used for LVM for a single datavolume (/dev/vgdata/lvdata)
- /dev/sdc is to be added to volume group vgdata to make sure there is enough space for mirroring
After physically adding the disk, we need to add it to the volume group. Log in as root and list your current partitions with:
fdisk -l
now let's create a partition on the 3rd hard drive:
fdisk /dev/sdc => you will enter a menu
- n => create a new partition
- p => make the partition primary
- 1 => the partition number; then press <enter> twice to accept the defaults so the partition fills the whole drive
- t => set the partition type; enter 8e for Linux LVM
- w => write changes to disk
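The session above can be rehearsed on an image file first. A sketch (my addition; /tmp/lvmtest.img is a made-up path standing in for /dev/sdc):

```shell
# Scratch image standing in for the new /dev/sdc.
truncate -s 16M /tmp/lvmtest.img

# Same keystrokes as the recipe above: n=new, p=primary,
# 1=partition number, two blank lines=default start/end sectors,
# t=change type (fdisk auto-selects the only partition),
# 8e=Linux LVM, w=write.
printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk /tmp/lvmtest.img

# The listing should now show the partition typed as 'Linux LVM'.
fdisk -l /tmp/lvmtest.img
```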
now we need to create a physical volume:
pvcreate /dev/sdc1
next we need to add this physical volume to the volume group vgdata:
vgextend vgdata /dev/sdc1
with the vgdisplay command you can see the details of your volumegroup:
vgdisplay -v vgdata
You should see that you have the added disk space in the volume group. Now to get to the mirroring:
lvconvert -m1 --mirrorlog core /dev/vgdata/lvdata
=> this converts the logical volume to a mirror with one extra copy (-m1), keeping the mirror log in memory (--mirrorlog core). Depending on your distro, the process will display its progress in your terminal.
If you have a large volume this will take a while (my 1.6TB on 2 primary SATA2 disks with 2 SATA2 spares took around 8 hours to complete).
If you still want to monitor the progress (or state) of the LVM mirror you can issue the following command:
lvs -ao +devices
which should give you a similar output to this:
LV                 VG     Attr   LSize Origin Snap% Move Log Copy%  Convert Devices
lvdata             vgdata mwi-ao 1.0T                        100.00         lvdata_mimage_0(0),lvdata_mimage_1(0)
[lvdata_mimage_0]  vgdata iwi-ao 1.0T                                       /dev/sdb1(0)
[lvdata_mimage_1]  vgdata iwi-ao 1.0T                                       /dev/sdc1(0)
The Copy% column will display the % of data that is mirrored, in the above case this is 100%.
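To watch just the sync percentage, the Copy% column can be pulled out of the lvs output. A sketch (my addition; sample output is inlined so the pipeline can be tried anywhere):

```shell
# Mirrored LVs carry the 'm' attribute (mwi-ao); with -o +devices
# appended, Copy% is the second-to-last column on that line.
# On the real system you would capture live output instead:
#   lvs_out=$(lvs -a -o +devices)
lvs_out='  LV     VG     Attr   LSize Log Copy%  Devices
  lvdata vgdata mwi-ao 1.00t     100.00 lvdata_mimage_0(0),lvdata_mimage_1(0)'

copy=$(printf '%s\n' "$lvs_out" | awk '$3 ~ /^m/ {print $(NF-1)}')
echo "mirror sync: ${copy}%"
```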
Now if one of the disks fails, LVM will silently convert the mirrored volume to a so-called linear (normal) volume, so your data will still be accessible.