{{NeedsUpdate}}
 
 
Installing Amahi with RAID 1 in a two-HDD setup. See this other page for [[RAID5]].
Also see another user's setup at [[#RAID 1 with 3 HDDs]].

If you perform either of these setups and have trouble with one of the members of the RAID 1 array not rejoining on reboot, try the possible solution at [[#Losing drive on reboot]].
 
----
So, to set the scene: I'm pretty much totally new to Linux, and just an enthusiastic amateur when it comes to computers in general.
Please add any other comments/corrections/suggestions as you see fit, and thanks to the first user who started this section and inspired me to contribute as well.
 
=<span id="reboot_fix">Losing drive on reboot</span>=
 
If you perform either of the above setups and find that on reboot one member of the array is not present and the command "mdadm --detail /dev/md0" shows that the array is degraded, the following may help.
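
A quick way to see whether the array came up complete after a reboot (assuming your array is /dev/md0, as above):

 # Show all active md arrays; a degraded RAID 1 shows [U_] or [_U]
 cat /proc/mdstat
 # Show the detailed array state; look for "degraded" in the State line
 mdadm --detail /dev/md0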
 
A Fedora forum thread (http://www.fedoraforum.org/forum/showthread.php?t=255048) found that some makes of hard drive can cause this problem: they take a while to spin up, so the system gives up on one of them and carries on with the degraded array.

To fix this, add the lines recommended in that thread to your /etc/rc.d/rc.local file. Do all of this as root, and back up the file first, just in case.
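
As a rough illustration only (the exact lines come from the thread above; /dev/md0 and /dev/sdb1 here are hypothetical names), the additions to rc.local take this general shape:

 # Appended to /etc/rc.d/rc.local: give the slow drive time to spin
 # up, then re-add it to the array. Substitute your own array device
 # and the member partition that keeps dropping out.
 sleep 30
 mdadm /dev/md0 --re-add /dev/sdb1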
 
 
== LVM (Logical Volume Management) mirroring in Linux, in easy-to-follow steps ==
 
'''difficulty: medium/hard'''
 
The starting point of this tutorial is that you already have a working install of a Linux distribution.
You already have LVM set up on one disk, and you're using the full disk capacity in a single logical volume.

Now let's say you want a little more redundancy, and you don't have the option of creating the logical volume from scratch with Linux software RAID (which is preferable to LVM mirroring; more on that at the end). You buy a new disk with at least the same capacity as your old disk.

In this tutorial I will use the volume group name "vgdata" and the logical volume name "lvdata".
 
=== Adding a new disk to an existing Volume Group ===
Warning: I will explain adding a '''third''' hard drive (/dev/sdc), and we will have the following setup:
* /dev/sda is partitioned for the Linux installation
* /dev/sdb is partitioned and used by LVM for a single data volume (/dev/vgdata/lvdata)
* /dev/sdc is to be added to volume group vgdata to make sure there is enough space for mirroring
 
After physically adding the disk, we need to add it to the volume group.
Log in as root and list your current partitions with:
 fdisk -l
Now let's create a partition on the third hard drive:
 fdisk /dev/sdc
This puts you in an interactive menu:
 n => create a new partition
 p => make the partition primary
 1 => the partition number; press <enter> twice more to create a partition that fills the whole drive
 t => set the partition type; enter 8e for Linux LVM
 w => write the changes to disk
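
If you'd rather not work through the interactive menu, a non-interactive tool such as parted can produce an equivalent result (a sketch; adjust the device name to your own):

 # Create an MBR partition table, one primary partition spanning the
 # disk, and flag it for LVM use
 parted -s /dev/sdc mklabel msdos
 parted -s /dev/sdc mkpart primary 0% 100%
 parted -s /dev/sdc set 1 lvm on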
 
Now we need to create a physical volume:
 pvcreate /dev/sdc1
Next we add this physical volume to the volume group vgdata:
 vgextend vgdata /dev/sdc1
With the vgdisplay command you can see the details of your volume group:
 vgdisplay -v vgdata
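
For a more compact check, the standard LVM reporting commands work too (vgdata is the name used in this tutorial):

 # One line per physical volume, showing which volume group it belongs to
 pvs -o pv_name,vg_name,pv_size
 # Summary of the volume group, including total and free space
 vgs vgdata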
 
You should see the added disk space in the volume group.
Now to get to the mirroring:
 lvconvert -m1 --mirrorlog core /dev/vgdata/lvdata
This converts the logical volume to a mirror with one extra copy of the data (-m1), keeping the mirror log in memory (--mirrorlog core). Depending on your distro, the process will display its progress on your terminal.
 
If you have a large volume this will take a while (my 1.6TB on 2 primary SATA2 disks with 2 SATA2 spares took around 8 hours to complete).
 
If you still want to monitor the progress (or state) of the LVM mirror, you can issue the following command:
 lvs -a -o +devices
which should give you output similar to this:
 
 LV                VG     Attr   LSize Origin Snap% Move Log Copy%  Convert Devices
 lvdata            vgdata mwi-ao 1.0T                        100.00         lvdata_mimage_0(0),lvdata_mimage_1(0)
 [lvdata_mimage_0] vgdata iwi-ao 1.0T                                       /dev/sdb1(0)
 [lvdata_mimage_1] vgdata iwi-ao 1.0T                                       /dev/sdc1(0)
 
The Copy% column will display the % of data that is mirrored, in the above case this is 100%.
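
To keep an eye on a long-running conversion, you can wrap that command in watch (a standard utility; the 60-second interval is just a suggestion):

 # Refresh the mirror status once a minute until Copy% reaches 100
 watch -n 60 "lvs -a -o +devices vgdata"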
 
Now, if one of the disks fails, LVM will silently convert the mirrored volume to a so-called linear (normal) volume, so your data will still be accessible. This silent degradation is also why Linux software RAID is generally preferable to LVM mirroring: you have to check the mirror state yourself to notice that the redundancy is gone.
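
If that happens, you can restore redundancy once a replacement disk is in place. A minimal sketch, assuming the replacement is /dev/sdd and has been partitioned for LVM like the others:

 # Make the new disk's partition a physical volume and add it to the group
 pvcreate /dev/sdd1
 vgextend vgdata /dev/sdd1
 # Drop the failed physical volume's metadata from the group
 vgreduce --removemissing vgdata
 # Re-mirror the (now linear) volume onto the new disk
 lvconvert -m1 --mirrorlog core /dev/vgdata/lvdata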