[SATLUG] LVM+RAID System Consolidation Tale....
j at jvpappas.net
Wed Jun 3 10:50:01 CDT 2009
I thought I would relay my tale of the last few days: a consolidation of a
couple of systems (my home TV system + home NAS/VM server) that is
ultimately an LVM and SW RAID exercise.
So here is the situation:
I had 2 separate systems that I wanted to consolidate (both use LVM; the
server also had SW RAID):
1. My SageTV HTPC system with a 200GB OS drive and 2x1TB data drives, along
with an encoder card
2. My home server with a 300GB mirrored (R1+LVM) OS and 4x500GB Data
(R5+LVM), with an 8-slot drive cage (2 open slots, conveniently enough)
So the layout:
- SageTV HTPC
  - sdb - 1TB - 1 LVM partition (VG: VData)
  - sdc - 1TB - 2 partitions, 500GB split (VG: MData)
  - LVs - VData: Media, tv2, tv3, Sage-Backup; MData: Video, Transcode
- Home Server
  - RAID layout - md0 = boot; md1 = root; md2 = LVM VG: Data; md3 = swap
  - md3 : active raid1 sdc2 sde2
        2008000 blocks [2/2] [UU]
  - md2 : active raid5 sdd1 sdf1 sdb1 sda1
        1465151808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
  - md0 : active raid1 sdc1 sde1
        136448 blocks [2/2] [UU]
  - md1 : active raid1 sdc3 sde3
        290904896 blocks [2/2] [UU]
  - VG Data LVs: export, iso, ftp, et cetera
1. Power down both systems. (If I did not have to move the PCI TV tuner, I
would only have had to shut down the TV box, due to the looming IP address
conflict; SATA is hot-swappable, but unmounting volumes is still wise
before a hot-swap.)
2. Install transferred hardware to home server
3. Power on in single-user mode (better control of services and disks)
4. Reconfigure network info, configure services, verify mounts, etc.
5. `vgmerge` the VData and Data VGs.
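   For reference, the merge goes roughly like this (vgmerge needs the
   source VG deactivated first, and the PE sizes of the two VGs have to
   match; "Data" as the surviving name is an assumption here):

      vgchange -an VData    # deactivate the source VG first
      vgmerge Data VData    # fold VData's PVs and LVs into Data
      vgs                   # sanity-check the combined VG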
6. Go into runlevel 3, since the VGs are now combined as required.
7. `pvmove` MData so that all LVs are on the second partition, then
`vgreduce` and `pvremove` the first partition.
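   Something like the following, assuming the two-partition 1TB drive
   came up as sdl after the move:

      pvmove /dev/sdl1 /dev/sdl2    # push MData's LVs onto the second half
      vgreduce MData /dev/sdl1      # drop the now-empty PV from the VG
      pvremove /dev/sdl1            # wipe the LVM label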
8. fdisk the first partition to type 'fd' (Linux raid autodetect)
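   (In `fdisk /dev/sdl`: 't' to change a partition type, select
   partition 1, enter hex code 'fd', then 'w' to write the table.)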
9. `mdadm --add /dev/md2 /dev/sdl1` to add the new partition as a "hot spare"
10. `mdadm --grow -n 5 /dev/md2` to move from a 4-disk RAID5 to a 5-disk
RAID5 using the just-added hot spare. md2 is now 1.8TiB with ~900GiB free.
`cat /proc/mdstat` to verify the RAID:
   1. md2 : active raid5 sdl1 sdd1 sdf1 sdb1 sda1
         1465151808 blocks super 0.91 level 5, 64k chunk, algorithm 2
         [=======>.............] reshape = 35.7% (174522176/488383936)
   2. Yep, it takes a really long time (600 mins = 10 hours); good thing
      this is an online operation. It runs the CPU (2GHz Athlon64 x1) at
      ~25% doing the XOR calculations to lay out parity.
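   (One step implied here: LVM will not see the new space until the PV
   is resized, something like:

      mdadm --wait /dev/md2    # block until the reshape completes
      pvresize /dev/md2        # grow the PV to fill the bigger array
      pvs /dev/md2             # verify the new size and free extents

   assuming md2 is simply grown in place as a PV of the merged VG.)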
11. Now start evacuating sdk to md2:
1. `pvmove -n media /dev/sdk1 /dev/md2` - Takes about 1 hr per 100GB,
in this case 4 hours
2. `pvmove -n tv2 /dev/sdk1 /dev/md2` - Takes another 4 hours
   3. `pvmove -n tv3 /dev/sdk1 /dev/md2` - Only 2 hours this time, as tv3
      is smaller (about 200GB at the ~1 hr per 100GB rate).
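   Once that last pvmove finishes, the empty PV can be pulled out of the
   VG (again assuming the merged VG kept the name Data):

      vgreduce Data /dev/sdk1    # drop the evacuated PV from the VG
      pvremove /dev/sdk1         # clear the label so sdk can be repartitioned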
This is the point where I am now. I am going to execute the following steps
once step 11.3 above is done.
1. Once sdk is evacuated, I will have an empty 1TB sdk. (The other 1TB
drive, sdl, is 50% md2 and 50% LVM VG data2.)
2. I will probably take 50% of sdk to add to md2, so that md2 becomes a
6x500GB R5 array (it will take about 18 hrs to "reshape" the now-2.2TiB R5
array), and then move the volumes from sdl2 to md2 - sketched below.
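   Sketched out, and assuming sdk1 gets recreated as a 500GB type-'fd'
   partition (and that sdl2's VG is merged in as well, since pvmove only
   works within one VG):

      mdadm --add /dev/md2 /dev/sdk1    # new 500GB partition as hot spare
      mdadm --grow -n 6 /dev/md2        # reshape to a 6-disk RAID5 (~18 hrs)
      pvresize /dev/md2                 # once done, expose the new space to LVM
      pvmove /dev/sdl2 /dev/md2         # then evacuate sdl2 into the array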
That will leave the 500GB sd[kl]2 partitions empty. I am not sure ultimately
what I will do with those, but I will probably mirror them (/dev/md4) and
move the VMware Server stuff (SYS:VM-/vm) to md4, something like the sketch
below.
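If I go that route, it would look something like this (partition names
hypothetical until the drives are actually repartitioned):

   mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdk2 /dev/sdl2
   pvcreate /dev/md4    # make the new mirror an LVM PV
   # then vgextend whatever VG holds the VM volume and pvmove it onto md4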
That is it for now, please share any thoughts!