Convert an existing 2 disk RAID 1 to a 4 disk RAID 10

December 7, 2015

When you want a reliable set-up for your data storage and only two disks are available, an LVM on top of a RAID 1 looks like the best solution. When an upgrade becomes necessary, however, ending up with two separate RAID 1 based LVMs is no longer so appealing.

To keep the reliability of the disk mirroring given by RAID 1, while still having a single LVM for the data storage, the solution is to convert the existing two-disk RAID 1 into a four-disk RAID 10.

I was inspired by several tutorials from the web; the most useful ones were https://www.burgundywall.com/post/convert-raid1-to-raid10-with-lvm and http://www.duntuk.com/how-install-new-drive-linux-larger-2tb-proper-alignment.

Starting from the above, I am going to describe my procedure in more detail, as a numbered step-by-step approach.

STEP 1: Original setup

The original setup consists of an LVM on top of a RAID 1 software raid, mounted on nas1 (my first storage server) under /media/storage.

[root@nas1 ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sde1[0]
4883637248 blocks super 1.2 [2/2] [UU]
bitmap: 0/37 pages [0KB], 65536KB chunk

STEP 2: Back-up
Very important: even though this procedure tries to avoid recreating everything from scratch, a full back-up is always very useful. My back-up consisted of an rsync of all the data from nas1 (my first storage server) to nas2 (my second storage server). Luckily for me there was enough space there for the 1.9TB of data. As an alternative, one or more external USB drives can be used.

[root@nas1 ~]# rsync -avh -e ssh --delete /media/storage/ root@nas2.voina.org:/media/storage/backup-nas1
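
An easy extra check (not strictly part of the procedure) is to repeat the same rsync as a dry run; if the back-up is complete it should list nothing left to transfer:

# Dry run (-n): only reports what would still be copied, changes nothing
rsync -avhn -e ssh --delete /media/storage/ root@nas2.voina.org:/media/storage/backup-nas1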

STEP 3: Prepare the new drives.
As you can see, the old drives that were part of the original RAID 1 are sdb and sde, with the primary partitions sdb1 and sde1 as the raid members.
The new drives were assigned by the system as /dev/sdc and /dev/sdd.
Because they are 5TB WD Red drives, bigger than the 2TB limit of an MBR partition table, we cannot partition them with fdisk; we need a GPT label, created here with parted.

[root@nas1 ~]# parted /dev/sdc
mklabel gpt
unit TB
mkpart primary 0 100%
quit

After this set of commands we ended up with the following:

[root@nas1 /]# fdisk /dev/sdc

Welcome to fdisk (util-linux 2.25.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/sdc: 4.6 TiB, 5000981078016 bytes, 9767541168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F5FCDF48-2CAD-452D-9B7B-E26A1686A0BB

Device     Start        End    Sectors  Size Type
/dev/sdc1   2048 9767540735 9767538688  4.6T Linux filesystem

Repeat the same for /dev/sdd

[root@nas1 ~]# parted /dev/sdd
mklabel gpt
unit TB
mkpart primary 0 100%
quit

After this set of commands we ended up with the following:

[root@nas1 /]# fdisk /dev/sdd

Welcome to fdisk (util-linux 2.25.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/sdd: 4.6 TiB, 5000981078016 bytes, 9767541168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 24B59B5C-0F6E-4BB4-980B-32D9D3A61CB9

Device     Start        End    Sectors  Size Type
/dev/sdd1   2048 9767540735 9767538688  4.6T Linux filesystem
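
Because these drives have 4096-byte physical sectors, it does not hurt to double-check the partition alignment before building the array; parted can verify it directly (an optional sanity check, the 2048-sector start shown above already looks fine):

# Should report that partition 1 is optimally aligned on both new disks
parted /dev/sdc align-check optimal 1
parted /dev/sdd align-check optimal 1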

STEP 4: Create the new RAID 10

After sdc and sdd were partitioned I created the new RAID 10 with two missing devices. Note that the missing devices are in the even positions (the second and fourth slots): in the default near layout the mirror pairs are slots 1-2 and 3-4, so each pair gets one new disk now and will receive one disk from the old RAID 1 later.

[root@nas1 /]# mdadm -v --create /dev/md/new_raid --level=raid10 --raid-devices=4 /dev/sdc1 missing /dev/sdd1 missing
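
At this point the array is intentionally degraded. A quick sanity check is mdadm --detail, which should list four raid devices, with the two new disks active and the two mirror slots marked removed:

# Verify the degraded layout: expect 2 active devices and 2 empty (removed) slots
mdadm --detail /dev/md/new_raid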

Examine the newly created raid and add an entry for it under /etc/mdadm.conf

[root@nas1 /]# mdadm --examine --scan
[root@nas1 /]# vi /etc/mdadm.conf
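
Instead of editing the file by hand, the scan output can also be appended directly and then reviewed (just make sure you do not end up with duplicate ARRAY lines):

# Append the detected ARRAY lines to the config, then review the file in an editor
mdadm --examine --scan >> /etc/mdadm.conf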

As a result my mdadm configuration file ended up looking like:

[root@nas1 /]# cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=nas1.voina.org:0 UUID=ea835246:635f9396:a0ff166e:a5b98bd5
ARRAY /dev/md/new_raid metadata=1.2 UUID=e84a5db7:d2beaded:df43c468:a28d59e1 name=nas1.voina.org:new_raid

Note that I still have the entry for the old RAID there. That entry will be removed later, after the old RAID 1 is deleted.


STEP 5: Add the new raid to the existing volume group.

Identify the names of the existing physical volumes and logical volumes.

[root@nas1 ~]# pvscan
PV /dev/sda2 VG fedora_localhost lvm2 [223.08 GiB / 0 free]
PV /dev/md0 VG lvm-5T lvm2 [4.55 TiB / 0 free]
Total: 2 [4.77 TiB] / in use: 2 [4.77 TiB] / in no VG: 0 [0 ]

[root@nas1 ~]# lvscan
ACTIVE '/dev/fedora_localhost/swap' [2.02 GiB] inherit
ACTIVE '/dev/fedora_localhost/home' [171.06 GiB] inherit
ACTIVE '/dev/fedora_localhost/root' [50.00 GiB] inherit
ACTIVE '/dev/lvm-5T/lvm0' [4.55 TiB] inherit

Create a physical volume on the newly created raid device, then extend the existing volume group (lvm-5T in my case) with it. The last command prints the result.

[root@nas1 /]# pvcreate /dev/md/new_raid
[root@nas1 /]# vgextend lvm-5T /dev/md/new_raid
[root@nas1 /]# pvs -o+pv_used
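
If the extend worked, the volume group now spans both arrays, with the old PV still full and the new one still empty; vgs gives a compact summary:

# One-line summary of the volume group: expect 2 PVs and roughly half the space free
vgs lvm-5T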

STEP 6: Move data from old array to the new array

This is a time-consuming step; in my case it took around 500 minutes.

[root@nas1 /]# pvmove -i1 -v /dev/md0 /dev/md/new_raid
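
The -i1 option just prints progress every second. If the move gets interrupted (for example by a reboot) it does not have to restart from scratch; as far as I know pvmove can pick up where it left off:

# Resume any interrupted move (pvmove with no arguments continues all pending moves)
pvmove
# Or abandon the move and return the extents to the source PV
pvmove --abort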

STEP 7: Remove the old raid from the existing volume group

After step 6 was executed I ended up with all my data on what is effectively a RAID 0: none of the mirror drives (the second and fourth slots) have been added to the new RAID 10 yet, so there is no redundancy at this point. Luckily I have the back-up, so if something goes wrong at this step my data is safe.
Now I started to get rid of the old raid.
First, remove the old raid from the volume group (lvm-5T in my case).

[root@nas1 /]# vgreduce lvm-5T /dev/md0
Removed "/dev/md0" from volume group "lvm-5T"

STEP 8: Remove one disk from the old raid and add it to the new raid

I started to remove the disks from the old raid one by one.

First, fail and remove one of the disks from the old raid /dev/md0

[root@nas1 /]# mdadm /dev/md0 --fail /dev/sde1 --remove /dev/sde1

Check that the old raid is now formed of only one disk.

[root@nas1 /]# cat /proc/mdstat

Stop the old raid /dev/md0

[root@nas1 /]# mdadm --stop /dev/md0

Remove the /dev/md0 entry from /etc/mdadm.conf

[root@nas1 /]# vi /etc/mdadm.conf
[root@nas1 /]# cat /etc/mdadm.conf
ARRAY /dev/md/new_raid metadata=1.2 UUID=e84a5db7:d2beaded:df43c468:a28d59e1 name=nas1.voina.org:new_raid

Clear the superblock of the failed disk from the old raid. This removes the drive's membership of the old raid.

[root@nas1 /]# mdadm --zero-superblock /dev/sde1
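
To be sure the partition no longer carries any md metadata, it can be examined again; it should now report that no superblock was found:

# Expect a message like "No md superblock detected on /dev/sde1"
mdadm --examine /dev/sde1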

Add the disk removed from the old raid to the new raid, then check the progress of the rebuild. This is a time-consuming step that took around 500 minutes.

[root@nas1 /]# mdadm /dev/md/new_raid --add /dev/sde1
[root@nas1 /]# cat /proc/mdstat
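
Rather than re-running cat by hand, the rebuild can be followed with watch:

# Refresh the resync/recovery status every 10 seconds
watch -n 10 cat /proc/mdstat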

STEP 9: Remove the last disk from the old raid and add it to the new raid

After the first disk from the old raid was successfully added to the new raid, repeat the same steps for the remaining disk.

Clear the superblock of the last disk from the old raid. This removes the drive's membership of the old raid.

[root@nas1 /]# mdadm --zero-superblock /dev/sdb1

Add the disk removed from the old raid to the new raid, then check the progress of the rebuild. This is a time-consuming step that took around 500 minutes.

[root@nas1 /]# mdadm /dev/md/new_raid --add /dev/sdb1
[root@nas1 /]# cat /proc/mdstat
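
Since this rebuild also takes several hours, it may be convenient to simply block until it is done (for instance before continuing in a script); mdadm has a wait mode for exactly that:

# Block until any resync/recovery on the array has finished
mdadm --wait /dev/md/new_raid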

STEP 10: Resize the logical volume to fill the new raid
At this step we are going to use the new space to extend the storage mounted under /media/storage. Note that this is done with the resource up and running, so no downtime is needed for any application using it. The operation of my OwnCloud server, which uses /media/storage, was not interrupted for a single second.

Check the logical volume situation at this point.

[root@nas1 ~]# lvscan
ACTIVE '/dev/fedora_localhost/swap' [2.02 GiB] inherit
ACTIVE '/dev/fedora_localhost/home' [171.06 GiB] inherit
ACTIVE '/dev/fedora_localhost/root' [50.00 GiB] inherit
ACTIVE '/dev/lvm-5T/lvm0' [4.55 TiB] inherit

Use the lvresize command to extend the logical volume to fill the entire 10TB physical volume.

[root@nas1 /]# lvresize -l +100%FREE /dev/lvm-5T/lvm0

Extend the file system to fill the entire logical volume (resize2fs can do this while the filesystem is mounted).

[root@nas1 /]# resize2fs /dev/lvm-5T/lvm0
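
The last two commands can also be combined: lvresize accepts a --resizefs flag that calls the appropriate filesystem resize tool right after growing the logical volume:

# Grow the LV and the filesystem on it in a single step
lvresize -l +100%FREE --resizefs /dev/lvm-5T/lvm0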

STEP 11: Check the result of the conversion from RAID 1 to RAID 10

[root@nas1 /]# cat /proc/mdstat
Personalities : [raid1] [raid10]
md127 : active raid10 sdb1[5] sde1[4] sdc1[0] sdd1[2]
9767276544 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
bitmap: 0/73 pages [0KB], 65536KB chunk

unused devices: <none>
[root@nas1 /]# pvscan
PV /dev/sda2 VG fedora_localhost lvm2 [223.08 GiB / 0 free]
PV /dev/md127 VG lvm-5T lvm2 [9.10 TiB / 0 free]
Total: 2 [9.31 TiB] / in use: 2 [9.31 TiB] / in no VG: 0 [0 ]
[root@nas1 /]# lvscan
ACTIVE '/dev/fedora_localhost/swap' [2.02 GiB] inherit
ACTIVE '/dev/fedora_localhost/home' [171.06 GiB] inherit
ACTIVE '/dev/fedora_localhost/root' [50.00 GiB] inherit
ACTIVE '/dev/lvm-5T/lvm0' [9.10 TiB] inherit
[root@nas1 /]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.9G 0 7.9G 0% /dev
tmpfs 7.9G 364K 7.9G 1% /dev/shm
tmpfs 7.9G 1012K 7.9G 1% /run
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/mapper/fedora_localhost-root 50G 25G 23G 52% /
tmpfs 7.9G 2.2M 7.9G 1% /tmp
/dev/sda1 477M 150M 298M 34% /boot
/dev/mapper/fedora_localhost-home 169G 2.0G 158G 2% /home
tmpfs 1.6G 36K 1.6G 1% /run/user/1000
/dev/mapper/lvm--5T-lvm0 9.1T 1.9T 6.8T 22% /media/storage
tmpfs 1.6G 0 1.6G 0% /run/user/0
