Hi!
I found a similar topic here:
viewtopic.php?f=46&t=264661
I have a second disk (not bootable) with one partition, mounted to /home. I bought another disk (so a 3rd in the computer), the same model and size. Would simply creating an array like in the link above work? Will the data be copied to the new empty disk, or should I clone sdb to sdc first? Do I have to change the mount configuration for /home (not using LVM)? Should I do this operation from a LiveCD boot?
Setup: LM 19.1; sda has the EFI and / partitions, sdb holds /home. I have a backup of sdb on an external disk.
Set Up RAID1 using (mdadm) BTRFS on Existing Drive with data [SOLVED]
Forum rules
Before you post read how to get help. Topics in this forum are automatically closed 6 months after creation.
Last edited by LockBot on Wed Dec 28, 2022 7:16 am, edited 3 times in total.
Reason: Topic automatically closed 6 months after creation. New replies are no longer allowed.
Re: Set Up RAID1 using mdadm on Existing Drive with data
The mdadm route needs a lot of patience and console-fu to set up... it's not been popularised as an easy GUI operation.
What happens when an mdadm drive fails? More patience, console-fu and naughty words.
They invented BTRFS to make the world a happier place...
o The volumes can be mounted independently.
o Working drives can rebuild the gap left by a failed drive.
o Scrub, balance and repair always operate while the machine is online.
Suggested recipe for a BTRFS RAID1
Here I am using /dev/sdb as the existing data drive and /dev/sdc as the new empty drive.
### Format your new empty drive as a BTRFS device.
sudo mkfs.btrfs /dev/sdc
### Mount the volume with the data in it.
mkdir -p old ; sudo mount -o ro /dev/sdb old
### Mount the new empty volume.
mkdir -p new ; sudo mount -o rw /dev/sdc new
### Duplicate your data in the BTRFS volume ("old/." rather than "old/*" so hidden files are copied too).
sudo cp -vau old/. new/ ; sync
### Unmount the old volume and attach it as a second BTRFS device (-f is needed because the drive still holds the old filesystem, which this destroys).
sudo umount old ; rmdir old ; sudo btrfs device add -f /dev/sdb new
### Balance the two drives so there is a copy of every file on each one.
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 new
###EOF###
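Right after the copy step (before adding the old drive to the BTRFS volume) it is worth confirming the copy really is complete. A minimal sketch, not from the original post, that tallies file counts and total bytes for each tree so the two results can be compared by eye; `old` and `new` are the recipe's mount points, and GNU find is assumed:

```shell
# Count regular files and sum their sizes under a directory tree.
count_and_bytes() {
    find "$1" -type f -printf '%s\n' | awk '{n++; b+=$1} END {printf "%d files, %d bytes\n", n, b}'
}
# Run on both mount points and compare the two lines:
#   count_and_bytes old
#   count_and_bytes new
```

If the two lines differ, re-run the cp before unmounting the old drive.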
Now...
+ Changes to the contents of new are duplicated, with one copy on each drive.
+ Storage errors are corrected automatically using the duplicate copy.
+ You can explicitly look for storage errors using
btrfs scrub start new
+ If sdc fails physically you can mount sdb and attach a new sdc.
+ If sdb fails physically you can mount sdc and attach a new sdb.
- BTRFS does not fail gracefully if it runs out of unallocated storage to do its tricks... avoid stuffing the RAID full of data... leave a few % of the space free.
- It's awkward if the OS tries to boot from a failed RAID component. The data is safe enough but you really need a Live Session boot to do repairs.
- There's a lot less field-testing history for BTRFS. It's not 'experimental' any more (it's in the Linux kernel) but some of the advanced features are in beta-test... they are still working on compressed folders, transparent encryption and RAID5 for example.
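The free-space caveat above is easy to automate. A small sketch, not from the original post, that warns when a mount point passes a usage threshold using plain `df`; note that for BTRFS the authoritative numbers come from `btrfs filesystem usage`, but `df` works as a rough guard:

```shell
# Print OK or WARNING depending on how full the filesystem at $1 is.
# Threshold defaults to 90 percent; pass a second argument to change it.
usage_warn() {
    pct=$(df --output=pcent "$1" | tail -n 1 | tr -dc '0-9')
    if [ "$pct" -ge "${2:-90}" ]; then
        echo "WARNING: $1 is ${pct}% full"
    else
        echo "OK: $1 is ${pct}% full"
    fi
}
usage_warn /
```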
Re: Set Up RAID1 using mdadm on Existing Drive with data
Thank you for the thorough reply. Since this is my working laptop I will perform the operation when time permits (preferably over the weekend). Will report back. First I need to read more about BTRFS, which I had already considered before the install, just to be sure I understand all the consequences.
Re: Set Up RAID1 using mdadm on Existing Drive with data
I hope you have backed up your data before trying this.
If I have helped you solve a problem, please add [SOLVED] to your first post title, it helps other users looking for help.
Regards,
Deepak
Mint 21.1 Cinnamon 64 bit with AMD A6 / 8GB
Mint 21.1 Cinnamon AMD Ryzen3500U/8gb
Re: Set Up RAID1 using mdadm on Existing Drive with data
Successfully completed RAID1 with the instructions above. I modified things a little since I needed this for the /home mount.
Reference (so I can quote directly):
[1] Above instructions by Mute Ant (90% of solution)
[2] https://samwedge.uk/posts/mounting-linu ... ent-drive/
[3] viewtopic.php?t=256398
My disks were:
/dev/sda -> mounted to /home
/dev/sdb -> new disk, unmounted
/dev/sdc -> boot disk with efi and root partition
1) Backup all data (not kidding - I had a few scares along the way; luckily I solved them all without using the backup, but you never know)
2) Prepare new disk:
Code:
sudo mkfs.btrfs /dev/sdb
3) Prepare new mount points:
Code:
sudo mkdir /home_new
sudo mkdir /home_old
4) Mount new disk to /home_new
I used the instructions from [2] since I wanted to be sure fstab would be updated correctly, but I wanted to leave this to a GUI. I used Disks (under Accessories) and mounted the new disk under /home_new, with the UUID under "Identify as".
5) Copy all files:
Code:
sudo cp -vaux /home/* /home_new ; sync
6) Edit /etc/fstab:
Code:
sudo xed /etc/fstab
My /etc/fstab looked like:
Code:
UUID=4f0272c8-7036-4aff-9864-73725556083c / ext4 errors=remount-ro 0 1
UUID=1A17-CC0C /boot/efi vfat umask=0077 0 1
UUID=8f729102-9ce2-410d-be26-586473b40ddb /home ext4 defaults 0 2
/swapfile none swap sw 0 0
UUID=b6998c2e-7be1-435a-bb06-6dc747b2d8c0 /home_new btrfs defaults 0 2
Change the mount points for both disks:
Code:
UUID=4f0272c8-7036-4aff-9864-73725556083c / ext4 errors=remount-ro 0 1
UUID=1A17-CC0C /boot/efi vfat umask=0077 0 1
UUID=8f729102-9ce2-410d-be26-586473b40ddb /home_old ext4 defaults 0 2
/swapfile none swap sw 0 0
UUID=b6998c2e-7be1-435a-bb06-6dc747b2d8c0 /home btrfs defaults 0 2
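A typo in /etc/fstab can leave the system unbootable, so it is worth sanity-checking the edit before rebooting. A rough sketch, not from the original post, that flags any non-comment line without the six expected fields:

```shell
# Report fstab lines that do not have exactly six whitespace-separated fields.
check_fstab() {
    awk 'NF && $1 !~ /^#/ && NF != 6 { print "suspect line " NR ": " $0 }' "$1"
}
# Usage: check_fstab /etc/fstab   (no output means every line looks well-formed)
```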
Reboot so /home will be mounted to the new disk. Check that everything is OK. This is the last point at which you can be sure the old data was correctly copied to the new disk (the old disk is now at /home_old, the new disk is hopefully at /home).
If everything is OK, unmount /dev/sda:
Code:
sudo umount /home_old
7) Prepare old disk
Using -f because the old disk already has data and a partition. Please check there are no other partitions on this disk, because they will be gone.
Code:
sudo btrfs device add -f /dev/sda /home
Create the RAID1 with both disks. This will take some time depending on the amount of data.
Code:
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /home
8) Check raid and make sure /etc/fstab is correct
Check the UUID of the new raid disk:
Code:
sudo btrfs filesystem show
In my case I got this info:
Code:
Label: none uuid: b6998c2e-7be1-435a-bb06-6dc747b2d8c0
Total devices 2 FS bytes used 123.35GiB
devid 1 size 465.76GiB used 124.03GiB path /dev/sdb
devid 2 size 465.76GiB used 124.03GiB path /dev/sda
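If you want to script this step, the array UUID can be pulled out of the `btrfs filesystem show` output with a little sed. A sketch based on the output format shown above:

```shell
# Extract the filesystem UUID from `btrfs filesystem show` output.
extract_uuid() {
    sed -n 's/.*uuid: \([0-9a-f-]*\).*/\1/p'
}
# Usage: sudo btrfs filesystem show | extract_uuid
```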
That means the RAID1 array got its UUID from the new disk. Edit /etc/fstab again and comment out or delete the old disk's line (I commented it out just in case I need the old disk's UUID any time in the future). I got this after the edit:
Code:
UUID=4f0272c8-7036-4aff-9864-73725556083c / ext4 errors=remount-ro 0 1
UUID=1A17-CC0C /boot/efi vfat umask=0077 0 1
# UUID=8f729102-9ce2-410d-be26-586473b40ddb /home_old ext4 defaults 0 2
/swapfile none swap sw 0 0
UUID=b6998c2e-7be1-435a-bb06-6dc747b2d8c0 /home btrfs defaults 0 2
Reboot and hope everything works as expected.