[HOWTO] Install Mint 21.x / Ubuntu 22.04 onto an mdadm software RAID

rene
Level 20
Posts: 12240
Joined: Sun Mar 27, 2016 6:58 pm

[HOWTO] Install Mint 21.x / Ubuntu 22.04 onto an mdadm software RAID

Post by rene »

Installing Mint 21.x (or its underlying Ubuntu 22.04) onto an mdadm software RAID is not possible through the standard installer alone, but is easily done after creating the RAID manually. In short you...

1. Partition your drives to have (on an EFI system, the ESP and) /boot outside of the RAID, and create the RAID manually.
2. Point the installer to the created partitions/filesystems via its "Something else" option.
3. Before rebooting, chroot into the just-installed system to install mdadm and have it regenerate the initramfs.

In more detail you...

1a. Partition your drives.

We will assume two identical SATA drives to be configured as RAID0 on a UEFI/GPT system. For NVMe you simply replace, say, /dev/sda and /dev/sdb with /dev/nvme0n1 and /dev/nvme1n1 respectively; for RAID1 rather than RAID0 you use --level=1 rather than 0 in the below; and for a legacy rather than UEFI install you just ignore everything about the ESP.
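If you're not sure which device names your drives got, a quick look with lsblk from the live session makes that clear (just an identification aid, not part of the procedure itself):

Code: Select all

lsblk -d -o NAME,SIZE,MODEL,SERIAL,TRAN   # -d: list whole drives only, no partitions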

It should however be noted that it is generally fairly important to use identical drives for any type of RAID. Timing differences between devices that differ at the hardware or even just the firmware level can otherwise cause drives to frequently drop out of the array (as a set of drives RAIDed together is called) and leave you dealing with frequent rebuilds. If it works, it works, but identical drives with identical firmware are highly recommended -- and with respect to a drive's error-recovery time it is in fact advisable to use drives specifically intended to be RAIDed up, such as, in the case of HDDs, WD's Red line. For SSDs this is probably less important.
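If you want to check whether an HDD supports the configurable error-recovery timeout alluded to here (SCT ERC, what WD markets as TLER), smartctl can report it. This is only an optional check; smartmontools may first need installing in the live session:

Code: Select all

apt install smartmontools        # in the live session's root shell, if not already present
smartctl -l scterc /dev/sda      # reports the SCT Error Recovery Control setting, if supported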

Also, before we get going, a small interlude on chunk sizes. The below uses a 16K chunk size for a Linux system partition, where the chunk size is defined as the I/O size to the individual drives in the array. I.e., for RAID0 specifically, a chunk size of 16K means that the first 16K of an I/O request is written to / read from drive 1, the next 16K to/from drive 2, and so on. You don't want this chunk size to be too small, so as to take advantage of sequential I/O being faster than random I/O, nor too big, lest you effectively nullify the RAID0 for a large percentage of I/O. On an SSD the absence of seek times means that the mentioned sequential/random difference is much less pronounced than on an HDD, and you'd lean towards smaller chunk sizes, at least for RAID0. On HDDs, and especially HDDs used in a "data storage" rather than "system" capacity, i.e., large and sequential rather than small and random I/O, you'd lean towards bigger.
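As a toy illustration of that striping (nothing you need to run; it is just the arithmetic of the example), this computes which drive a given byte offset lands on in a 2-drive RAID0 with 16K chunks:

Code: Select all

chunk=$((16 * 1024))               # 16K chunk size
drives=2                           # two-drive RAID0
offset=$((40 * 1024))              # e.g. byte 40K into the array
echo "drive $(( (offset / chunk) % drives ))"   # chunk index modulo drive count: chunk 2 -> drive 0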

My personal rule of thumb is a 16K chunk size for SSD and "system" type usage and 128K for HDD and "data storage" type usage, but beware that this subject can be bike-shedded for the rest of your natural existence. Some people swear you need at least 64K; others find little use on SSD for anything other than 4K. In the end it boils down to individual usage patterns and little can be said generally -- other than, seemingly, what I am saying here, which is that I find 16K a nice size for an SSD-based system-type RAID0 and 128K or even 256K for an HDD-based storage-type one (RAID1 is for scaredy-cats and I've little opinion on it...)

1b. All that said -- partition your drives.

Let's say we want to RAID up /dev/sda and /dev/sdb into a RAID0 for the system as such. After booting the Live/Installer system, open a terminal and type sudo -i to get a root shell. On a UEFI system we will need to split off the EFI System Partition (ESP), and on both UEFI and legacy systems we will need to split off /boot. You can of course, should you care to, also split off /home or whatever else; you'd then create the partitions here and in the next step (potentially) combine those too into an e.g. "home" MD device. We'll do just one big combined root-and-home RAID partition.

Use e.g. gparted if you're more comfortable with it, but I'll show things here through fdisk: we create, in that order, sda1 as a 512M ESP, sda2 as a (say) 7.5G /boot, and the rest as our to-be-RAIDed partition. The RAID partition on /dev/sdb needs to be the same size, and 7.5G for /boot is chosen so that, together with the 512M ESP, it adds up to 8G -- leaving an 8G /dev/sdb1 that we could, and here will, use as a split-off swap partition. Feel free to be as creative as you care to, as long as the to-be-RAIDed partitions are the same size (and your /boot is, I'd say, no less than 5G).

For the example, in fdisk /dev/sda you'd use

1. g to create a new GPT table.

2. n, accept partition number 1, accept starting sector 2048, type +512M for the size, and
3. t, accept partition 1, type uefi to change the type to "EFI System".

4. n, accept partition number 2, accept the starting sector, type +7.5G for the size.

5. n, accept partition number 3, accept the starting sector, accept the size to the end of the disk, and
6. t, accept partition 3, type raid to change the type to "Linux RAID".

Pressing p to show the table, this would (on a 25G example drive) have you end up with

Code: Select all

Device        Start      End  Sectors  Size Type
/dev/sda1      2048  1050623  1048576  512M EFI System
/dev/sda2   1050624 16779263 15728640  7.5G Linux filesystem
/dev/sda3  16779264 52428766 35649503   17G Linux RAID
Use w to write it out and quit, and similarly use fdisk /dev/sdb (with swap as the GPT type for the swap partition) to end up with

Code: Select all

Device        Start      End  Sectors Size Type
/dev/sdb1      2048 16779263 16777216   8G Linux swap
/dev/sdb2  16779264 52428766 35649503  17G Linux RAID
As mentioned: if you are installing a BIOS/MBR rather than a UEFI/GPT system, just don't create an ESP, and in that case use MBR type fd, "Linux raid auto", or perhaps better, da, "Non-FS data", for the RAID partitions in the t steps.
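If you'd rather script the partitioning than answer fdisk's prompts, roughly the same layout can be created non-interactively with sfdisk. This is only a sketch of the example above, using sfdisk's shortcut type letters (U = EFI System, L = Linux filesystem, R = Linux RAID, S = swap); double-check the device names before running it, since it overwrites the partition tables:

Code: Select all

# WARNING: overwrites the partition tables on /dev/sda and /dev/sdb
sfdisk /dev/sda <<'EOF'
label: gpt
,512MiB,U
,7680MiB,L
,,R
EOF

sfdisk /dev/sdb <<'EOF'
label: gpt
,8GiB,S
,,R
EOF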

1c. Format the created partitions and create/format the array.

The ESP has to be a FAT filesystem, /boot can be any filesystem supported by Grub and is normally simply ext4, and swap is, well, swap:

1. mkfs -t fat -n EFI /dev/sda1
2. mkfs -t ext4 -L boot /dev/sda2
3. mkswap -L swap /dev/sdb1

We now first have to install mdadm with apt install mdadm and then create the array with

4.

Code: Select all

mdadm --create --level=0 --metadata=1.2 --homehost=any --chunk=16K --raid-devices=2 /dev/md/root /dev/sda3 /dev/sdb2
Use of course e.g. /dev/sda2 and /dev/sdb2 if you are installing a BIOS/MBR system and those are your RAID partitions. --chunk=16K was commented on above, and for a two-drive RAID1 you'd use --level=1. The system should tell you mdadm: array /dev/md/root started.

5. mkfs -t ext4 -L root -b 4096 -E stride=4,stripe-width=8 /dev/md/root

The stride and stripe-width parameters are not essential but optimize the ext4 filesystem layout for the underlying two-drive RAID. With a 4K ext4 block size (the default, and here set explicitly via the -b parameter), "stride" should be taken as chunk-size/block-size = 16K/4K = 4, and "stripe-width" as stride * number-of-data-drives = 4 * 2 = 8.
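Also not essential, but if you want to confirm that ext4 recorded these values you can read them back with tune2fs (a quick check; the exact field names may vary a little between e2fsprogs versions):

Code: Select all

tune2fs -l /dev/md/root | grep -i -e 'raid stride' -e 'stripe width'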

At this point preparation is done. Check which device node the MD device got with ls -l /dev/md/root (it will show /dev/md127 or similar) and keep that shell open for later; we'll need it again after the installer finishes.
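Optionally, before starting the installer you can have a quick look at the array to confirm the level and chunk size chosen above:

Code: Select all

cat /proc/mdstat               # should list md127 (or similar) as an active raid0
mdadm --detail /dev/md/root    # shows "Raid Level : raid0" and "Chunk Size : 16K"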

2. Start the installer.

At the "Installation type" screen you choose "Something else" and place / on the MD device while electing to not reformat it again. Place /boot on in this example /dev/sda2 (and also do not format it again: we already did so above). The ESP and swap-partition will be handled automatically. The boot-loader location should be set and remain set to /dev/sda both for UEFI/GPT and BIOS/MBR (i.e., not explicitly /dev/sda1 in former case).

When you click "Continue" the installer will warn about the root filesystem not being formatted: this is exactly right. In MBR mode the installer may also complain about a missing EFI partition; this is a bug in the new Ubuntu installer introduced with Mint 21 (and, if I'm not mistaken, Ubuntu 21.10) and has nothing to do with our RAID. Just follow the normal installation procedure and at the end elect to "Continue testing", since we still need to install mdadm onto the just-installed system manually.

3. Install mdadm onto the just installed system and (automatically) regenerate the initramfs.

For this we need to chroot into the system, i.e., with the device names as in this example and from that same root shell as before:

Code: Select all

# mount /dev/md/root /mnt
# mount /dev/sda2 /mnt/boot
# mount /dev/sda1 /mnt/boot/efi
# mount --bind {,/mnt}/dev
# mount --bind {,/mnt}/dev/pts
# mount --bind {,/mnt}/proc
# mount --bind {,/mnt}/sys
# mount --bind {,/mnt}/sys/firmware/efivars
# mount --bind {,/mnt}/run
# chroot /mnt apt install mdadm
This should automatically create /mnt/etc/mdadm/mdadm.conf with the correct information and regenerate the initramfs.
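If you care to sanity-check the result before rebooting (optional, and assuming the example layout with /boot mounted at /mnt/boot), you can verify that the array is listed in the new system's mdadm.conf and that mdadm made it into the regenerated initramfs; if lsinitramfs isn't available in the live session, run that second check from inside the chroot instead:

Code: Select all

grep ^ARRAY /mnt/etc/mdadm/mdadm.conf                # should show an ARRAY line for the root array
lsinitramfs /mnt/boot/initrd.img-* | grep -c mdadm   # should be non-zero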

At this point you can reboot and should find yourself in your just-installed RAID install of Mint 21. Note that we labelled the ESP "EFI", /boot "boot", the root filesystem "root" and the swap partition "swap"; that is, if you care to, you can manually edit the system's /etc/fstab to mount them via those labels rather than via the UUIDs which the installer chose to use.
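If you do switch to labels, the relevant lines in the installed system's /etc/fstab could look roughly like the following. This is only a sketch: the installer's UUID-based lines work just as well, and the mount options shown are common defaults rather than necessarily what your installer wrote:

Code: Select all

# /etc/fstab, mounting by the labels chosen in this tutorial
LABEL=root  /          ext4  errors=remount-ro  0  1
LABEL=boot  /boot      ext4  defaults           0  2
LABEL=EFI   /boot/efi  vfat  umask=0077         0  1
LABEL=swap  none       swap  sw                 0  0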

This has been tested with Mint 21.1 Xfce.
Last edited by rene on Sat Feb 11, 2023 7:21 am, edited 2 times in total.
SMG
Level 25
Posts: 31333
Joined: Sun Jul 26, 2020 6:15 pm
Location: USA

Re: [HOWTO] Install Mint 21.x / Ubuntu 22.04 onto an mdadm software RAID

Post by SMG »

Moderator note: GeoffinOz's implementation of the tutorial can now be found here: Installing Mint 21.x onto an mdadm software RAID5.
ameer
Level 1
Level 1
Posts: 1
Joined: Mon Mar 18, 2024 9:39 pm

Re: [HOWTO] Install Mint 21.x / Ubuntu 22.04 onto an mdadm software RAID

Post by ameer »

Thank you for this post. It was very thorough, clear and concise, all at the same time!

Everything worked out smoothly except for the one command you had near the end:
mount --bind {,/mnt}/sys/firmware/efivars
It gave the error:
mount: /mnt/sys/firmware/efivars: mount point does not exist.

Looking at the directory, I see entries for both /sys/firmware/efi/vars and /sys/firmware/efi/efivars but no /sys/firmware/efivars.
After a quick Google search, it seems people mount /sys/firmware/efi/efivars instead.

So, I modified the line and ran this instead:
mount --bind {,/mnt}/sys/firmware/efi/efivars
Everything else I kept the same. It seems to have worked well. Was I in the right with my logic?