Mint 21 upgrade from 20.3 stalls - problem with mdadm [SOLVED]


Mint 21 upgrade from 20.3 stalls - problem with mdadm [SOLVED]

Post by Prufrock »

My attempts to carry out the latest upgrade from 20.3 to 21 keep stalling with an error which seems to be caused by mdadm. I have had complaints about mdadm whenever I have carried out software upgrades for Mint 20, but have been unable to work out how to address the apparent issue. As the RAID system continues to work fine, I'm afraid I've previously just ignored those messages.

The RAID consists of 2 x 2TB drives containing my documents. Linux (the only OS) is on a separate SSD, and there are a couple of additional drives operating as stand-alone storage.

Here's what I hope is the relevant section of the Terminal messages. Can anyone help with a way to address the error messages and complete the upgrade?

Code: Select all

Setting up mdadm (4.1-5ubuntu1.2) ...
dpkg: error processing package mdadm (--configure):
 installed mdadm package post-installation script subprocess returned error exit status 20
Errors were encountered while processing:
 mdadm
E: Sub-process /usr/bin/dpkg returned an error code (1)
Error - Return code: 100
Error detected on try #5...
dpkg --configure -a
Setting up mdadm (4.1-5ubuntu1.2) ...
dpkg: error processing package mdadm (--configure):
 installed mdadm package post-installation script subprocess returned error exit status 20
Errors were encountered while processing:
 mdadm
Error - Return code: 1
DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get install -fyq
Reading package lists...
Building dependency tree...
Reading state information...
0 upgraded, 0 newly installed, 0 to remove and 1600 not upgraded.
    1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up mdadm (4.1-5ubuntu1.2) ...
dpkg: error processing package mdadm (--configure):
 installed mdadm package post-installation script subprocess returned error exit status 20
Errors were encountered while processing:
 mdadm
E: Sub-process /usr/bin/dpkg returned an error code (1)
Error - Return code: 100
--------------------------
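One way to see which step of the post-installation script is actually failing would presumably be to re-run it with shell tracing (a sketch only, assuming the stock Debian location of the maintainer script):

Code: Select all

# Re-run the failing maintainer script with tracing to find the step
# that returns exit status 20:
sudo sh -x /var/lib/dpkg/info/mdadm.postinst configure 2>&1 | tail -n 20
# Compare the recorded array configuration against what the kernel sees:
cat /etc/mdadm/mdadm.conf
cat /proc/mdstat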

Here's some system information:

Code: Select all

Kernel: 5.4.0-144-generic x86_64 bits: 64 compiler: gcc v: 9.4.0 Desktop: Cinnamon 5.2.7
    tk: GTK 3.24.20 wm: muffin dm: LightDM Distro: Linux Mint 20.3 Una base: Ubuntu 20.04 focal
Machine: Type: Desktop System: Gigabyte product: N/A v: N/A serial: <superuser required> Chassis: type: 3
    serial: <superuser required>
  Mobo: Gigabyte model: 990XA-UD3 v: x.x serial: <superuser required> UEFI: American Megatrends
    v: FD date: 02/04/2013
CPU:  Info: quad core model: AMD Phenom II X4 955 bits: 64 type: MCP arch: K10 rev: 3 cache:
    L1: 512 KiB L2: 2 MiB L3: 6 MiB
  Speed (MHz): avg: 1450 high: 2100 min/max: 800/3200 boost: disabled cores: 1: 800 2: 2100
    3: 800 4: 2100 bogomips: 25717
  Flags: ht lm nx pae sse sse2 sse3 sse4a svm
RAID:
  Device-1: md0 type: mdraid level: mirror status: active size: 1.82 TiB
  Info: report: 2/2 UU blocks: 1953382400 chunk-size: N/A super-blocks: 1.2
  Components: Online: 0: sdb1 1: sdc1
Drives:
  Local Storage: total: 4.72 TiB used: 1.9 TiB (40.2%)
  ID-1: /dev/sda vendor: Samsung model: SSD 750 EVO 120GB size: 111.79 GiB speed: 6.0 Gb/s
    serial: <filter>
  ID-2: /dev/sdb vendor: Toshiba model: DT01ACA200 size: 1.82 TiB speed: 6.0 Gb/s
    serial: <filter>
  ID-3: /dev/sdc vendor: Western Digital model: WD20EZRX-00D8PB0 size: 1.82 TiB speed: 6.0 Gb/s
    serial: <filter>
  ID-4: /dev/sdd vendor: Samsung model: HD103SJ size: 931.51 GiB speed: 3.0 Gb/s
    serial: <filter>
  ID-5: /dev/sde vendor: Kingston model: SV100S264G size: 59.63 GiB speed: <unknown>
    serial: <filter>
Partition:
  ID-1: / size: 108.98 GiB used: 35.49 GiB (32.6%) fs: ext4 dev: /dev/sda2
  ID-2: /boot/efi size: 511 MiB used: 6.1 MiB (1.2%) fs: vfat dev: /dev/sda1
  ID-3: /home size: 1.79 TiB used: 1.4 TiB (78.1%) fs: ext4 dev: /dev/md0
Swap:
  ID-1: swap-1 type: file size: 2 GiB used: 1.41 GiB (70.3%) priority: -2 file: /swapfile
Thanks for any advice!

Re: Mint 21 upgrade from 20.3 stalls - problem with mdadm [SOLVED]

Post by Prufrock »

Quick update to say that I've managed to overcome this by repeatedly running the Update to Mint 21 process. I realised that the upgrade was in fact partially installed when I saw a long list of software updates, and System Information reporting that I was on Mint 21, even though the upgrade was obviously far from complete.

There are still a few matters to address. mdadm still says:
installed mdadm package post-installation script subprocess returned error exit status 20
But at least the system is running and I can search for answers to what I hope are minor issues.

Thanks for reading!

Re: Mint 21 upgrade from 20.3 stalls - problem with mdadm

Post by Jo-con-Ël »

Post back the results of the following commands:

Code: Select all

inxi -PDRxxx
lsblk -f
cat /etc/fstab
dpkg -l | grep 'raid\|mdadm'
journalctl -xb | grep raid
It looks like a BIOS RAID, and most probably you also need mdadm as a dependency of the mdraid modules/plugin installed by the kernel.
RAID:
Device-1: md0 type: mdraid level: mirror status: active size: 1.82 TiB
Info: report: 2/2 UU blocks: 1953382400 chunk-size: N/A super-blocks: 1.2
Components: Online: 0: sdb1 1: sdc1
I do not use mdadm to manage my hardware RAID (I think it is not needed at all in my case); mdraid is enough. But because of several errors in the results of earlier commands I installed libblockdev-mdraid2, as it was needed by udisks, and mdadm was installed as a dependency.
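If you want to confirm what is pulling mdadm in on your system, a quick read-only check (a sketch; neither command changes anything) is:

Code: Select all

# List installed packages that depend on mdadm:
apt-cache rdepends --installed mdadm
# Show what the udisks mdraid plugin itself depends on:
dpkg -s libblockdev-mdraid2 | grep -i depends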

Re: Mint 21 upgrade from 20.3 stalls - problem with mdadm [SOLVED]

Post by Prufrock »

Thanks for the reply.

I didn't think this was set up as a hardware RAID - all the setup was done in Mint. I'm out of my depth in most of this detail, I'm afraid!
Here's the output you requested:

Code: Select all

$ inxi -PDRxxx
RAID:
  Supported mdraid levels: raid1 linear multipath raid0 raid6 raid5 raid4
  raid10
  Device-1: md0 type: mdraid level: mirror status: active size: 1.82 TiB
  Info: report: 2/2 UU blocks: 1953382400 chunk-size: N/A super-blocks: 1.2
  Components: Online: 0: sdc1 1: sdd1
Drives:
  Local Storage: total: 4.72 TiB used: 1.92 TiB (40.7%)
  ID-1: /dev/sda vendor: Kingston model: SV100S264G size: 59.63 GiB
    speed: <unknown> type: SSD serial: 64GB80097512 rev: 225a scheme: MBR
  ID-2: /dev/sdb vendor: Samsung model: SSD 750 EVO 120GB size: 111.79 GiB
    speed: 6.0 Gb/s type: SSD serial: S33MNB0H630557W rev: 1B6Q scheme: GPT
  ID-3: /dev/sdc vendor: Toshiba model: DT01ACA200 size: 1.82 TiB
    speed: 6.0 Gb/s type: HDD rpm: 7200 serial: 86U76GYGS rev: ABB0 scheme: GPT
  ID-4: /dev/sdd vendor: Western Digital model: WD20EZRX-00D8PB0
    size: 1.82 TiB speed: 6.0 Gb/s type: HDD rpm: 5400 serial: WD-WCC4M0323580
    rev: 0A80 scheme: GPT
  ID-5: /dev/sde vendor: Samsung model: HD103SJ size: 931.51 GiB
    speed: 3.0 Gb/s type: HDD rpm: 7200 serial: S246JD2B500729 rev: 0001
    scheme: GPT
Partition:
  ID-1: / size: 108.98 GiB used: 52.12 GiB (47.8%) fs: ext4 dev: /dev/sdb2
  ID-2: /boot/efi size: 511 MiB used: 6.1 MiB (1.2%) fs: vfat
    dev: /dev/sdb1
  ID-3: /home size: 1.79 TiB used: 1.4 TiB (78.3%) fs: ext4 dev: /dev/md0

Code: Select all

$ lsblk -f
NAME FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sda
└─sda1
     ext4   1.0   Timeshift
                        8279ee03-891f-46e4-9137-ee2aefa131f7
sdb
├─sdb1
│    vfat   FAT32       9991-EB10                             504.9M     1% /boot/efi
└─sdb2
     ext4   1.0         85116e87-c80c-4a17-96f8-5f4372b00f5e   51.3G    48% /
sdc
└─sdc1
     linux_ 1.2   Dickens:0
                        ae01a880-0785-0a50-86bc-f7a4575fd16d
  └─md0
     ext4   1.0         687be70b-f023-4e52-9791-63f40484f63d  305.5G    78% /home
sdd
└─sdd1
     linux_ 1.2   Dickens:0
                        ae01a880-0785-0a50-86bc-f7a4575fd16d
  └─md0
     ext4   1.0         687be70b-f023-4e52-9791-63f40484f63d  305.5G    78% /home
sde
└─sde1
     ext4   1.0         d73aaa32-6214-4c9c-af03-2edeecda3804    393G    52% /mnt/d73aaa32-6214-4c9c-af03-2edeecda3804
sdf
sdg
sdh
sdi
sdj
sr0

Code: Select all

$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sdb2 during installation
UUID=85116e87-c80c-4a17-96f8-5f4372b00f5e /               ext4    errors=remount-ro 0       1
# /boot/efi was on /dev/sdb1 during installation
UUID=9991-EB10  /boot/efi       vfat    umask=0077      0       1
/swapfile                                 none            swap    sw              0       0

#Locate /home on RAID1
# Next line is the version that worked on Mint 19.3 and AT FIRST on Mint 20
UUID=687be70b-f023-4e52-9791-63f40484f63d   /home    ext4          nodev,nosuid       0       2
/dev/disk/by-uuid/d73aaa32-6214-4c9c-af03-2edeecda3804 /mnt/d73aaa32-6214-4c9c-af03-2edeecda3804 auto nosuid,nodev,nofail,x-gvfs-show,x-gvfs-name=Documents2 0 0

Code: Select all

$ dpkg -l | grep 'raid\|mdadm'
ii  dmraid                                        1.0.0.rc16-10ubuntu2                       amd64        Device-Mapper Software RAID support tool
ii  libdmraid1.0.0.rc16:amd64                     1.0.0.rc16-10ubuntu2                       amd64        Device-Mapper Software RAID support tool - shared library
iF  mdadm                                         4.2-0ubuntu1                               amd64        Tool to administer Linux MD arrays (software RAID)

Code: Select all

tom@Dickens ~ $ journalctl -xb | grep raid
Mar 26 14:49:54 Dickens kernel: md/raid1:md0: active with 2 out of 2 mirrors
Mar 26 14:49:54 Dickens kernel: raid6: sse2x4   gen()  5520 MB/s
Mar 26 14:49:54 Dickens kernel: raid6: sse2x4   xor()  1948 MB/s
Mar 26 14:49:54 Dickens kernel: raid6: sse2x2   gen()  8967 MB/s
Mar 26 14:49:54 Dickens kernel: raid6: sse2x2   xor()  7574 MB/s
Mar 26 14:49:54 Dickens kernel: raid6: sse2x1   gen()  7177 MB/s
Mar 26 14:49:54 Dickens kernel: raid6: sse2x1   xor()  4934 MB/s
Mar 26 14:49:54 Dickens kernel: raid6: using algorithm sse2x2 gen() 8967 MB/s
Mar 26 14:49:54 Dickens kernel: raid6: .... xor() 7574 MB/s, rmw enabled
Mar 26 14:49:54 Dickens kernel: raid6: using intx1 recovery algorithm
Mar 26 14:49:55 Dickens udisksd[1278]: failed to load module mdraid: libbd_mdraid.so.2: cannot open shared object file: No such file or directory
Mar 26 14:49:55 Dickens udisksd[1278]: Failed to load the 'mdraid' libblockdev plugin

Re: Mint 21 upgrade from 20.3 stalls - problem with mdadm [SOLVED]

Post by zcot »

I'm a little weak on the deep details, and I'm not sure how much of this applies to your situation, but maybe some of it does:

Using that type of setup, the general fresh installation would go like this:
1. run the live session environment
2. install the mdadm stuff and build the disk design
3. run the installer to make the installation
4. finish the installer and stay in the live session environment
5. set up a chroot into the newly installed system,
6. install the missing mdadm package and rebuild the initramfs.
7. reboot into the fully functioning new system. :wink:

Otherwise you have to go back to a live session environment to chroot in, install the missing RAID package and rebuild the initramfs. A minimal sketch of that sequence is below.
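Something like this (a sketch only, assuming from your inxi output that / is on /dev/sdb2 and the EFI partition is /dev/sdb1; adjust the device names to your machine):

Code: Select all

# From the live session: mount the installed system and chroot into it
sudo mount /dev/sdb2 /mnt
sudo mount /dev/sdb1 /mnt/boot/efi
for d in dev dev/pts proc sys run; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt
# Inside the chroot: install mdadm and rebuild the init for all kernels
apt install mdadm
update-initramfs -u -k all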

Where does that stand for you? I don't know though. :D

Re: Mint 21 upgrade from 20.3 stalls - problem with mdadm [SOLVED]

Post by Jo-con-Ël »

Prufrock wrote: Sun Mar 26, 2023 4:51 pm I didn't think that this was set up as a hardware raid - all the setup was in Mint.
It looks like both fake RAID (dmraid) and pure software RAID (mdadm) are activated. Check the RAID setting in your BIOS setup to be sure, and post the results of the following commands:

Code: Select all

sudo dmraid -n | grep dirty 
sudo dmraid -s
sudo mdadm --misc --detail /dev/md[012]
cat /proc/mdstat
To work only with mdadm, maybe it is worth avoiding dmraid completely by adding nodmraid to the kernel line.

On the other hand, since everything is working apart from that failure with the mdadm package, you can run apt install libblockdev-mdraid2 and then sudo dpkg-reconfigure mdadm, as shown below.
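That is (dpkg-reconfigure simply re-runs the package's configuration step):

Code: Select all

sudo apt install libblockdev-mdraid2
sudo dpkg-reconfigure mdadm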

Re: Mint 21 upgrade from 20.3 stalls - problem with mdadm [SOLVED]

Post by Prufrock »

Hi: thanks for staying with this, and for your advice.
Here are the results from those commands, which appear to confirm there is no hardware RAID. Unfortunately, though, they don't seem to solve the issue with mdadm:

Code: Select all

tom@Dickens ~ $ sudo dmraid -n | grep dirty 
     
tom@Dickens ~ $ sudo dmraid -s
no raid disks

tom@Dickens ~ $ sudo mdadm --misc --detail /dev/md[012]
/dev/md0:
           Version : 1.2
     Creation Time : Sat Nov  5 12:33:37 2016
        Raid Level : raid1
        Array Size : 1953382400 (1862.89 GiB 2000.26 GB)
     Used Dev Size : 1953382400 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Mar 31 11:59:17 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : Dickens:0  (local to host Dickens)
              UUID : ae01a880:07850a50:86bcf7a4:575fd16d
            Events : 100699

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1

tom@Dickens ~ $ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdc1[0] sdd1[1]
      1953382400 blocks super 1.2 [2/2] [UU]
      bitmap: 3/15 pages [12KB], 65536KB chunk
unused devices: <none>

tom@Dickens ~ $ apt install libblockdev-mdraid2
       
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libfluidsynth2 libllvm12:i386 libofa0 libsrt1 libusrsctp1 linux-headers-5.4.0-139
  linux-headers-5.4.0-139-generic linux-headers-5.4.0-144 linux-headers-5.4.0-144-generic
  linux-image-5.4.0-139-generic linux-image-5.4.0-144-generic linux-modules-5.4.0-139-generic
  linux-modules-5.4.0-144-generic linux-modules-extra-5.4.0-139-generic
  linux-modules-extra-5.4.0-144-generic mint-backgrounds-una mint-backgrounds-vanessa
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  libbytesize-common libbytesize1
The following NEW packages will be installed
  libblockdev-mdraid2 libbytesize-common libbytesize1
0 to upgrade, 3 to newly install, 0 to remove and 7 not to upgrade.
1 not fully installed or removed.
Need to get 31.4 kB of archives.
After this operation, 207 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libbytesize-common all 2.6-1 [7,454 B]
Get:2 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libbytesize1 amd64 2.6-1 [12.1 kB]
Get:3 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libblockdev-mdraid2 amd64 2.26-1 [11.8 kB]
Fetched 31.4 kB in 1s (27.2 kB/s)              
Selecting previously unselected package libbytesize-common.
(Reading database ... 586779 files and directories currently installed.)
Preparing to unpack .../libbytesize-common_2.6-1_all.deb ...
Unpacking libbytesize-common (2.6-1) ...
Selecting previously unselected package libbytesize1:amd64.
Preparing to unpack .../libbytesize1_2.6-1_amd64.deb ...
Unpacking libbytesize1:amd64 (2.6-1) ...
Selecting previously unselected package libblockdev-mdraid2:amd64.
Preparing to unpack .../libblockdev-mdraid2_2.26-1_amd64.deb ...
Unpacking libblockdev-mdraid2:amd64 (2.26-1) ...
Setting up libbytesize-common (2.6-1) ...
Setting up libbytesize1:amd64 (2.6-1) ...
Setting up mdadm (4.2-0ubuntu1) ...
dpkg: error processing package mdadm (--configure):
 installed mdadm package post-installation script subprocess returned error exit status 128
dpkg: dependency problems prevent configuration of libblockdev-mdraid2:amd64:
 libblockdev-mdraid2:amd64 depends on mdadm (>= 3.3.2); however:
  Package mdadm is not configured yet.

dpkg: error processing package libblockdev-mdraid2:amd64 (--configure):
 dependency problems - leaving unconfigured
Processing triggers for libc-bin (2.35-0ubuntu3.1) ...
Errors were encountered while processing:
 mdadm
 libblockdev-mdraid2:amd64
E: Sub-process /usr/bin/dpkg returned an error code (1)

tom@Dickens ~ $ sudo dpkg-reconfigure mdadm
/usr/sbin/dpkg-reconfigure: mdadm is broken or not fully installed.

Re: Mint 21 upgrade from 20.3 stalls - problem with mdadm [SOLVED]

Post by Jo-con-Ël »

Try removing dmraid completely from the system

Code: Select all

apt purge dmraid libdmraid1.0.0.rc16:amd64 libblockdev-mdraid2:amd64
so that it can no longer interfere with the kernel.

If the mdadm package ends up half configured, try apt install -f.

Then add the nodmraid parameter to the kernel line. Edit the grub file:

Code: Select all

xed admin:///etc/default/grub
and add the parameter as follows:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nodmraid"
Save the changes, close the editor and run:

Code: Select all

sudo update-grub

Then reboot.
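After the reboot you can confirm the parameter was applied (a quick read-only check):

Code: Select all

cat /proc/cmdline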
If the mdadm package is still half configured, try apt install -f again. If that is not enough to solve the problem, you would need to remove and reinstall those packages; but mdadm is active and md0 is mounted on /home.... If you have already activated your root account (i.e. if you have run sudo passwd before) you can try it from a root session.

If the root account is not activated, you had better do it by booting a Linux live USB and chrooting in, as zcot said before. A sketch follows.
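A sketch of that remove-and-reinstall route, run as root from the live-USB chroot (dpkg's --force-remove-reinstreq option lets you remove a package that is stuck in the "requires reinstallation" state):

Code: Select all

# Remove the broken package even though dpkg marks it as needing reinstall:
dpkg --remove --force-remove-reinstreq mdadm
# Reinstall it and rebuild the initramfs so the array assembles at boot:
apt install mdadm
update-initramfs -u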