1. On power-up, sometimes my system boots to the login screen with no problems; other times there's a little disk activity, then a blank screen, no matter how long I wait.
2. If I use <Ctrl> <Alt> <Del> from the blank screen, the system reboots and presents me with the GRUB menu, offering a normal boot (the default), boot into recovery mode, and memtest. I then select the default.
3. The system sometimes boots to the login screen but, more worryingly, sometimes reports that one or more of the partitions (sdb1, sdc1, sdd1, sde1, sdf1) in the RAID array is faulty.
4. If the system reports problems with the RAID array, I select "N" to decline booting with the degraded RAID array; this drops me to a BusyBox prompt. <Ctrl> <Alt> <Del> then reboots the system, which *usually* gets me to a login screen OK.
After this performance, I find that the RAID array has switched from /dev/md0 to /dev/md127; according to the advice I received in the earlier exchange of posts about waking up my RAID array, this happens because mdadm gets confused by the content of /etc/mdadm/mdadm.conf. I've copied the content of this file below.
Code: Select all
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 UUID=f1802834-b239-4bd0-a805-d43954b77f95
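One thing I've noticed about that ARRAY line (and I may be misreading the mdadm docs, so please correct me): the UUID is in the dash-separated style that blkid reports, whereas mdadm superblock UUIDs are four 8-hex-digit words joined by colons. If that's right, the conf entry can never match the running array, which would explain why it falls back to /dev/md127. A quick sketch of the mismatch, using the two UUIDs from this post:

```shell
# UUID from the ARRAY line in my /etc/mdadm/mdadm.conf:
conf_uuid='f1802834-b239-4bd0-a805-d43954b77f95'    # dash-separated (blkid style)
# UUID that update-initramfs says the running array actually has:
array_uuid='66698c52:4c458d41:87b74864:110faf9b'    # colon-separated (mdadm style)

# An mdadm superblock UUID is four 8-hex-digit words joined by colons,
# so a dash-separated blkid UUID in the conf can never match one.
case "$conf_uuid" in
  *:*) echo "conf UUID is in mdadm format" ;;
  *-*) echo "conf UUID looks like a blkid UUID - mdadm will not match it" ;;
esac
```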
In line with that advice, I checked the partition UUIDs with:
Code: Select all
sudo blkid
The most puzzling thing is that the output from sudo update-initramfs -u is:
Code: Select all
W: mdadm: the array /dev/md0 with UUID 66698c52:4c458d41:87b74864:110faf9b
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.
For comparison, here is the output of /usr/share/mdadm/mkconf:
Code: Select all
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md/TheBigOne metadata=1.2 UUID=66698c52:4c458d41:87b74864:110faf9b name=:TheBigOne
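If I've read the warning correctly, the fix would be something along these lines: replace the stale ARRAY line with what mkconf generates, then rebuild the initramfs so the boot-time copy of the conf matches. This is just a sketch of what I think is being asked for (I haven't run it yet, so a sanity check would be welcome):

```shell
# Keep a backup of the current conf before touching it
sudo cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
# Regenerate the conf from the arrays mdadm can actually see
sudo sh -c '/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf'
# Refresh the copy of mdadm.conf embedded in the initramfs
sudo update-initramfs -u
```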
I'm not sure whether the booting problem is linked to these peculiarities with mdadm, but I have my suspicions. I should add that I doubt it's a hardware problem: when I was running Ubuntu 11.04, I had no problems with erratic booting or with the array randomly changing from /dev/md0 to /dev/md127.
Can anyone point me to which log files I should check to track down what's happening with the boot process, as a starter, please?
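So far the only places I've thought to look are dmesg, /var/log/syslog and /var/log/kern.log; assuming those are the right files on Ubuntu, this is the sort of filter I'd use (the sample line is invented, just to check the regex does what I expect):

```shell
# A made-up log line, to confirm the filter matches md/RAID messages:
sample='kernel: md127: raid array example message'
echo "$sample" | grep -cE 'md[0-9]+|mdadm|raid'    # prints 1 (one matching line)

# The real checks (file paths assumed for Ubuntu):
# dmesg | grep -iE 'md[0-9]+|mdadm|raid'
# grep -iE 'md[0-9]+|mdadm|raid' /var/log/syslog /var/log/kern.log
```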
Any help would be much appreciated...
Ian