Raid 5 lost

ari0

Raid 5 lost

Post by ari0 »

Hi,
edit: System: Linux Mint 18.3, 1x SSD with root, 3x HDD for RAID 5
I had a RAID 5 set up on top of three LVM volumes containing one disk each (yes, intentionally, as I originally had two disks in each volume). I replaced each LVM volume with the physical disk behind it (steps: degrade the array by removing one LVM disk; remove the VG and LV on that disk; re-add partition 1 of the physical disk to the RAID; wait until the re-sync completes; repeat for the next disk). Everything in the RAID worked fine until I had to reboot for the first time.
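For reference, the per-disk sequence was roughly the following (the VG/LV names are placeholders, and /dev/md127 is the array name I use later in this thread):

mdadm /dev/md127 --fail /dev/vg1/lv1 --remove /dev/vg1/lv1   # degrade the array by dropping the LVM member
lvremove vg1/lv1; vgremove vg1                               # dissolve the LVM layer on that disk
mdadm /dev/md127 --add /dev/sdb1                             # re-add partition 1 of the physical disk
watch cat /proc/mdstat                                       # wait for the re-sync to complete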

After the reboot, the RAID 5 is gone (i.e. the /dev/md entry is missing and I have found no way so far to get it back), and partition 1 of each of the three former RAID disks shows up with FSTYPE LVM2_member, not linux_raid_member as I would have expected. Also, there is no superblock to be found on any of the disks or their first partitions. I fail to understand how I was able to add the partitions to the RAID this way at all, and why it worked.
/etc/mdadm/mdadm.conf still shows the RAID correctly, but without superblocks there is no chance of assembling it back with mdadm --assemble. cat /proc/mdstat is also empty.
I am still shying away from re-creating the RAID 5 with mdadm --create for fear of destroying the data on it. Before I go down that road as a last resort, I am hoping for useful pointers from any of you to get this fixed instead.
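For completeness, these are the checks behind the statements above (device names match the assemble command quoted further down):

lsblk -f /dev/sdb1 /dev/sdc1 /dev/sdd1   # FSTYPE column shows LVM2_member, not linux_raid_member
mdadm --examine /dev/sdb1                # reports no md superblock on the partition
cat /proc/mdstat                         # lists no arrays at all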

Questions:
  • Shouldn't mdadm --add have set up the disk so that it shows up as a linux_raid_member? If not, shouldn't it have refused to accept the disk?
  • Why is there no superblock to be found?
  • Any ideas how to get the RAID 5 back without destroying the data? The data must still be there, since everything worked fine before the reboot.
Your expertise is highly appreciated. Thank you.
Ari
catweazel

Re: Raid 5 lost

Post by catweazel »

ari0 wrote: Tue Aug 07, 2018 5:50 pm Your expertise is highly appreciated. Thank you.
I gave up software RAID after a major data loss and invested in a couple of cheap (US$50) 6805T hardware RAID cards from FleaBay. That said, I see you have tried mdadm --assemble, but you don't mention whether you tried this:

mdadm --assemble --scan

The command should scan all unused volumes for md metadata and assemble the RAID array based on what it finds.
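If the scan stays silent, a verbose run should at least show which devices mdadm examined and why it skipped them:

mdadm --assemble --scan --verbose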
"There is, ultimately, only one truth -- cogito, ergo sum -- everything else is an assumption." - Me, my swansong.
ari0

Re: Raid 5 lost

Post by ari0 »

catweazel wrote: Wed Aug 08, 2018 1:31 am if you tried this:
mdadm --assemble --scan
Thank you. I tried both mdadm --assemble --scan and mdadm --assemble /dev/md127 /dev/sdb1 /dev/sdc1 /dev/sdd1, without success. I understand that any of these variations of mdadm will look for the superblock, which is missing.
catweazel

Re: Raid 5 lost

Post by catweazel »

ari0 wrote: Wed Aug 08, 2018 3:04 am
catweazel wrote: Wed Aug 08, 2018 1:31 am if you tried this:
mdadm --assemble --scan
Thank you. I tried both mdadm --assemble --scan and mdadm --assemble /dev/md127 /dev/sdb1 /dev/sdc1 /dev/sdd1, without success. I understand that any of these variations of mdadm will look for the superblock, which is missing.
Unfortunately all I can suggest now is a few links:

http://paregov.net/blog/21-linux/25-how ... -is-zeroed
https://ubuntuforums.org/archive/index. ... 47275.html

The second link discusses assembling the array in degraded mode.
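In essence (a sketch only, reusing the device names from your earlier post), forcing a degraded assembly from a subset of members looks like this:

mdadm --stop /dev/md127                                          # clear any half-assembled state
mdadm --assemble --force --run /dev/md127 /dev/sdb1 /dev/sdc1    # force-start with two of the three members

Without superblocks this will still fail, but it is the technique the second link walks through.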

Good luck.
"There is, ultimately, only one truth -- cogito, ergo sum -- everything else is an assumption." - Me, my swansong.
lazarus

Re: Raid 5 lost

Post by lazarus »

One thing that might be worth trying, unlikely though it seems... check the raw device(s) for the superblock, not the partition(s).

i.e. mdadm --examine /dev/sdb instead of mdadm --examine /dev/sdb1

If this shows a superblock, you at least know how it was running before reboot. :roll:
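To sweep the raw devices and their first partitions in one pass (device names assumed from earlier in the thread):

for d in /dev/sd[bcd] /dev/sd[bcd]1; do
    echo "== $d =="
    mdadm --examine "$d"
done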

I know this can happen, as I've somehow managed it myself on one member of an array. Don't ask me how...