Edit: System: Linux Mint 18.3, 1x SSD with the root filesystem, 3x HDD for RAID 5
I had a RAID 5 set up on top of three LVMs, each containing a single disk (yes, intentionally; I originally had two disks in each LVM). I then replaced each LVM with the underlying physical disk: degrade the array by removing one LVM volume, remove the VG and LV on that disk, re-add partition 1 of the physical disk to the RAID, wait until the re-sync completes, and repeat for the next disk. Afterwards everything in the RAID worked fine until I had to reboot for the first time.
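For reference, the per-disk replacement sequence looked roughly like this (the md device, VG/LV and disk names are illustrative, not my exact ones):

    # repeated once per disk; /dev/md0, vg1/lv1 and /dev/sdb1 are placeholders
    mdadm /dev/md0 --fail /dev/vg1/lv1 --remove /dev/vg1/lv1   # degrade the array
    lvremove /dev/vg1/lv1                                      # remove the LV on that disk
    vgremove vg1                                               # remove its VG
    mdadm /dev/md0 --add /dev/sdb1                             # re-add partition 1 of the bare disk
    watch cat /proc/mdstat                                     # wait for the re-sync to finish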
After the reboot the RAID 5 is gone (i.e. the /dev/md entry is missing and I have found no way so far to get it back), and partition 1 of each of the three former RAID disks shows up with FSTYPE LVM2_member instead of linux_raid_member, as I would have expected. Also, no md superblock can be found on any of the disks/partitions. I fail to understand how I was able to add the partitions to the RAID in this state at all (and why it worked).
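This is roughly how I checked the current state (device names are placeholders for my three RAID disks):

    lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT       # partition 1 of all three disks: FSTYPE LVM2_member
    blkid /dev/sdb1 /dev/sdc1 /dev/sdd1        # likewise reports TYPE="LVM2_member"
    mdadm --examine /dev/sdb1                  # no md superblock detected, same on all three
    wipefs /dev/sdb1                           # lists remaining signatures; no linux_raid_member entry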
/etc/mdadm/mdadm.conf still lists the array correctly, but without superblocks there is no way to mdadm --assemble it again. cat /proc/mdstat is also empty.
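For completeness, the assemble attempts go nowhere, as expected without superblocks (array and device names are again placeholders):

    mdadm --assemble --scan -v                                # finds nothing to assemble
    mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1   # fails: no superblock on the members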
I am still shying away from re-running mdadm --create on the RAID 5 for fear of destroying the data on it. Before I go down that road as a last resort, I am hoping for useful pointers from any of you to get this fixed instead.
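If it really comes down to that last resort, my understanding is that the re-create would have to use --assume-clean and exactly the same level, device order, chunk size and metadata version as the original array, none of which I am certain of; the values below are assumptions, which is exactly why I hesitate:

    # LAST RESORT ONLY - wrong parameters or device order here will scramble the data;
    # chunk size, metadata version and device order are guesses, not known values
    mdadm --create /dev/md0 --assume-clean \
        --level=5 --raid-devices=3 --chunk=512 --metadata=1.2 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1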
- Shouldn't mdadm --add have set the disk up so that it shows up as linux_raid_member? If not, shouldn't it have refused to accept the disk?
- Why is there no superblock to be found?
- Any ideas how to get the RAID 5 back without destroying the data? The data must still be there, since everything worked fine before the reboot.