[RESLVD] I did something I wish I hadn't. Formatted mdadm..

5hades0f6rey

[RESLVD] I did something I wish I hadn't. Formatted mdadm..

Post by 5hades0f6rey »

I managed to do something I really hope I can undo. While trying to make an LMDE 201204 LiveUSB, I formatted my 2TB RAID1 array instead of the 2GB flash drive I intended, due to a scroll-wheel snafu in gparted. I used gparted because the flash drive had been partitioned and formatted (two ext2 partitions and one swap) for an Optware install for my dd-wrt flashed router, and I wanted to wipe it before creating the LiveUSB. Somewhere along the way I scrolled to md1 and didn't notice that I was formatting the 2TB hard drive instead of the 2GB flash drive. The similarity in size of the two devices threw me off. Yes, they differ by three orders of magnitude, but at a glance 1.82TB and 1.87GB look similar enough to click OK before realizing your error. The format failed, and things seemed to be fine with the file system on md1...

However, after a reboot I couldn't mount md1, and the file system and partition table that gparted had previously reported there appear to be gone. The odd thing is that md0 (a 500GB RAID1 array) doesn't appear in gparted at all, though its members do. Not sure if that fact is significant, but I thought I should mention it.

What I'm trying to figure out is whether I can somehow recover the data on md1, either by undoing the format, by restoring the ext4 file system, or by degrading the array and hoping I can recover the data from one member or the other. But I really don't know how to proceed. The information I've gleaned from my searches so far doesn't match my particular situation, and my lack of experience with data recovery on Linux makes me wary of attempting anything without guidance.

I'm gutted right now. I have (had?) quite a big chunk of data on that array and I really could use some help.
tdockery97

Re: I did something I wish I hadn't. Formatted mdadm contai

Post by tdockery97 »

I've never run into the problem, but I found this: http://superuser.com/questions/171673/h ... -harddrive

At least it's a place to start.
Mint Cinnamon 20.1
5hades0f6rey

Re: I did something I wish I hadn't. Formatted mdadm contai

Post by 5hades0f6rey »

Thanks, I'll look into the options that thread offers. But if anybody else has any bright ideas, I'd love to hear them!
mintybits

Re: I did something I wish I hadn't. Formatted mdadm contai

Post by mintybits »

Unless they have changed Gparted since I last looked, it does not recognize RAID arrays, only individual disks. So I imagine you formatted only one of the two md1 disks, in which case you should be able to start the array degraded using the good disk.

Presumably you interrupted the formatting shortly after it had begun?

What does cat /proc/mdstat show?

What happens when you examine the superblock data on each disk? Eg: sudo mdadm -E /dev/sdx, or if you partitioned the disks before creating the array: sudo mdadm -E /dev/sdxy.

If you are lucky, one of the md1 disks will be healthy. The other needs to be "failed", if mdadm hasn't already done this, and "removed". You can then reformat the disk so it is the same as the good one again and add it back to the array. The array should assemble ok with only one disk. See: http://linux.die.net/man/8/mdadm
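As a rough sketch of that fail/remove/re-add sequence (substitute the actual damaged member for /dev/sdX, and only after you have confirmed which disk is the bad one):

Code:

sudo mdadm /dev/md1 --fail /dev/sdX     # mark the damaged member as failed, if mdadm hasn't already
sudo mdadm /dev/md1 --remove /dev/sdX   # remove it from the array
sudo mdadm /dev/md1 --add /dev/sdX      # later: add the wiped disk back so it resyncs from the good one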
5hades0f6rey

Re: I did something I wish I hadn't. Formatted mdadm contai

Post by 5hades0f6rey »

mintybits wrote:Unless they have changed Gparted since I last looked, it does not recognize RAID arrays, only individual disks. So I imagine you formatted only one of the two md1 disks, in which case you should be able to start the array degraded using the good disk.
I thought it was odd that md1 appeared as a device in gparted and md0 did not, but I had no idea which case was 'normal'. I'm also very surprised gparted allowed me to format a mounted device... IIRC, gparted usually forces you to unmount a file system/device before allowing a format operation.
Presumably you interrupted the formatting shortly after it had begun?
No. The format started and the progress dialog showed an error message about the format having failed almost immediately. I wish I had recorded the actual message, as there was something about it, a specific term I just can't recall now, that I get the feeling was pretty significant. At least it probably would have been useful to know now. I was so stunned at my misfortune, and so busy trying to determine whether I had corrupted the data on md1, that I didn't think to copy the output or take a screenshot before closing the progress dialog.

I thought the data on md1 was fine because I was able to access the files on it... at least until I rebooted my system after testing the LMDE LiveUSB I'd made. That's when I noticed md1 wouldn't mount. Manual attempts resulted in this:

Code:

$ sudo mount /mnt/extra-storage/
mount: special device UUID=791e7baa-c6ae-47ab-bc6f-21a6f7f42648 does not exist
This always worked before, even though the UUID is different from what mdadm reports for both sdb and sdc (quoted below). In fact, I think after I did a re-install to correct a driver issue, I fiddled with my fstab to reflect the new UUID and md1 wouldn't mount. Reverting to the UUID above 'fixed' the problem. Not sure what that's all about, but since it worked, I left it alone.
What does cat /proc/mdstat show?

Code:

$ cat /proc/mdstat
Personalities : [raid1] 
md1 : active (auto-read-only) raid1 sdb[0] sdc[1]
      1953514496 blocks [2/2] [UU]
      
md0 : active raid1 sdd[0] sde[1]
      488386496 blocks [2/2] [UU]
      
unused devices: <none>
BTW, I had used mdadm to make md1 read-write for a time, not realizing that auto-read-only was a sort of failsafe state. I didn't run across search results that made that clear until after I had done it. I hope that didn't foul things up any further than they already were.
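(Side note for anyone reading later: I believe the array can be flipped back to read-only with something like the command below, though I'm not certain it restores the exact auto-read-only failsafe state, so treat it as a sketch rather than the exact step I took.)

Code:

sudo mdadm --readonly /dev/md1   # mark the array read-only again so nothing else writes to it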
What happens when you examine the superblock data on each disk? Eg: sudo mdadm -E /dev/sdx, or if you partitioned the disks before creating the array: sudo mdadm -E /dev/sdxy.
Both mdadm arrays were partitioned after they were created.

Code:

$ sudo mdadm -E /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 3143ee75:892cb7ae:ddd8d050:4859a911
  Creation Time : Mon Oct  4 20:08:10 2010
     Raid Level : raid1
  Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
     Array Size : 1953514496 (1863.02 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 126

    Update Time : Wed May  2 20:50:06 2012
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 9abbf3bd - correct
         Events : 1605282


      Number   Major   Minor   RaidDevice State
this     0       8       16        0      active sync   /dev/sdb

   0     0       8       16        0      active sync   /dev/sdb
   1     1       8       32        1      active sync   /dev/sdc

Code:

$ sudo mdadm -E /dev/sdc
/dev/sdc:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 3143ee75:892cb7ae:ddd8d050:4859a911
  Creation Time : Mon Oct  4 20:08:10 2010
     Raid Level : raid1
  Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
     Array Size : 1953514496 (1863.02 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 126

    Update Time : Wed May  2 20:50:06 2012
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 9abbf3cf - correct
         Events : 1605282


      Number   Major   Minor   RaidDevice State
this     1       8       32        1      active sync   /dev/sdc

   0     0       8       16        0      active sync   /dev/sdb
   1     1       8       32        1      active sync   /dev/sdc
If you are lucky, one of the md1 disks will be healthy. The other needs to be "failed", if mdadm hasn't already done this, and "removed". You can then reformat the disk so it is the same as the good one again and add it back to the array. The array should assemble ok with only one disk. See: http://linux.die.net/man/8/mdadm
I really do appreciate the help. I wish I had a better understanding of these more technical aspects of the GNU/Linux stack, but alas, I've primarily been a Windows user and only used Linux on and off until this last push to switch. That manual page might be helpful, but I'm out of my depth here. How do I determine which disk, if either, is "healthy"? Or, if neither is "healthy", whether either disk has recoverable data? And would attempting a reassembly do more harm? I'd like to not do any more damage...
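(For anyone following along, here is a sketch of some read-only checks that should show whether an ext4 superblock is still visible at the start of either member. It assumes the file system was created directly on md1, so with the 0.90 metadata sitting at the end of each disk the start of sdb/sdc mirrors the start of the array; none of these commands write to the disks.)

Code:

sudo file -s /dev/sdb        # report any filesystem signature found at the start of the device
sudo file -s /dev/sdc
sudo dumpe2fs -h /dev/sdb    # print the ext4 superblock header, if one is still present (read-only)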


The really ironic part of this situation is that I was just about to do a full backup of the data on that array... Now I'm not sure I'll get anything back.


FYI, I think I should mention something odd I noticed while running gparted just now. Before my incident, the four disks that comprise md0 (two 500GB drives, sdd & sde) and md1 (two 2TB drives, sdb & sdc) showed as "unallocated" in gparted, md1 was allocated as one large ext4 partition, and md0 simply didn't appear as a selectable device. Now, however, md0 appears as a selectable device and, when selected, is identified as /dev/md0p1 (I can provide screenshots if requested). Its members also now appear to have their own partition tables, each with an ext4 file system, and are identified as /dev/sdd1 and /dev/sde1 in the graphical representation of the partition table. What strikes me as very odd is that, as I said before, both arrays were created prior to being partitioned. Now (at least in the case of md0) they're behaving as if the disks were partitioned before the array was created.

What would you (or others) make of this?
mintybits

Re: I did something I wish I hadn't. Formatted mdadm contai

Post by mintybits »

Hi. I just have a minute to reply; more later. But it looks like md1 is fine, so that's great. It is just a mounting problem.
Type "mount" to see what is mounted where.
Try mounting md1 somewhere, for example create a directory in /tmp and mount md1:
mkdir /tmp/raidone
sudo mount /dev/md1 /tmp/raidone
and see if that works. If it does, then there is a problem with your fstab or initramfs. Please post "sudo blkid", "cat /etc/fstab", and "cat /etc/mdadm/mdadm.conf".
5hades0f6rey

Re: I did something I wish I hadn't. Formatted mdadm contai

Post by 5hades0f6rey »

mintybits wrote:Hi. I just have a minute to reply; more later. But it looks like md1 is fine, so that's great. It is just a mounting problem.
Type "mount" to see what is mounted where.
Try mounting md1 somewhere, for example create a directory in /tmp and mount md1:
mkdir /tmp/raidone
sudo mount /dev/md1 /tmp/raidone
and see if that works. If it does, then there is a problem with your fstab or initramfs. Please post "sudo blkid", "cat /etc/fstab", and "cat /etc/mdadm/mdadm.conf".
You must have missed this from my previous post, but my attempt at manually mounting md1 failed.

Code:

$ sudo mount /mnt/extra-storage/
mount: special device UUID=791e7baa-c6ae-47ab-bc6f-21a6f7f42648 does not exist
Here are the results of blkid:

Code:

$ blkid
/dev/sda1: UUID="f67c0ae5-07aa-4310-afcb-5637166b8a9b" TYPE="swap" 
/dev/sda2: UUID="c0a3f3e9-bb3f-4052-b9dd-3abe822c715f" TYPE="ext2" LABEL="boot" 
/dev/sda3: UUID="a8996562-c64e-4d71-bd1d-71d83e61ca85" TYPE="ext4" LABEL="tmp" 
/dev/sda5: LABEL="root" UUID="d5b01d40-b70d-463b-9917-88a4261ac344" TYPE="ext4" 
/dev/sda7: UUID="f31f844e-1dfc-4623-8e80-be2580c15338" TYPE="ext4" LABEL="var-log" 
/dev/sda8: LABEL="tmp_home" UUID="e2893e56-23b2-44e0-9df4-41211968ba24" TYPE="ext4" 
/dev/sda9: LABEL="opt" UUID="7d0272cc-d893-4a45-b50b-f3fcbed8ddba" TYPE="ext4" 
/dev/sdc: UUID="3143ee75-892c-b7ae-ddd8-d0504859a911" TYPE="linux_raid_member" 
/dev/sdb: UUID="3143ee75-892c-b7ae-ddd8-d0504859a911" TYPE="linux_raid_member" 
/dev/sde: UUID="cb290c18-c95f-ece7-ddd8-d0504859a911" TYPE="linux_raid_member" 
/dev/md0: UUID="b672b6f5-e85b-4430-a3d6-d5eac4027add" TYPE="ext4" 
/dev/sdd: UUID="cb290c18-c95f-ece7-ddd8-d0504859a911" TYPE="linux_raid_member"
The contents of fstab:

Code:

# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc	/proc	proc	defaults	0	0
# /dev/sda1
UUID=f67c0ae5-07aa-4310-afcb-5637166b8a9b	swap	swap	sw	0	0
# /dev/sda2
UUID=c0a3f3e9-bb3f-4052-b9dd-3abe822c715f	/boot	ext2	rw,errors=remount-ro	0	0
# /dev/sda3
UUID=a8996562-c64e-4d71-bd1d-71d83e61ca85	/tmp	ext4	rw,errors=remount-ro	0	0
# /dev/sda5
UUID=d5b01d40-b70d-463b-9917-88a4261ac344	/	ext4	rw,errors=remount-ro	0	1
# /dev/sda9
UUID=7d0272cc-d893-4a45-b50b-f3fcbed8ddba	/opt	ext4	rw,errors=remount-ro	0	1
# /dev/sda7
UUID=f31f844e-1dfc-4623-8e80-be2580c15338	/var/log	ext4	rw,errors=remount-ro	0	0

# /dev/md0 UUID=cb290c18-c95fece7-ddd8d050-4859a911
UUID=b672b6f5-e85b-4430-a3d6-d5eac4027add	/home		ext4	rw,errors=remount-ro		0	0
# /dev/md1 3143ee75-892cb7ae-ddd8d050-4859a911
UUID=791e7baa-c6ae-47ab-bc6f-21a6f7f42648	/mnt/extra-storage	ext4	rw,errors=remount-ro,noatime	0       0
#UUID=3143ee75-892cb7ae-ddd8d050-4859a911 	/mnt/extra-storage	ext4	rw,errors=remount-ro,noatime	0       0
And the contents of mdadm.conf:

Code:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 UUID=cb290c18:c95fece7:ddd8d050:4859a911
ARRAY /dev/md1 UUID=3143ee75:892cb7ae:ddd8d050:4859a911

# This file was auto-generated on Mon, 16 Jan 2012 11:46:16 -0500
# by mkconf 3.1.4-1+8efb9d1
I did discover some encouraging news. I installed TestDisk this morning and performed some preliminary tests. An analysis of md1, assuming an Intel partition table, produced some interesting results: it found several partitions (ext4, NTFS-HPFS, and FAT32). I'm assuming the multiplicity of them is down to the number of redundant superblocks and md1 being a RAID array; after all, md1 is not a 2TB array with 6TB of capacity. Logically, though, TestDisk might report such erroneous information if it saw md1's total capacity as sdb+sdc+md1.

I also did an analysis of sdb, treating it as non-partitioned media, and TestDisk did find an ext4 partition during a "Quick Search". I also noticed an option to list files that I hadn't noticed during the analysis of md1. Unfortunately, TestDisk stated "No file found, filesystem may be damaged" on sdb. I have more time now to play with TestDisk and I'll post more after I've completed a more thorough analysis of each device.
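(A commonly suggested, read-only way to get candidate backup-superblock locations for TestDisk or fsck to try is sketched below. The reported offsets only match the original file system if the same block size and mkfs options are assumed, so treat them as candidates rather than certainties.)

Code:

sudo mke2fs -n /dev/md1   # -n = dry run: prints where superblock backups WOULD be placed, writes nothing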
mintybits

Re: I did something I wish I hadn't. Formatted mdadm contai

Post by mintybits »

Your mdadm.conf is missing an ARRAY statement. Do this:
sudo mdadm -Ds >> /etc/mdadm/mdadm.conf

Try to mount md1 in a temporary place:
sudo mkdir /tmp/raidone
sudo mount /dev/md1 /tmp/raidone


If it mounts OK, run blkid to find the filesystem UUID of md1 and add an entry for it in fstab if you want it to mount to a particular place at boot time, such as /mnt/extra-storage. The fstab entry should look like this:
UUID=3143ee75:892cb7ae:ddd8d050:4859a911 /mnt/extra-storage ext4 defaults 0 1

And update your initial ram filesystem so the array is started during boot:
sudo update-initramfs -u

The "Disk Utility" application does handle RAID and LVM properly so try to use this rather than Gparted.
5hades0f6rey

Re: I did something I wish I hadn't. Formatted mdadm contai

Post by 5hades0f6rey »

mintybits wrote:Your mdadm.conf is missing an ARRAY statement. Do this:
sudo mdadm -Ds >> /etc/mdadm/mdadm.conf
Are you sure? The output of mdadm -Ds is:

Code:

$ sudo mdadm -Ds 
ARRAY /dev/md0 metadata=0.90 UUID=cb290c18:c95fece7:ddd8d050:4859a911
ARRAY /dev/md1 metadata=0.90 UUID=3143ee75:892cb7ae:ddd8d050:4859a911
And my mdadm.conf file has both ARRAY statements:

Code:

# definitions of existing MD arrays
ARRAY /dev/md0 UUID=cb290c18:c95fece7:ddd8d050:4859a911
ARRAY /dev/md1 UUID=3143ee75:892cb7ae:ddd8d050:4859a911
Okay, the "metadata" parameter is missing, but that hasn't been a problem before.
Try to mount md1 in a temporary place:
sudo mkdir /tmp/raidone
sudo mount /dev/md1 /tmp/raidone

Code:

$ sudo mount /dev/md1 /tmp/raidone
mount: you must specify the filesystem type
$ sudo mount -t ext4 /dev/md1 /tmp/raidone
mount: wrong fs type, bad option, bad superblock on /dev/md1,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

$ dmesg | tail
[54069.412333] FAT-fs (md1): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[54069.412605] FAT-fs (md1): bogus number of reserved sectors
[54069.412618] FAT-fs (md1): Can't find a valid FAT filesystem
[54092.844577] EXT4-fs (md1): VFS: Can't find ext4 filesystem
[54138.050870] EXT4-fs (md1): VFS: Can't find ext4 filesystem
[54138.076469] EXT2-fs (md1): error: can't find an ext2 filesystem on dev md1.
[54138.144273] FAT-fs (md1): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[54138.144465] FAT-fs (md1): bogus number of reserved sectors
[54138.144474] FAT-fs (md1): Can't find a valid FAT filesystem
[54145.478151] EXT4-fs (md1): VFS: Can't find ext4 filesystem
If I'm interpreting this correctly, it suggests that, as far as mount is concerned, md1 now carries what looks like a FAT32 signature rather than an ext4 file system. However, I've had some success with TestDisk. It was able to use one of the backup superblocks to read the contents of sdb, and I was able to recover a couple of files. I've ordered a couple more drives so I can clone sdb (or sdc) and perform recovery operations on the clone. If I'm lucky, I'll be able to restore the ext4 file system and mount one clone or the other with all my data intact. If not, perhaps I can recover at least some of what is there.
The "Disk Utility" application does handle RAID and LVM properly so try to use this rather than Gparted.
I'll certainly keep that in mind for the future. From the look of it, I wouldn't have been able to make the mistake I did using Disk Utility... Oh well.
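(For the record, the cloning step I have in mind would look roughly like this; /dev/sdf is just a placeholder for the new empty drive, so double-check device names with lsblk or blkid first, because ddrescue overwrites its target.)

Code:

sudo apt-get install gddrescue                      # Debian/Mint package providing GNU ddrescue
sudo ddrescue -f /dev/sdb /dev/sdf rescue-sdb.log   # clone the formatted member onto the fresh disk, logging progress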
mintybits

Re: I did something I wish I hadn't. Formatted mdadm contai

Post by mintybits »

My misreading of your mdadm.conf output... the ARRAY statements are already there. Glad you know what you are doing. :wink:
The only other thing I might try is to run md1 degraded with only one device. It may be that one disk is corrupted but the other isn't.
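A sketch of what running md1 degraded with a single device might look like (device names taken from your earlier mdadm -E output; safest to try this on clones rather than the originals):

Code:

sudo mdadm --stop /dev/md1                       # stop the assembled array first
sudo mdadm --assemble --run /dev/md1 /dev/sdb    # try to start it degraded from sdb alone
# if that doesn't give a mountable filesystem, stop it again and try the other member
sudo mdadm --stop /dev/md1
sudo mdadm --assemble --run /dev/md1 /dev/sdc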
5hades0f6rey

Re: I did something I wish I hadn't. Formatted mdadm contai

Post by 5hades0f6rey »

Using TestDisk and fsck, I was able to restore the superblock from a backup and mount the clone of md1. I'm still validating the data against the MD5 hashes I recorded, but so far most of my data appears to be intact.
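(For anyone who lands on this thread later, the repair boiled down to something like the sketch below. The superblock number, block size, and device name are placeholders rather than the exact values I used; the real backup-superblock location came from TestDisk's analysis of the clone.)

Code:

sudo fsck.ext4 -B 4096 -b 32768 /dev/sdX   # check the clone using an alternate (backup) superblock
sudo mkdir -p /mnt/recovered
sudo mount /dev/sdX /mnt/recovered         # mount the repaired clone and start copying data off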

Thanks for the advice.