[SOLVED] mdadm grown RAID 5 partition misreporting capacity

DVScuba
Level 1
Posts: 36
Joined: Sun Jan 20, 2019 12:19 pm
Location: Kitchener, ON

[SOLVED] mdadm grown RAID 5 partition misreporting capacity

Post by DVScuba »

This one is weird to me. After searching, I don't see this problem listed anywhere.

I left Windows 10 and Storage Spaces to move exclusively to Linux. I had lots of media content on an external HD enclosure with 4x 3TB drives under Storage Spaces. I won't go into detail on all the file moves and drive removals from Storage Spaces, but essentially I built a RAID 5 array on the external HD enclosure using software RAID (mdadm) under Linux. The original /dev/md0 was built with 3x 3TB drives, giving me ~6TB of storage. Once it was built, I copied the contents of a 4th 3TB drive onto the array and, using mdadm again, grew the RAID 5 to include the 4th drive (about 35 hours to complete). That gave me all my content on a 9TB storage device with lots of room for more content. Or so I thought...
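For reference, the grow itself went roughly like this (a sketch only; /dev/sdc1 stands in for whichever partition the 4th drive ended up with):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Add the 4th drive's partition to the existing array as a spare
sudo mdadm --add /dev/md0 /dev/sdc1
# Reshape the RAID 5 from 3 to 4 devices
# (this is the step that took ~35 hours)
sudo mdadm --grow /dev/md0 --raid-devices=4
# Watch the reshape progress
cat /proc/mdstat
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~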

I see a 9TB device when I show details in mdadm, and Disks reports a 9TB ext4 partition as well. But the output of df -h still shows the original 6TB total, as does checking properties (right click > properties). See the screenshot I've attached; here's the output of mdadm:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~$ sudo mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Jan 18 18:25:58 2019
Raid Level : raid5
Array Size : 8790398976 (8383.18 GiB 9001.37 GB)
Used Dev Size : 2930132992 (2794.39 GiB 3000.46 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sat Feb 23 13:02:49 2019
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

Consistency Policy : bitmap

Name : NAS-VM42:0 (local to host NAS-VM42)
UUID : 4660d257:cd3f79e3:9f887f0b:1517fdc6
Events : 14152

Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 49 1 active sync /dev/sdd1
3 8 65 2 active sync /dev/sde1
4 8 33 3 active sync /dev/sdc1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
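For anyone who wants to reproduce the comparison, the size of the md device itself and the size of the ext4 filesystem sitting on it can be checked side by side (a sketch; /mnt/raid below is a stand-in for the actual mount point):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Size of the block device (what mdadm grew)
lsblk -b /dev/md0
# Size of the ext4 filesystem (what df reports)
sudo tune2fs -l /dev/md0 | grep -i 'block count\|block size'
df -h /mnt/raid   # assumed mount point -- substitute your own
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the block device is 9TB but Block count x Block size works out to ~6TB, the filesystem was never resized after the reshape.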
Screenshot at 2019-02-23 13-10-22.png
Any ideas or thoughts on how to get the partition size to report correctly?

Thanks,

DV
DVScuba
Level 1
Posts: 36
Joined: Sun Jan 20, 2019 12:19 pm
Location: Kitchener, ON

Re: mdadm grown RAID 5 partition misreporting capacity

Post by DVScuba »

Anything? Anyone?

:(
DVScuba
Level 1
Posts: 36
Joined: Sun Jan 20, 2019 12:19 pm
Location: Kitchener, ON

Re: mdadm grown RAID 5 partition misreporting capacity

Post by DVScuba »

For anyone interested: I unmounted /dev/md0, ran e2fsck -f /dev/md0, then resize2fs /dev/md0, then e2fsck -f /dev/md0 again. After a remount, /dev/md0 now correctly reports the full 9TB.
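In command form (a sketch of what I ran; /mnt/raid stands in for the actual mount point):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# The array was grown, but the ext4 filesystem on it was not,
# so it still reported the old 6TB. Resize it offline:
sudo umount /dev/md0
sudo e2fsck -f /dev/md0    # check the filesystem first
sudo resize2fs /dev/md0    # grow ext4 to fill the whole 9TB device
sudo e2fsck -f /dev/md0    # verify after resizing
sudo mount /dev/md0 /mnt/raid
df -h /mnt/raid            # should now show ~9TB total
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~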