What caused my SSD to get ruined in Mint 21.1 install?
Forum rules
Before you post read how to get help. Topics in this forum are automatically closed 6 months after creation.
I'm not blaming Mint. In fact, it's low on my list of suspects.
I tried to install Mint 21.1 on a refurbed Dell Opti 5040 with an SSD I'd just bought. Went through a couple of installs, with and without UEFI, but there were problems with each. On the second attempt the root partition ran out of space during the first round of post-install updates. I knew I couldn't live with that, so without further investigation I tried installing from scratch a third time, but couldn't edit the partition table.
Couldn't edit it from the install routine, or from GParted on the live CD, or using fdisk. Couldn't create a new partition table. Put the SSD in a Win10 machine and tried Disk Management, but it couldn't delete the partitions either. In each case there was never an error message; the process appeared to execute and terminate normally, but on rescan nothing had changed.
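What I didn't think to check at the time was whether the kernel had flagged the drive read-only, which would explain writes that appear to succeed but vanish on rescan. Something like this would presumably show it (/dev/sdX standing in for the real device):
Code: Select all
# Does the kernel consider the device read-only? (1 = read-only, 0 = writable)
sudo blockdev --getro /dev/sdX
# Any I/O errors logged against the drive after a partitioning attempt?
sudo dmesg | grep -i sdX
# What the block layer currently sees:
lsblk -o NAME,SIZE,RO,TYPE,MODEL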
I tried to wipe the disk using EaseUS and KillDisk and a hex editor run from a WinPE live CD. I tried sprinkling my keyboard with the blood of a red rooster sacrificed under a gum tree at full moon. Nothing.
Somewhere while all this was going on I got an error:
DMAR: [Firmware Bug]: No firmware reserved region can cover this
RMR [0x000yadda-yadda], contact BIOS vendor for fixes.
I searched on that error and followed the breadcrumbs to a BIOS update at Dell dated 2022. I installed the update, but I still can't edit the partitions on that SSD. Attempts to install onto the existing partitioning end with:
The attempt to mount a file system with type vfat at ISCSI1 (0,0,0), partition #1 (SDA) at /boot/efi failed
I'd bought two of these Dell 5040s so I had another SSD on hand, but I was leery of the combination of Mint 21.1 and that hardware, so I put the second SSD in the first box and installed 20.1 on it with a 12 GB root partition. Everything went according to plan (except I can't install any Windows image in VBox, but that's a different thread).
All that to get to this. I accept that the first SSD is toast, beyond repair. What's bothering me is not knowing whether it died of natural causes (MTBF), in which case I figure the vendor needs to make good; whether I did something stupid to cause it (I have little experience with SSDs, less with UEFI), in which case I need to RTFM; or whether it was the buggy BIOS. I did find a lone post (at askubuntu.com) stating that this "bug" resulted in the BIOS "misinforming" the kernel. Could the BIOS have so munged the partition table as to make it useless and unrepairable?
Most of all I want to avoid being the cause of this happening again so I'm hoping someone here who's knowledgeable with SSDs and UEFI and Secure Boot can tell me whether my misuse of same might have been the cause.
As always, hoots and jeers are welcome, and if you're going to throw bottles, please empty them first (if there's one thing I can't stomach, it's wasting alcohol).
Last edited by LockBot on Sat Jul 29, 2023 10:00 pm, edited 1 time in total.
Reason: Topic automatically closed 6 months after creation. New replies are no longer allowed.
Re: What caused my SSD to get ruined in Mint 21.1 install?
I really would say it died from natural causes.
- Level 5
- Posts: 519
- Joined: Fri Dec 23, 2022 10:43 am
Re: What caused my SSD to get ruined in Mint 21.1 install?
The SSD has a controller in it, and that controller will only do what it is told, so I cannot conceive of Mint telling the controller to execute some self-destructive routine. I would send that SSD back as defective. Did you notice whether the FW was the same on both SSDs?
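If you still had both drives, something like this would show it (sdX being whichever device node each drive gets):
Code: Select all
# Quick way to compare firmware revisions; run once per drive:
sudo smartctl -i /dev/sdX | grep -i firmware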
I have a Mushkin SSD in my Win 7 box that is about 12 years old. Even brand new it was a beast to get a good FW load into it; much was written on the support site about the issues. A few times a year, on boot, that drive will not talk correctly to the BIOS and the boot hangs; repower and it will work. One day it will just die, and that will give me a reason to toss all of my Windows stuff into the garbage.
Re: What caused my SSD to get ruined in Mint 21.1 install?
Lou77, if FW is 'firmware,' no, I'm afraid I didn't notice. I bought two identical boxes, one to set up for a friend who wants to try Linux, which I delivered yesterday, so I no longer have access to the 'working' Mushkin SSD. And I wasn't suspecting Linux, but I do have my doubts about the BIOS/kernel thing. I was also wondering whether UEFI or the Secure Boot setting might do something to the drive by design so it wouldn't work under any other conditions.
I finished setting up the friend's box with the SSD pulled from the second one, after I'd updated its BIOS. I was still worried enough about the BIOS/kernel miscommunication being the cause that, before I risked the other Mushkin, I did a generic install on box #1 using a small sacrificial Vaseky SSD. It seemed to go as advertised and, best of all, didn't hose the Vaseky.
I found a thread in a Dell forum, a couple of years and 27 pages long, about the BIOS "bug," but I didn't see anything about it making anybody's SSD unpartitionable.
Pepi, I didn't mention that the first thing I did with the box was boot it until it reached the initial Windows set-up screen, just to make sure it wasn't DOA. The SSDs in both had been stripped down to the Windows recovery partition, so on first boot they would have gone through the initial installation, but my curiosity was satisfied as soon as that screen appeared, so I shut it off and started trying to install Mint. That doesn't mean it wasn't already on death's doorstep, but to that point it looked pretty normal.
Re: What caused my SSD to get ruined in Mint 21.1 install?
I don’t believe in “just died”, nor in corruption by the FW/OS.
Does it have SMART data (Disks - hamburger menu)?
And I can’t understand the absence of error messages when you access the drive.
Writing to the drive? A hex editor deleting the first MB? Or dd, the disk destroyer?
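For instance, a minimal dd sketch (destructive; /dev/sdX is a placeholder, so double-check the device name with lsblk first):
Code: Select all
# DESTRUCTIVE: zeroes the partition table and the first MB of the drive.
sudo dd if=/dev/zero of=/dev/sdX bs=1M count=1 status=progress
sync
# Ask the kernel to re-read the (now empty) partition table:
sudo partprobe /dev/sdX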
Re: What caused my SSD to get ruined in Mint 21.1 install?
Doesn't "likely to fail soon" mean it hasn't failed yet?
I question whether that's a valid result; I think the SMART utility doesn't understand what it's looking at.
FWIW, it won't mount in Win10 because Windows sucks, but the Win10 Disk Management applet can 'see' the drive and rates all partitions as "healthy."
Re: What caused my SSD to get ruined in Mint 21.1 install?
have a look with GSmartControl
details are important
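(GSmartControl is a front end for smartctl, so the CLI route should give the same data:)
Code: Select all
sudo smartctl -a /dev/sdX    # replace sdX with the actual device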
Peter
Mate desktop https://wiki.debian.org/MATE
Debian GNU/Linux operating system: https://www.debian.org/download
- Level 5
- Posts: 519
- Joined: Fri Dec 23, 2022 10:43 am
Re: What caused my SSD to get ruined in Mint 21.1 install?
Busker wrote: ⤴Wed Feb 01, 2023 2:54 pm
Doesn't "likely to fail soon" mean it hasn't failed yet?
I question whether that's a valid result; I think the SMART utility doesn't understand what it's looking at.
FWIW, it won't mount in Win10 because Windows sucks, but the Win10 Disk Management applet can 'see' the drive and rates all partitions as "healthy."
When a physician tells someone they have 6 months to live...it may actually be 4 months.
Your disk will not mount in Win10 not because Win10 "sucks" but because there is an issue with your disk that Windows does not like. Windows has mounted hundreds of millions of disks over the years. I would disable the SMART feature on your motherboard and attempt a format of the drive. Have you used diskpart on the command line of Windows?
I don't think this is a Windows or Linux issue but an issue with the SSD itself. The SMART utility is telling you exactly what it sees, and what it sees is in part driven by the controller and memory of your SSD. I doubt there is an issue with SMART itself, but maybe so. Have you contacted Mushkin support about this? They have pretty much closed down their support side for knowledge, and users now need to open a support ticket. Years ago they had a helpful forum, but it is long gone.
Re: What caused my SSD to get ruined in Mint 21.1 install?
There are so many outputs I'm guessing at what you might be after. Happy to provide whatever else you might want.
Code: Select all
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.0-137-generic] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Device Model: SSD 256GB
Serial Number: AB202200000310004979
Firmware Version: U0309A0
User Capacity: 256,060,514,304 bytes [256 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ACS-3 T13/2161-D revision 4
SATA Version is: SATA 3.2, 6.0 Gb/s (current: 1.5 Gb/s)
Local Time is: Thu Feb 2 09:05:03 2023 CST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is: Unavailable
APM feature is: Disabled
Rd look-ahead is: Enabled
Write cache is: Enabled
DSN feature is: Unavailable
ATA Security is: Disabled, NOT FROZEN [SEC1]
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: FAILED!
Drive failure expected in less than 24 hours. SAVE ALL DATA.
No failed Attributes found.
General SMART Values:
Offline data collection status: (0x02) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 120) seconds.
Offline data collection
capabilities: (0x11) SMART execute Offline immediate.
No Auto Offline data collection support.
Suspend Offline collection upon new
command.
No Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
No Selective Self-test supported.
SMART capabilities: (0x0002) Does not save SMART data before
entering power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 10) minutes.
SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
1 Raw_Read_Error_Rate -O--CK 100 100 050 - 5
5 Reallocated_Sector_Ct -O--CK 100 100 050 - 4
9 Power_On_Hours -O--CK 100 100 050 - 62
12 Power_Cycle_Count -O--CK 100 100 050 - 17
160 Unknown_Attribute -O--CK 100 100 050 - 1
161 Unknown_Attribute PO--CK 100 100 050 - 88
163 Unknown_Attribute -O--CK 100 100 050 - 19
164 Unknown_Attribute -O--CK 100 100 050 - 2
165 Unknown_Attribute -O--CK 100 100 050 - 2
166 Unknown_Attribute -O--CK 100 100 050 - 2
167 Unknown_Attribute -O--CK 100 100 050 - 2
168 Unknown_Attribute -O--CK 100 100 050 - 5050
169 Unknown_Attribute -O--CK 100 100 050 - 100
175 Program_Fail_Count_Chip -O--CK 100 100 050 - 0
176 Erase_Fail_Count_Chip -O--CK 100 100 050 - 0
177 Wear_Leveling_Count -O--CK 100 100 050 - 0
178 Used_Rsvd_Blk_Cnt_Chip -O--CK 100 100 050 - 4
181 Program_Fail_Cnt_Total -O--CK 100 100 050 - 0
182 Erase_Fail_Count_Total -O--CK 100 100 050 - 0
192 Power-Off_Retract_Count -O--CK 100 100 050 - 15
194 Temperature_Celsius -O---K 100 100 050 - 35
195 Hardware_ECC_Recovered -O--CK 100 100 050 - 0
196 Reallocated_Event_Count -O--CK 100 100 050 - 1
197 Current_Pending_Sector -O--CK 100 100 050 - 4
198 Offline_Uncorrectable -O--CK 100 100 050 - 1
199 UDMA_CRC_Error_Count -O--CK 100 100 050 - 0
232 Available_Reservd_Space -O--CK 100 100 050 - 88
241 Total_LBAs_Written ----CK 100 100 050 - 2801
242 Total_LBAs_Read ----CK 100 100 050 - 1263
245 Unknown_Attribute -O--CK 100 100 050 - 16
||||||_ K auto-keep
|||||__ C event count
||||___ R error rate
|||____ S speed/performance
||_____ O updated online
|______ P prefailure warning
General Purpose Log Directory Version 1
SMART Log Directory Version 1 [multi-sector log support]
Address Access R/W Size Description
0x00 GPL,SL R/O 1 Log Directory
0x01 SL R/O 1 Summary SMART error log
0x02 SL R/O 1 Comprehensive SMART error log
0x03 GPL R/O 1 Ext. Comprehensive SMART error log
0x04 GPL,SL R/O 8 Device Statistics log
0x06 SL R/O 1 SMART self-test log
0x07 GPL R/O 1 Extended self-test log
0x10 GPL R/O 1 NCQ Command Error log
0x11 GPL R/O 1 SATA Phy Event Counters log
0x24 GPL R/O 88 Current Device Internal Status Data log
0x25 GPL R/O 32 Saved Device Internal Status Data log
0x30 GPL,SL R/O 9 IDENTIFY DEVICE data log
0x80-0x9f GPL,SL R/W 16 Host vendor specific log
SMART Extended Comprehensive Error Log Version: 1 (1 sectors)
No Errors Logged
SMART Extended Self-test Log Version: 1 (1 sectors)
No self-tests have been logged. [To run self-tests, use: smartctl -t]
Selective Self-tests/Logging not supported
SCT Commands not supported
Device Statistics (GP Log 0x04)
Page Offset Size Value Flags Description
0x01 ===== = = === == General Statistics (rev 1) ==
0x01 0x008 4 17 --- Lifetime Power-On Resets
0x01 0x010 4 62 --- Power-on Hours
0x01 0x018 6 183581424 --- Logical Sectors Written
0x01 0x020 6 860420 --- Number of Write Commands
0x01 0x028 6 82781660 --- Logical Sectors Read
0x01 0x030 6 1359051 --- Number of Read Commands
0x07 ===== = = === == Solid State Device Statistics (rev 1) ==
0x07 0x008 1 0 --- Percentage Used Endurance Indicator
|||_ C monitored condition met
||__ D supports DSN
|___ N normalized value
SATA Phy Event Counters (GP Log 0x11)
ID Size Value Description
0x0001 4 0 Command failed due to ICRC error
0x0002 4 0 R_ERR response for data FIS
0x0005 4 0 R_ERR response for non-data FIS
0x000a 4 4 Device-to-host register FISes sent due to a COMRESET
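Per the note in the log, no self-test has ever been run on this drive. If it would help, I gather the sequence is roughly this (/dev/sdX standing in for the device):
Code: Select all
# Start the short self-test (the log above estimates ~2 minutes):
sudo smartctl -t short /dev/sdX
# After it finishes, read back the result:
sudo smartctl -l selftest /dev/sdX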
Re: What caused my SSD to get ruined in Mint 21.1 install?
"You disk will not mount in Win10 not because Win10 "sucks" but because there is an issue with your disk that Windows does not like. "
The "issue" is called EXT4.
The "issue" is called EXT4.
Re: What caused my SSD to get ruined in Mint 21.1 install?
The drive is not in the database, so the values are “vendor specific”, pretty useless.
Even the ID numbers and their corresponding meanings are uncertain.
What it does show is the drive controller is alive.
However:
ID# 192, if true, is strange.
IDs# 196, 197 and 198 may not be OK.
Bad sectors mapped out during manufacturing are hidden, so the raw values should start at zero.
For some information re IDs see https://en.wikipedia.org/wiki/Self-Moni ... Technology
What about simply reformatting the drive to MBR using Disks?
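From the command line that would be something like this (destructive; /dev/sdX is a placeholder):
Code: Select all
# DESTRUCTIVE: writes a fresh MBR (msdos) partition table over whatever is there.
sudo parted /dev/sdX --script mklabel msdos
# Verify the new label actually stuck:
sudo parted /dev/sdX --script print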
- Level 5
- Posts: 519
- Joined: Fri Dec 23, 2022 10:43 am
Re: What caused my SSD to get ruined in Mint 21.1 install?
Therein lies the problem sometimes with SMART: vendors aren't in the database, or they normalize values differently, or they use attributes for something else. Sort of like displaying a value in minutes instead of hours: a drive may look like it has hundreds of thousands of hours when really you need to divide the value by 60 to get the actual use time.
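A toy example of that conversion (the raw value here is made up):
Code: Select all
# A raw "Power_On_Hours" of 186000 that is really minutes:
echo $((186000 / 60))    # 3100 actual hours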
Another interesting link...
SmartmonTools
The OP can contact Mushkin, give them the SMART test results, and ask for an explanation.
Re: What caused my SSD to get ruined in Mint 21.1 install?
Is it still important to leave some free space on an SSD for wear leveling? I've only had one SSD go bad, a 240 GB one, about 4 years ago. I don't remember if I left any unformatted/unpartitioned free space; that drive was dead/read-only within a month, but the 500 GB and 1 TB drives didn't have any issues.
Re: What caused my SSD to get ruined in Mint 21.1 install?
JeremyB wrote: ⤴Thu Feb 02, 2023 7:10 pm
Is it still important to leave some free space on an SSD for wear leveling? I've only had one SSD go bad, a 240 GB one, about 4 years ago. I don't remember if I left any unformatted/unpartitioned free space; that drive was dead/read-only within a month, but the 500 GB and 1 TB drives didn't have any issues.
I've had a Samsung 850 EVO 250GB SATA SSD in my primary PC since May 2015 (I assume it's still one of the better basic SATA SSDs), and the machine is powered on basically all of the time. The drive is far from full: I keep just the OS and basic programs on it and put large files on regular hard drives, so I'm generally not using more than 30-40GB (in the past surely more, but probably never over about half full), and I've never had any issues with it. I have heard it's best not to fill an SSD too much to keep it running optimally. My best guesstimate is that as long as your SSD is not nearly full you're good; on 250GB, if you're not using more than about 150GB, maybe 200GB, you're probably safe enough. Honestly, it's best in general to use an SSD for booting and basic programs and keep larger files like games/videos on a regular hard drive, if you ask me.
anyways, I just ran a quick 'sudo smartctl -A /dev/sdx' on it and it shows...
-Power_On_Hours = 64449 (7.36 years, i.e. a bit over 7 years and 4 months of power-on time; in 3 months (May 2023) I will have had it 8 years)
-Total_LBAs_Written = 57565994003 (i.e. 26.806 TBW; the official rated write life is 75 TBW, and it will most likely do at least twice that before any problems turn up. Even if it dies next week, a 250GB replacement is cheap these days; when I got mine in May 2015 I think it was about $120, around the time SSDs started to become reasonably priced at 250GB and up)
-Wear_Leveling_Count (i.e. Drive Health) = 93%
But from what I can tell, the gist is to avoid no-name SSDs (I'd imagine the generic ones are more prone to failure in general); brand-name drives should last years, maybe decades. I don't expect my Samsung to die anytime soon.
p.s. on my Intel 545s 128GB and Kingston UV400 120GB SSDs (I got the UV400 used, and it most likely still has plenty of life left given its TBW vs the official 50TBW limit) I have to run 'sudo smartctl -x /dev/sdx' to see 'Logical Sectors Written'. On those two I can also see total data read, which the Samsung doesn't show; not that it matters, since TBW is typically what's most important.
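Spelling out the TBW arithmetic above (512-byte sectors):
Code: Select all
# Total_LBAs_Written x 512 bytes per sector, converted to TiB:
echo "57565994003 * 512 / 1024^4" | bc -l    # ≈ 26.806, i.e. the 26.806 TBW figure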
MainPC: i5-3550 (undervolted by -0.120v (CPU runs 12c cooler) /w stock i3-2120 hs/fan) | 1050 Ti 4GB | 16GB (2x 8GB) DDR3 1600Mhz RAM | Backups: AMD E-300 CPU (8GB RAM) / Athlon X2 3600+ CPU (@2.3GHz@1.35v) (4GB RAM) | All /w Mint 21.x-Xfce
Re: What caused my SSD to get ruined in Mint 21.1 install?
I tried both ATA enhanced secure erase and overwrite-with-zeros. In both cases it executes without complaint; the partition graphic even goes green all the way across, showing it's changed to a single partition, for just an instant. Then I guess it rescans, because the image goes back to the original partitioning scheme. A minute or so later, this comes up:
dmesg notes several orphan inodes deleted, but it says the same every time it's run.
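In case it matters, the command-line route for the ATA erase, as I understand it, is roughly this (the drive must show "not frozen" in hdparm -I; /dev/sdX is a stand-in):
Code: Select all
# ATA secure erase via hdparm (DESTRUCTIVE).
sudo hdparm --user-master u --security-set-pass p /dev/sdX
sudo hdparm --user-master u --security-erase-enhanced p /dev/sdX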
Re: What caused my SSD to get ruined in Mint 21.1 install?
Busker wrote: ⤴Sat Feb 04, 2023 2:42 pm
I tried both ATA enhanced secure erase and overwrite-with-zeros. In both cases it executes without complaint; the partition graphic even goes green all the way across, showing it's changed to a single partition, for just an instant. Then I guess it rescans, because the image goes back to the original partitioning scheme. A minute or so later, this comes up:
dmesg notes several orphan inodes deleted, but it says the same every time it's run.
I suggest you try again with GParted.
In my experience gnome-disk-utility (aka "Disks") is a bit buggy.
Peter
Mate desktop https://wiki.debian.org/MATE
Debian GNU/Linux operating system: https://www.debian.org/download
Re: What caused my SSD to get ruined in Mint 21.1 install?
So you got an error; that's both good and bad.
(unsure whether GParted or Disks would be more reliable, don’t have a preference)
Tried the disk destroyer?
Likely sudo hdparm -I /dev/sdyourdriveletter will not show any anomalies?
- I think the drive is simply toast.
The #192 may hint at a broken solder joint / copper trace / capacitor.
It could also result from a bad SATA power connector, so take care with this machine and check the SMART data frequently after installing a new drive.
Re: What caused my SSD to get ruined in Mint 21.1 install?
sanmig wrote: ⤴Sat Feb 04, 2023 6:05 pm
...Tried the disk destroyer?
Likely sudo hdparm -I /dev/sdyourdriveletter will not show any anomalies?...
The #192 may hint at a broken solder joint / copper trace / capacitor.
It could also result from a bad SATA power connector, so take care with this machine and check the SMART data frequently after installing a new drive.
I tried using dd to overwrite the MBR and the individual partitions. They all ran like everything was hunky-dory, but when I re-scanned with fdisk, nothing had changed. Here's the output from hdparm:
Code: Select all
$ sudo hdparm -I /dev/sdj
[sudo] password for tux:
/dev/sdj:
ATA device, with non-removable media
Model Number: SSD 256GB
Serial Number: AB202200000310004979
Firmware Revision: U0309A0
Media Serial Num:
Media Manufacturer:
Transport: Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0
Standards:
Used: unknown (minor revision code 0x011b)
Supported: 10 9 8 7 6 5
Likely used: 10
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 500118192
Logical Sector size: 512 bytes
Physical Sector size: 512 bytes
Logical Sector-0 offset: 0 bytes
device size with M = 1024*1024: 244198 MBytes
device size with M = 1000*1000: 256060 MBytes (256 GB)
cache/buffer size = unknown
Form Factor: 2.5 inch
Nominal Media Rotation Rate: Solid State Device
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 1 Current = 1
Advanced power management level: disabled
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* NOP cmd
* DOWNLOAD_MICROCODE
Advanced Power Management feature set
* 48-bit Address feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* General Purpose Logging feature set
* WRITE_{DMA|MULTIPLE}_FUA_EXT
* WRITE_UNCORRECTABLE_EXT command
* {READ,WRITE}_DMA_EXT_GPL commands
* Segmented DOWNLOAD_MICROCODE
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Gen3 signaling speed (6.0Gb/s)
* Native Command Queueing (NCQ)
* Phy event counters
* READ_LOG_DMA_EXT equivalent to READ_LOG_EXT
DMA Setup Auto-Activate optimization
* Software settings preservation
* DOWNLOAD MICROCODE DMA command
* WRITE BUFFER DMA command
* READ BUFFER DMA command
* Data Set Management TRIM supported (limit 8 blocks)
Security:
Master password revision code = 65534
supported
not enabled
not locked
not frozen
not expired: security count
supported: enhanced erase
6min for SECURITY ERASE UNIT. 6min for ENHANCED SECURITY ERASE UNIT.
Checksum: correct
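One more thing I can try, to rule out the page cache fooling me: write a sector with direct I/O and read it straight back (using /dev/sdj per the output above):
Code: Select all
# Zero the first sector, bypassing the page cache:
sudo dd if=/dev/zero of=/dev/sdj bs=512 count=1 oflag=direct
# Read it back, also uncached; non-zero bytes mean the write was silently dropped:
sudo dd if=/dev/sdj bs=512 count=1 iflag=direct | hexdump -C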
Re: What caused my SSD to get ruined in Mint 21.1 install?
I don't believe that software killed the drive.
Replace it and move on.
Peter
Mate desktop https://wiki.debian.org/MATE
Debian GNU/Linux operating system: https://www.debian.org/download
- Level 5
- Posts: 519
- Joined: Fri Dec 23, 2022 10:43 am
Re: What caused my SSD to get ruined in Mint 21.1 install?
Agreed, Plus 1, +1, Yep.
I do not know what Mushkin does with the protected sectors of the SSD, but using a hex editor in an area where one does not know what the fields are intended for is not a good move. Writing zeros to memory won't fix the issue. Possibly a good format would help, but getting a good format seems to be the issue, or the controller's view of that format is the issue. Nonetheless, this drive failure is not a Linux-generated problem. Send it back or toss it out.