It would seem that some of my ancient 10+ year old SSDs in the 60-128 GB range are experiencing sudden death after sitting around for years. They're basically bricked and can't be written to.

EDIT: This should be a PSA. SSDs are not for cold storage.

@Lydie This is strange though. They don't last forever by any means, but they absolutely should last more than ten years. I have a number of solid-state storage devices that have lasted well beyond that.

I have seen claims that they need to be completely rewritten every so many years, though. Not sure how true that really is; there was a lot of hokum mixed in with such claims (like that defragging them was the same thing as trimming them. It is not!). I can certainly say that when I pulled my old Cowon D2 out of the closet, its ancient SD card (full SD, not micro, lol) still worked fine, as did the player's own firmware, which was last flashed well over ten years ago. (Probably like 2010-ish?)

@nazokiyoubinbou Maybe early SLC/MLC is more resilient. These may be early TLC that died on me.
@Lydie No clue, but I really feel like there is something going on here. Like maybe the actual chip didn't fail, but instead some cheap cap or something even.
@nazokiyoubinbou @Lydie Yes, I remember reading something similar about amplifiers: in principle they can last for decades, but some caps in them may have a life expectancy of only about 10-11 years.
@Lydie @nazokiyoubinbou Yup, the older flash generations are much more resilient: they both use bigger cells, and each cell needs to store fewer states (2^n levels for n bits per cell). For SLC that's just two, for MLC four, for TLC eight, and for QLC sixteen, and newer flash basically uses just a few electrons to distinguish between states. Since those electrons leak, it's only a matter of time before unpowered flash memory loses its data.
@nazokiyoubinbou @Lydie The charge (i.e. your data) dissipates over time if the SSD is not powered on. You need to power it up every once in a while so that it can refresh the data in the cells.
@Lydie maybe I can find room in my office for one of these bad boys
@bucknam I bet it's more reliable than today's stuff 😆

@bucknam @Lydie We had a couple of these giant tape-based storage things out at the MFE (magnetic confinement fusion) project.

I have no idea how reliable they were in terms of the storage, but they could be quite dangerous to life and limb if you went inside when it was running.

@Lydie
Have they been unpowered? I never quite managed to find a definitive answer on if/how often modern flash needs to be refreshed, and if/when the controllers actually do it.
@srtcd424 Yup, unpowered. Just old and in a box. I'm sure not every drive would brick, but certain controller types may if the NAND is too dirty. So PSA stands, because you can't know about your particular units.
@Lydie
Yeah, I think one of the studies I did manage to find said the problem mainly manifested with very 'worn' cells. Turns out us old timers were right about magtape being the only true long term storage, huh? :)
@srtcd424 Tape and HDDs! My 20 MB MFM drives and most of my 5-1/4" floppies are still readable today!
@Lydie are they still readable? I seem to remember that old SSDs fail, but can still be backed up (but I may be imagining that)
@BackFromTheDud Not sure. I plugged one in to write an image, then it dropped off the USB bus.
@BackFromTheDud @Lydie Going read-only is a failure mode when the drive runs out of reserve space; I've seen it exactly once so far, usually drives just disappeared from the bus (or in case of some A-Data drives, they started reporting as a 4 kB drive).
@Lydie I had read not long ago that they need some level of activity at least once a year (not entirely different from refreshing DRAM, though on vastly different timescales). I wouldn't be able to tell you what exactly needs to be done to them to keep them in usable condition, though.

@Lydie

Quoting a SciAm article from over 20 years ago...

"Digital media last forever, or five years... which ever comes first."

@Lydie
You implied they could be read, which is a total win. ruffalo-hulk.gif
SSD Lifespan: How Long Does an SSD Last? (How-To Geek: "They'll probably outlast your regular hard disk drives.")
@Lydie My own SSD just failed catastrophically after about 10 years of constant use. One data point isn't statistically significant, but I can feel your pain.

@Lydie I had an early 2-bit MLC 120 GB drive (Supertalent) die a sudden death this year. But it was from 2008 and had been in (semi-)regular use since: roughly 5 years in my daily-use PC, then in my home media PC for the OS and VLC, though that PC was used less and less over time.
It just wouldn't show up in the BIOS any more.

My ~2014 SanDisk TLC 240GB still going strong thankfully.

@Lydie
SSDs are rated to retain data unpowered for six months for consumer-grade devices, and for three months for enterprise-grade. They need to stay powered up so that they can scrub (refresh) memory cells that lose their charge over time. This is because each data bit is actually stored by a surprisingly small number of electrons in modern NAND flash devices.
Spinning drives, despite their potential mechanical failure modes, are actually better for cold storage than SSDs.
@brouhaha I had no idea D: that kinda scares me!
@felipe @brouhaha my st412 have better memory than me! 😆
@felipe
In practice, if not stored at high temperature, they'll probably hold data a lot longer than those guaranteed minimums, but you shouldn't expect five years or more.
If you do use an SSD for offline data storage, you should probably power it up periodically, and do something that forces a read of the entire device. In Linux or macOS, something like "dd bs=1M if=/dev/nvme0n1 of=/dev/null" (substitute correct device name) would be suitable. I'm not sure of an equivalent for Windows.
@brouhaha @felipe DD has been ported to Windows 😀
@Lydie @felipe
and I know that Windows does assign internal pathnames to raw disk devices, but I don't know how one finds them. It's not just e.g. "C:", but rather something like "\\.\PhysicalDrive0". There are published C code examples to enumerate the physical drives from a program. There's probably a way to do it in Powershell, but I haven't found it.
@brouhaha @felipe @Lydie I do believe the number Get-Disk returns is the same number as \\.\PhysicalDriveX
@brouhaha @felipe @Lydie ah, wmic diskdrive list brief returns the actual \\.\ paths, and yes they're the same as get-disk
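For example, a quick sketch (untested, and assumes the built-in Storage module's Get-Disk) that prints the raw path for each disk:

Get-Disk | ForEach-Object { "\\.\PhysicalDrive$($_.Number) - $($_.FriendlyName)" }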
@sijmen @felipe @Lydie
Sadly, wmic is deprecated, without any clear guidance from Microsoft as to what takes its place. IMNSHO, deprecating it without providing a guide detailing how to replace every usage of it is completely f#@&ing insane.
But until they actually remove it, this is good to know.
@brouhaha @felipe @Lydie knowing Microsoft, it isn't getting removed anytime soon
@brouhaha @sijmen @felipe @Lydie you can do WMI things in powershell. Don't have my Windows laptop handy right now, though.
@rogerlipscombe @sijmen @felipe @Lydie
Is there a trivial translation, given a wmic command, to the Powershell equivalent?

@brouhaha @sijmen @felipe @Lydie dunno; it's been years since I last looked at this stuff.

Maybe start here? https://powershell.one/wmi/commands#querying-information


@brouhaha @Lydie @felipe This port of dd can list the devices by running dd --list (hint: \\?\Device\Harddisk0\Partition0 refers to the whole disk, Partition1 and up are the actual partitions).
dd for Windows

@brouhaha @felipe I do wonder how well drives that are powered on remember data that's not accessed; that also worries me! A full drive read seems the best bet.
@penguin42 @felipe
As I understand it, the drives from reputable brands run a background scrub process that, over time, reads all of the blocks, does error correction, and reallocates blocks if necessary. Since the manufacturers consider such details proprietary, I agree that a periodic read of the entire device is a good idea. I do that about once a month, scheduled for the wee hours of the morning.
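In case it's useful, a minimal sketch of the sort of root crontab entry I mean (assumes GNU dd and that /dev/nvme0n1 is the right device; it reads the whole drive at 3 a.m. on the first of each month):

0 3 1 * * dd bs=1M if=/dev/nvme0n1 of=/dev/null status=none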
@brouhaha @penguin42 @felipe I remember the early Samsung TLC drives (840 IIRC) had a problem where reading rarely accessed data would become really slow; this was solved in a firmware update, but you also had to run a manual refresh when updating. Shouldn't be a problem with newer drives.

@brouhaha @felipe There is an easy-to-use Windows tool called DiskFresh which is free and will do a full-disk read or read+write refresh of a disk.

https://www.puransoftware.com/DiskFresh.html

I've used it primarily on older hard disks to refresh the surfaces, then take a look at the SMART logs afterwards for signs the drive is going south. The software will also report bad sectors and such that it encounters during a refresh operation.


@brouhaha @felipe I find it a nice tool that I can use on a running Windows system in the background (though it can sometimes take days to run given the size of modern HDDs) instead of having to offline the system to run something like Spinrite.

@brouhaha

Oh wow. I've been suspecting something like that, I had no idea the interval was so short.

I've made sure that one of my machines has a spinny rust drive. I will probably add one more.

Thanks for that!

@Lydie

@tomjennings @Lydie
As I just wrote in another reply, that's the rated minimum over the full temperature range, so in reality it generally isn't quite that bad. It's sort of the storage industry's deep dark secret. It's not actually a secret, but the manufacturers definitely do not want to call attention to it.
NOR flash has historically been rated for decades of retention, so many of us incorrectly assumed that ultra-high-density NAND would be similar, but unfortunately it isn't.

@brouhaha @Lydie

Wow, this will have huge ramifications for historical data retention. Worlds will disappear, mostly small scale and at-home stuff.

@tomjennings @brouhaha @Lydie yes it's sad, awful and infuriating. Fortunately all my data and backup drives are spinning rust.
@tomjennings @brouhaha @Lydie A lot of random other flash just degrades too. USB sticks, SD cards, even on-device firmware slowly dribbles its brains out.
@etchedpixels @tomjennings @brouhaha @Lydie We've been discovering at the museum that it's often harder to get old computers from the 90s running than stuff from the 60s or 70s, because of all the batteries and NVRAM everywhere.

@davefischer

Also, densities were low (both components and recorded data) and lots of standardized parts that appear in catalogs were used. Ditto old automobiles; my early-60s Rambler has *complete parts catalogs*!

By the '80s, quantities were up enough that custom ASICs and mask ROMs and such became common. Now everything is a brick.

We did demand this...

@etchedpixels @brouhaha
@Lydie

@davefischer this reminds me of 1920s steam locomotives (big levers, pins, valves) vs. 1980s locomotives (largely undocumented PLCs) in preservation, too.
@davefischer @etchedpixels @tomjennings @Lydie
I think it will be possible for CHM to have their restored IBM 1401 and DEC PDP-1 computers, introduced in 1959 and 1960, respectively, still running 50 years from now, with some components replaced, but not fully a "Computer of Theseus."
For computers from the 1990s and newer, that's not at all likely.

@brouhaha

Right! My daily drivers were all over 50 years old until this year.

There will not be 50-year-old Priuses. Or 50-year-old 21st-century cars at all. Too much shit plastic and bespoke parts.

@davefischer @etchedpixels @Lydie

@tomjennings @davefischer @etchedpixels @Lydie
The oldest car I've owned was a 1979, which originally belonged to my grandparents. Unfortunately I totalled it. Currently I have a 1992, a 2004, and a 2021. That's one more than I can actually justify. The 2021 is my daily driver and the 2004 is a 4WD SUV for hauling stuff and for bad road conditions.

@davefischer @etchedpixels @tomjennings @brouhaha @Lydie

I expect the primary problem in getting middle-aged computers working is the lack of availability of the NVROM code.

@BobCollins @davefischer @etchedpixels @tomjennings @Lydie
In the short term. In the longer term, everything with ICs beyond SSI/MSI will become impossible to repair short of fabricating new emulation devices. I expect basic silicon transistors (but not germanium small-signal transistors) and passive components to still be available for a long time.
@BobCollins @davefischer @etchedpixels @tomjennings @Lydie
In logic circuits, often germanium transistors can be replaced with silicon. Not as authentic as one might prefer, but still much more authentic than replacing an LSI chip with a new FPGA.
#ComputerOfTheseus
@BobCollins @davefischer @etchedpixels @tomjennings @Lydie
More VLSI chips have internal firmware in floating-gate EPROM or flash than you might expect. That includes chips that seem on the surface to have simple functionality. Those chips will fail much sooner than general chip failure due to e.g. metal or ion migration.
@etchedpixels @tomjennings @brouhaha @Lydie petition to make "dribbles its brains out" an industry standard term for bitrot of unused devices