Would you return a hard drive with 1 uncorrectable error after 130 hours of work?
Seriously, do not use LLMs as a source of authority. They are stochastic machines predicting the next character they type; if what they say is true, it’s pure chance.
Use them to draft outlines. Use them to summarize meeting notes (and review the summaries). But do not trust them to give you reliable information. You may as well go to a party, find the person who’s taken the most acid, and ask them for an answer.
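For what it’s worth, the “pure chance” part is easy to demonstrate. Here’s a toy sketch (hypothetical token probabilities, not output from any real model) of the weighted-random sampling that produces each next token:

    import random

    # Hypothetical next-token distribution; a real model produces one of
    # these over tens of thousands of tokens at every step.
    next_token_probs = {"true": 0.4, "false": 0.35, "unverifiable": 0.25}

    def sample_next_token(probs):
        tokens, weights = zip(*probs.items())
        # Weighted random draw: whatever comes up is what gets "said".
        return random.choices(tokens, weights=weights, k=1)[0]

    print(sample_next_token(next_token_probs))  # may print "true"...
    print(sample_next_token(next_token_probs))  # ...or "false" next time

Ask the same question twice and the draws can disagree; nothing in the mechanism checks either answer against reality.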
Because it’s like a search box you can explain a problem to and get a bunch of words related to it, without having to wade through blogspam, 10-year-old Reddit posts, and snippy Stack Overflow replies. You don’t have to post on Discord and wait a day or two hoping someone will maybe come and help. Sure, it’s frequently wrong, but it’s often a good first step.
And no I’m not an AI bro at all, I frequently have coworkers dump AI slop in my inbox and ask me to take it seriously and I fucking hate it.
It is not a search box. It generates words we know are confidently wrong quite often.
“Asking” gpt is like asking a magic 8 ball; it’s fun, but it has zero meaning.
Well that’s just blatantly false. They’re extremely useful for the initial stage of research, when you’re not really sure where to begin or what to even look for, when you don’t know what you should read or even what the correct terminology is surrounding your problem. They’re “language models”, which means they’re halfway decent at working with language.
They’re noisy, lying plagiarism machines that have opened a whole Pandora’s box of problems and are being shoved into many places where they don’t belong. That doesn’t make them useless in all circumstances.
Not false, and shame on you for suggesting it.
I not only disagree, but sincerely hope you aren’t encouraging anyone to look up information using an LLM.
LLMs are toys right now.
The part I’m calling out as untrue is the “magic 8 ball” comment, because it directly contradicts my own personal lived experience. Yes, it’s a lying, noisy plagiarism machine, but its accuracy for certain kinds of questions is better than a coin flip, and the wrong answers can be useful as well.
Some recent examples
Just because you don’t have the problems that LLMs solve doesn’t mean that nobody else does. And also, dude, don’t scold people on the internet. The fediverse has a reputation and it’s not entirely a good one.
My wife’s company is using one that listens to meetings and writes up summaries, including calling out action items and bullet points. It’s really quite good, and - importantly - people in the meeting mostly aren’t consciously aware of it, in the sense that they’re not feeding it magic keywords like “action item.”
I’m not down on LLMs, per se; it’s just that they’re expert tools for pretending to know things they don’t actually know. They’re expert liars. They have some value and can be useful.
I do have a strong emotional reaction when I hear people say things like, “this AI says I should do X.” My sister is really bad about this, and it’s in part because she is utterly ignorant about how computers work. It sounds like a person, so it must be just like a person.
NO! NO! IT’S ALL LIES! It’s worse than a person, because people can be confidently wrong, but LLMs have no idea. They’re not conscious. They aren’t even trying to trick you, because that would imply they know what the truth is. The fact is that they know nothing.
First sentence of each paragraph: correct.
Basically all the rest is bunk, besides the fact that you can’t count on always getting reliable information. Right answers (especially for something that is technical but non-verifiable), wrong reasons.
There are “stochastic language models”, I suppose (e.g., tapping the middle suggestion on your phone after typing the first word to create a message), but something like ChatGPT or Perplexity or DeepSeek is not that, beyond using tokenization / word2vec-like setups to produce human-readable text. These are a lot more like “don’t trust everything you read on Wikipedia” than a randomized acid-drop response.
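To make the distinction concrete, here’s roughly what a “stochastic language model” in that trivial, phone-keyboard sense looks like: a toy bigram chain (made-up training text, purely illustrative) that picks each next word by chance:

    import random
    from collections import defaultdict

    # A "stochastic language model" in the phone-keyboard sense:
    # count which word follows which, then sample. Toy corpus, made up.
    corpus = "the drive is fine the drive is failing the cable is loose".split()

    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def babble(word, length=6):
        out = [word]
        for _ in range(length):
            if word not in follows:
                break
            word = random.choice(follows[word])  # pure chance, no meaning
            out.append(word)
        return " ".join(out)

    print(babble("the"))  # e.g. "the cable is failing the drive is"

That kind of table lookup is the “randomized” thing; the models being argued about here are doing a lot more than that.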
Seagate’s error rate values (IDs 1, 7, and 195) are busted. Not in the sense that they’re wrong, but in that they’re misleading to people who don’t know exactly how to read them.
ALL of those are actually reporting zero errors. You can confirm it with this calculator: s.i.wtf
It is likely that GSmartControl is simply reading either the normalized or raw values, seeing a non-100/non-0 value respectively, and reporting that as an error.
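For reference, the usual decoding (and, as I understand it, what calculators like s.i.wtf apply) is that Seagate packs two counters into that 48-bit raw value: the actual error count in the upper 16 bits and the total operation count in the lower 32. A minimal sketch, assuming that layout:

    def decode_seagate_error_rate(raw):
        # Assumed layout (commonly cited for Seagate attributes 1, 7, 195):
        # upper 16 bits of the 48-bit raw value = error count,
        # lower 32 bits = total operation count.
        errors = raw >> 32
        operations = raw & 0xFFFFFFFF
        return errors, operations

    # A huge-looking raw value can still mean zero errors:
    errors, ops = decode_seagate_error_rate(117_000_123)
    print(f"{errors} errors over {ops} operations")  # -> 0 errors over 117000123 operations

So a raw value in the hundreds of millions can decode to zero actual errors, which is exactly the misreading being described.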
SMART data can be hard to read, but it doesn’t look like any of the normalized values are approaching their failure thresholds. It doesn’t show any bad sectors, but it does show read errors.
I would check the cable first, make sure it’s securely connected. You said it clicks sometimes, but that could be normal. Check the kernel log/dmesg for errors. Keep an eye on the SMART values to see if they’re trending towards the failure thresholds.
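If you’d rather log the trend than eyeball it, a small script can poll smartctl periodically. This is a sketch assuming the attribute table that smartctl -A prints on Linux (the device node and column positions are assumptions; adjust for your system):

    import subprocess
    import time

    DEVICE = "/dev/sda"        # hypothetical device node; adjust to your drive
    WATCH = {"1", "7", "195"}  # the error-rate attribute IDs discussed above

    def read_attributes(device):
        # `smartctl -A` prints the vendor attribute table; usually needs root.
        # No check=True: smartctl uses bitmask exit codes, not a simple 0/1.
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        rows = {}
        for line in out.splitlines():
            parts = line.split()
            # Attribute rows start with the numeric ID; columns are
            # ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW
            if parts and parts[0] in WATCH:
                rows[parts[0]] = (parts[1], int(parts[3]), int(parts[5]))
        return rows

    while True:
        for attr_id, (name, value, thresh) in read_attributes(DEVICE).items():
            print(f"{attr_id} {name}: normalized {value} vs threshold {thresh}")
        time.sleep(3600)  # hourly; what matters is whether the gap shrinks

What you’re looking for is the gap between VALUE and THRESH shrinking over days or weeks, not any single reading.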
Inside the nominal return period for a device, absolutely.
If it’s a warranty repair, I’ll wait for an actual trend, maybe run a burn-in on it and force its hand.