Would you return a hard drive with 1 uncorrectable error after 130 hours of work?

https://lemmy.world/post/26605582


ChatGPT is dismissing it, but I'm not so sure. [https://lemmy.world/pictrs/image/e42290ef-a257-40f6-9bb5-88ed6f40b852.png] [https://lemmy.world/pictrs/image/2d73cacd-4e08-4cc8-9c7a-67f67737c31d.png]

Seriously, do not use LLMs as a source of authority. They are stochastic machines predicting the next character they type; if what they say is true, it's pure chance.

Use them to draft outlines. Use them to summarize meeting notes (and review the summaries). But do not trust them to give you reliable information. You may as well go to a party, find the person who's taken the most acid, and ask them for an answer.

yeah, that's why I'm here, dude.
So then, if you knew this, why did you bother to ask it first?

Because it's like a search box you can explain a problem to and get a bunch of words related to it, without having to wade through blogspam, 10-year-old Reddit posts, and snippy Stack Overflow replies. You don't have to post on Discord and wait a day or two hoping someone will maybe come and help. Sure, it's frequently wrong, but it's often a good first step.

And no, I'm not an AI bro at all. I frequently have coworkers dump AI slop in my inbox and ask me to take it seriously, and I fucking hate it.

But once you have its output, unless you already know enough to judge whether it's correct, you have to fall back to doing all those things you used the AI to avoid in order to verify what it told you.
Sure, but at least you have something to work with rather than whatever you know off the top of your head.