it just gives answers that sound right to people who are uneducated on the topic being asked about.

That’s not what it does, and it’s not the aim. It can give accurate answers. The challenge lies in training it properly, and in the operator understanding and communicating the right context for the questions they’re asking.

It’s not actually intelligent at all,

It’s not intelligence in the way humans think about analyzing knowledge. However, it generally uses very complicated pattern recognition, which is a definition we use for humans too.

It’s a god damn black box.

You will never know if it is actually properly trained, or if it is just hallucinating the correct answer in the moment.

Actually the dumbest fucking technology ever invented.

“Hey what if we made a computer pretend to write something that was already written but make it spicy so sometimes the computer just straight up makes shit up, and you don’t know which it is unless you already knew the answer to the question you asked it.”

It’s a god damn black box.

It? Define “It”

You’re talking about it as if only one exists lol

Are you shitting on chatgpt or LLMs as a whole? Because it doesn’t seem like you’re capable of discerning the difference…

Neural networks and similar huge trainable policies used in AI generally have the problem of being a black box. You have absolutely no idea what is happening in there. The only thing you know is that it has some accuracy. We are “shitting” on anything AI. It is a great tool but should not be trusted in safety-critical or similar fields.

It is a great tool but should not be trusted in safety-critical or similar fields.

I agree. But doesn’t that mean you disagree with the comment I replied to? The one that implied we can’t know how it was trained (we can if we choose the LLM or train it ourselves) and that the tech was useless?

No, I think you misinterpreted (or the original commenter was not specific enough about) what black box refers to here. I don’t mean that they are proprietary or trained in a private/secret way; I mean the model itself is so huge and impossible to understand that it is basically a black box. There are millions or billions of connections and parameters that don’t adhere to any well-defined structure — they just formed, as if by magic, through the learning process. You look at a neural network and you have absolutely no idea why it works.
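A minimal sketch of what that means in practice, assuming NumPy is available (the network size, learning task, and learning rate are all arbitrary illustration choices, not anything from this thread): train a tiny 2→4→1 network on XOR, then print its learned weights. The weights are just a grid of floats — nothing in them reads as a human-understandable rule for XOR, which is the black-box point in miniature.

```python
# Illustrative sketch: tiny MLP trained on XOR, weights inspected afterwards.
# All names and hyperparameters here are hypothetical illustration choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # XOR inputs
y = np.array([[0.], [1.], [1.], [0.]])                   # XOR targets

# Randomly initialized 2 -> 4 -> 1 network with biases
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(20000):
    h = sig(X @ W1 + b1)                   # hidden activations, shape (4, 4)
    out = sig(h @ W2 + b2)                 # predictions, shape (4, 1)
    losses.append(float(np.mean((out - y) ** 2)))
    # Plain backprop through MSE + sigmoid
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

print("loss went from", losses[0], "to", losses[-1])
print(W1)  # just a grid of floats; nothing here "explains" XOR
```

Even at this toy scale, the only thing you can really verify is the accuracy on inputs you try; scale that up to billions of parameters and the “you know it works, not why” problem is exactly the certification issue described below.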

This is one of the biggest challenges of bringing AI into the automotive industry, for example. A neural network by itself is not certifiable, because you cannot prove that it works. I heard about a new-ish field that is trying to engineer structured networks specifically for automotive and similar applications, but I haven’t heard anything since, and can’t find an article for it on Wikipedia.

I was replying specifically to this section of text as a whole:

It’s a god damn black box.

You will never know if it is actually properly trained, or if it is just hallucinating the correct answer in the moment.

Actually the dumbest fucking technology ever

Sounds to me like a Luddite who dismisses an entire field of research because they’re stuck on only the annoying use cases. How much of the technology we use today met similar criticism at its inception? “I don’t have all the answers right now, therefore no one will ever have them.”