Before trusting an AI to tell you about stuff you don’t know, ask it to tell you about things you’re an expert in.

@kfury Ah yes. As a botanist, I asked ChatGPT for the native range of one of the worst weeds on the planet, where copious information is available online that they would have used as training data. In response, it listed most of the native range as part of the invaded range. Not even a complex question of understanding, just a fact look-up, still botched the answer.

Next step: not falling for Gell-Mann amnesia, or a test like this will be for nought.

@anschmidtlebuhn @kfury

That's because LLMs do not "look up facts". Rather, they construct plausible sentences using the statistical relationships between words. If that sentence is not factual, tough.

@markstahl @anschmidtlebuhn @kfury Precisely - why on earth would you expect an AI/LLM to give you anything other than a seemingly plausible response based solely on the statistical relationship between words - you’d have more likelihood of a reliable reply from cadavers resurrected with lightning bolts.

@frankcat @anschmidtlebuhn @kfury

This is where people confuse LLMs with intelligence.

The human brain makes a model of the world, which it is constantly testing against experience. For humans, language is merely the interface we use to communicate our internal model to other human beings. It's a lossy translation of a hidden model of reality that is itself non-verbal.

But in LLMs, words are all there is. There is no underlying model of reality behind them. It's just words strung together in ways that imitate human communication.

The phase "stochastic parrot" is extremely accurate.

@frankcat @anschmidtlebuhn @kfury

An LLM will literally "believe" anything you tell it. Take a look at what they are asking the Gab AI to believe.

https://infosec.exchange/@bontchev/112257849039442072

VessOnSecurity (@[email protected])

Attached: 1 image Somebody managed to coax the Gab AI chatbot to reveal its prompt:

Infosec Exchange