Time to start a series of posts on how LLMs don't actually know anything! https://thetrek.co/they-trusted-chatgpt-to-plan-their-hike-and-ended-up-calling-for-rescue/ 1/
LLMs don't actually know anything. https://futurism.com/therapy-chatbot-addict-meth 2/
LLMs don't actually know anything. 4/
RFK Jr. Says AI Will Approve New Drugs at FDA 'Very, Very Quickly'

"We need to stop trusting the experts," Kennedy told Tucker Carlson.

Gizmodo
LLMs don't actually know anything. 6/
LLMs don't actually know anything. (And they'll make that your problem!) https://www.holovaty.com/writing/chatgpt-fake-feature/ 7/
Adding a feature because ChatGPT incorrectly thinks it exists | Holovaty.com

LLMs don't actually know anything. 8/
https://x.com/jasonlk/status/1946069562723897802 (via @nixCraft)
Billionaires Convince Themselves AI Chatbots Are Close to Making New Scientific Discoveries

"I'm doing the equivalent of vibe coding, except it's vibe physics."

Gizmodo

A sketchy doctor put two people in the hospital in critical condition. But he's convinced it's not his fault, because "an artificial intelligence app" told him it wasn't. He has yet to realize: LLMs don't actually know anything. 10/

https://www.propublica.org/article/peptide-injections-raadfest-rfk-jr

LLMs don't actually know anything. 11/
https://writing.exchange/@Harlander/115063980995867850
Google AI Falsely Says YouTuber Visited Israel, Forcing Him to Deal With Backlash

YouTuber Benn Jordan has never been to Israel, but Google's AI summary said he'd visited and made a video about it. Then the backlash started.

404 Media
The perils of letting AI plan your next trip

An imagined town in Peru, an Eiffel tower in Beijing: travellers are increasingly using tools like ChatGPT for itinerary ideas – and being sent to destinations that don't exist.

BBC

LLMs especially don't know anything about the law. 14/

https://www.404media.co/18-lawyers-caught-using-ai-explain-why-they-did-it/

18 Lawyers Caught Using AI Explain Why They Did It

Lawyers blame IT, family emergencies, their own poor judgment, their assistants, illness, and more.

404 Media
LLMs continue to not know anything. 15/
And maybe 15 examples in is as good a time as any to explain the point of this thread. LLMs are statistical agglomerations of words. Words are all they have. They do not have the experience or knowledge that, for us, is deeply integrated with our words. It's like a well-read virgin who has never been in a relationship, or even had friends, confidently setting up shop as a sex advice columnist. It's all words and only words, with no actual meaning to back them. But because LLMs simulate conversation, people, reasonably, keep mistaking statistically extruded text for something meaningful. 16/
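A toy sketch of what "statistically extruded text" means, stripped down to its bare bones: a bigram model that picks each next word purely from word-co-occurrence counts. (This is an illustration of the principle, not how a real transformer is implemented; the corpus and function names here are invented for the example, and real LLMs use vastly larger contexts and learned weights rather than raw counts.)

```python
import random
from collections import defaultdict

# Tiny invented corpus for illustration only.
corpus = ("the model predicts the next word from statistics "
          "the next word follows the previous word").split()

# Count which words follow which -- this is all the "knowledge" the model has.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def extrude(start, n, seed=0):
    """Generate n more words by repeatedly sampling a statistically
    likely successor. No meaning is consulted at any point."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        # Fall back to a random corpus word if we hit a dead end.
        out.append(rng.choice(follows.get(out[-1], corpus)))
    return " ".join(out)

print(extrude("the", 8))
```

The output is locally fluent (every word plausibly follows the one before it) while meaning nothing, which is the failure mode every post in this thread documents, just at a much smaller scale.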
Reddit's AI Suggests Users Try Heroin

AI-generated Reddit Answers are giving bad advice in medical subreddits and moderators can’t opt out.

404 Media