LLMs don't actually know anything. 4/
RFK Jr. Says AI Will Approve New Drugs at FDA 'Very, Very Quickly'

"We need to stop trusting the experts," Kennedy told Tucker Carlson.

Gizmodo
LLMs don't actually know anything. 6/
LLMs don't actually know anything. (And they'll make that your problem!) https://www.holovaty.com/writing/chatgpt-fake-feature/
7/
Adding a feature because ChatGPT incorrectly thinks it exists | Holovaty.com

LLMs don't actually know anything. 8/
https://x.com/jasonlk/status/1946069562723897802 (via @nixCraft )
Billionaires Convince Themselves AI Chatbots Are Close to Making New Scientific Discoveries

"I'm doing the equivalent of vibe coding, except it's vibe physics."

Gizmodo

A sketchy doctor put two people in the hospital in critical condition. But he's convinced it's not his fault, because "an artificial intelligence app" told him it wasn't. He has yet to realize: LLMs don't actually know anything. 10/

https://www.propublica.org/article/peptide-injections-raadfest-rfk-jr

LLMs don't actually know anything. 11/
https://writing.exchange/@Harlander/115063980995867850
Google AI Falsely Says YouTuber Visited Israel, Forcing Him to Deal With Backlash

YouTuber Benn Jordan has never been to Israel, but Google's AI summary said he'd visited and made a video about it. Then the backlash started.

404 Media
The perils of letting AI plan your next trip

An imagined town in Peru, an Eiffel tower in Beijing: travellers are increasingly using tools like ChatGPT for itinerary ideas – and being sent to destinations that don't exist.

BBC

LLMs especially don't know anything about the law. 14/

https://www.404media.co/18-lawyers-caught-using-ai-explain-why-they-did-it/

18 Lawyers Caught Using AI Explain Why They Did It

Lawyers blame IT, family emergencies, their own poor judgment, their assistants, illness, and more.

404 Media
LLMs continue to not know anything. 15/
And maybe 15 examples in isn't the best time to explain my point in this thread. But LLMs are statistical agglomerations of words. Words are all they have. They do not have the experience or knowledge that, for us, is deeply integrated with our words.

It's like a well-read virgin who has never been in a relationship, or even had friends, confidently setting up shop as a sex advice columnist. It's all words and only words, with no actual meaning to back them.

But because LLMs simulate conversation, people, reasonably, keep mistaking statistically extruded text for something meaningful. 16/
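To make "statistical agglomerations of words" concrete: here is a deliberately tiny, hypothetical sketch (a bigram model on a toy corpus, nothing like a real LLM's scale) of what "statistically extruded text" means. The model learns only which word tends to follow which; it can emit fluent-looking strings with no notion of whether they are true.

```python
import random
from collections import defaultdict

# Toy corpus; the model will "know" nothing but word adjacency in it.
corpus = (
    "the court held that the statute was valid and the court denied "
    "the motion because the statute was clear"
).split()

# Record which words follow which (duplicates preserved, so sampling
# is weighted by frequency).
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def extrude(start, n, seed=0):
    """Statistically extrude up to n words starting from `start`."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        candidates = follows.get(out[-1])
        if not candidates:  # dead end: no observed successor
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(extrude("the", 8))
```

Every output is grammatical-ish and sounds vaguely legal, because the statistics came from legal-sounding text; none of it refers to any actual case. Scale that idea up by many orders of magnitude and you get much better fluency, not a different relationship to truth.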
Reddit's AI Suggests Users Try Heroin

AI-generated Reddit Answers are giving bad advice in medical subreddits and moderators can’t opt out.

404 Media
@williampietri Interesting assertion by a lawyer quoted in that 404media article: “Westlaw Precision incorporates AI-assisted research, which can generate fictitious legal authority if not independently verified.” It seems to me that a more accurate statement would be: Westlaw Precision incorporates AI-assisted research, which can generate fictitious legal authority, and MUST be independently verified.

@ELS Totally! It's just wild to me how few people understand that these are, in the Frankfurt sense of the term, machines made to produce bullshit: "speech intended to persuade without regard for truth."

But on the other hand, looking around at the world, maybe I shouldn't be too surprised. The world's richest man and the president of the US are both bullshitters of the highest order.

@williampietri Also interesting that it is the lawyers who are being blamed rather than the AI programs that produced the fictitious cases, citations, and quotes. Why are the AI programs considered innocent? The companies that build them know that these programs make false assertions. It seems to me that the fact that these companies push these programs as “useful” amounts to a lie, i.e. a known falsehood stated with self-serving malicious intent.
@ELS @williampietri Yep. Charge them all with conspiracy to defraud.

@jef @ELS I'd love to see it! Although I think this is a case of that wonderful Tom Waits line, "The large print giveth and the small print taketh away." [1] I expect that they've adequately disclaimed themselves in a way that legally puts the responsibility on the user. Just one more benefit of rugged American individualism!

Plus I think the judges tend to reserve their wrath for those who step into their courtrooms, and it'd be hard for a lawyer to admit they got rooked by marketing fluff. Being rugged individualists, and all.

[1] Which has a great marketing-related story of its own: https://en.wikipedia.org/wiki/Step_Right_Up_(song)

Step Right Up (song) - Wikipedia

@williampietri @jef @ELS At a very basic level, they could all be done for negligence since it was foreseeable, you might be able to argue they had a duty of care to ensure accuracy that was breached, you can show the causal link, and there definitely was damage. (I also say this about the entire Google ad ecosystem which helps fund disinformation.)
@williampietri @nixCraft Assuming this is real, the situation is actually probably slightly worse than it would appear. That is, the chatbot is claiming to know just what it did and what the impact is, but there isn't any particular reason to think that's accurate.
@williampietri @nixCraft @fanf42 This just means that they were trained on event post-mortems but don’t realize the idea is to avoid them. True/false and good/bad are meaningless to them.
@williampietri don’t worry I’m sure they’ll human review it so that they don’t accidentally approve any life saving vaccines
@williampietri so glad a guy with what looks like untreated gingivitis and a nicotine pouch is making public health decisions
@williampietri I laughed so hard I followed you on Bluesky

@williampietri
"No Time to Refill:

The crew was focused on evacuating passengers and didn't have time to refill the pool before the ship sank."

@williampietri
My Google AI referenced this Quora answer: