Deep Ganguli heads up Anthropic's "societal impacts" team, which claims its role is to voice uncomfortable truths about the rollout of the company's tech. His quote here comes at the end of a long piece by @theverge.

Three things:

1. The machine doesn't have endless empathy. It has no empathy at all.
2. The machine doesn't think. It has never thought, nor will it think in the future.
3. The machine doesn't give advice, because it is incapable of conceiving of the advice it supposedly can give.

@theverge It is shocking to me that someone who *seems* like they would know what they're talking about on this topic, who appears to be an expert in the field, would make such basic category errors. Even if it's rhetorical shorthand, perhaps filtered through a journalist who isn't trying to be scientifically precise, it is still a gross perversion of language.

LLMs do not have empathy, do not think, and cannot give you advice. It is physically impossible for them to do so.

@theverge In a past era, you might hear someone say something along the lines of “I asked Google about XYZ and it gave me the answer ABC.”

But we knew that was merely a casual colloquialism; what they were really saying was that they had used the Google search engine to look up a topic, and that based on the *human-authored* results they got back and read *as a human*, they were able to form a new *human* opinion.

You certainly don't *ever* want to hear a scientist be that lax in their terminology!

@theverge Again, for the folks in the back:

**The head of the "safe AI" company's societal impacts team** is claiming their LLM technology is empathetic, can give advice, and can think.

Is Deep Ganguli confused? Is he deluded? Is he in a tech bubble so bubbly he has lost touch with basic facts?

I'm serious. This is not a joke! Why do outlets like The Verge not ask basic follow-up questions like, oh I dunno: “Why do you claim a software algorithm is empathetic, can give advice, and can think?”

@jaredwhite @theverge "Asking basic followup questions" seems beyond the capabilities of an ALARMING number of journalists these days.
@Sadsquatch @jaredwhite @theverge I suspect they’re also afraid of being seen as insufficiently supportive of AI dogma. In other words: they’re not working in service of their readership.

@jaredwhite @theverge

It's religion.

Sounds like they are either experiencing or peddling a religious moment.

Also obvious product hype in the voice of concern:

"In fact, our pain medication is so powerful that soon there will no pain of any kind left in the world. But what will this moment mean for humanity?"


@jaredwhite @theverge For some purposes it hardly matters whether it is empathetic, as long as it acts as if it is. A related example is the study that used chatbots to reduce belief in conspiracy theories. The LLM is indifferent to the veracity of those theories, and yet the conversations were helpful:

https://mitsloan.mit.edu/ideas-made-to-matter/mit-study-ai-chatbot-can-reduce-belief-conspiracy-theories

@jaredwhite @theverge Sounds like AI psychosis to me.