'Meredith,' some guys ask, 'why won't you shove AI into Signal?'

Because we love privacy, and we love you, and this shit is predictable and unacceptable. Use Signal ❤️

@Mer__edith 🙏🏼

@bengo @Mer__edith What is really bothering is this article, trying to impose intentions on the chatbot, when it just generates probabilistic words. "chatbots try to negotiate their way out", "deflect attention from their mistakes", "tried to change the subject", "admitting when it didn’t know", "it admitted", "was caught out lying"...

None of this is true, or makes sense!
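A toy sketch of what "generates probabilistic words" means in practice (the vocabulary and probabilities here are made up for illustration, not from any real model): the model scores candidate next tokens and one is drawn by weight. There is no goal anywhere in the loop, only weighted random choice over words that merely *read* like apology, deflection, or admission.

```python
import random

# Hypothetical next-token distribution -- invented for illustration only.
next_token_probs = {
    "sorry": 0.4,      # reads like an apology
    "actually": 0.3,   # reads like a deflection
    "unsure": 0.3,     # reads like an admission
}

def sample_next_token(probs, rng):
    """Draw one token, weighted by its probability. No intent, just sampling."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
print(sample_next_token(next_token_probs, rng))
```

Whichever token comes out, any "deflection" is in the reader's interpretation, not in the mechanism.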

@tdelmas @bengo @Mer__edith

The urge to anthropomorphise stuff is really hard to resist. It's even worse when people actually think it's warranted.

@veronica @tdelmas @bengo @Mer__edith It's like non-sentient objects want to be anthropomorphised

@tdelmas @bengo @Mer__edith is saying 'the chatbot tried to deflect attention' anthropomorphizing, I wonder?

It had no intention of doing it, because it doesn't have the intelligence to have intentions. But many people also deflect attention habitually, without intention.

In humans, it is often the habitual deflectors who are best at it. They don't hesitate or doubt and are very experienced.

I would think this is the same. Deflection is a built-in/trained-in mechanism.

@wdeborger @bengo @Mer__edith "Without intention" is misleading. In a human it may be unconscious. But for generative AI, it's not even unconscious. It's not deflection for gen-AI, it can't be, because there is no purpose, conscious or not.

@tdelmas @bengo @Mer__edith Interesting.

Do I understand correctly that to you, deflection only applies to humans? So it is not the effect or the damage that makes it deflection, unless executed by a human? The AI can merely do a thing that in its effects resembles deflection, but it cannot be called deflection? What should it be called?

I.e. your answer to the question "can submarines swim?" would be no?

@wdeborger @tdelmas @bengo @Mer__edith it is the intentionality behind the action, I would say.
Also, submarines do not swim; they sail.

@Laust @tdelmas thank you for your answer.

I would like to understand it better, as it is very strange to me (and I have no clear opinion yet).

When do you take intent into account? Can an ai do harm, can it kill, can it talk, can it reason? And what can have intent? Can a company, an animal, an object do any of those things?

@tdelmas haha yeah I had the same reaction to this article:

https://neuromatch.social/@elduvelle/114705509377109136

@bengo @davidho

El Duvelle (@[email protected])

This is obviously bad from #whatsapp, but also, the way the journalist describes what the chatbot does, as if it had intentions, is pretty bad too. >"It was the beginning of a bizarre exchange of the kind more and more people are having with AI systems, in which chatbots try to negotiate their way out of trouble, deflect attention from their mistakes and contradict themselves, all in an attempt to continue to appear useful." No, the chatbot isn't "trying to negotiate", and is not "attempting to appear useful". It's a program that follows programming rules to output something that looks like English language. It doesn't have desires or intentions, and it cannot lie because it doesn't know what truth is. ‘It’s terrifying’: WhatsApp AI helper mistakenly shares user’s number https://www.theguardian.com/technology/2025/jun/18/whatsapp-ai-helper-mistakenly-shares-users-number?CMP=Share_AndroidApp_Other #genAI #ChatBot #TheGuardian
