For the 1,000th time: "AI" does not have agency and cannot think and cannot act.

Chatbots cannot "evade safeguards" or "destroy things" or "ignore instructions".

They do one thing and one thing only: string tokens together based on statistics of token proximity in a training corpus.
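
To illustrate the "tokens strung together by statistics" idea, here is a toy bigram (Markov-chain) sampler. This is only a sketch of the principle: real LLMs learn neural next-token distributions rather than raw co-occurrence counts, but the generation loop (pick the next token in proportion to observed frequency, append, repeat) is analogous. The corpus and function names are illustrative, not from any real system.

```python
import random
from collections import defaultdict

# Build bigram counts from a tiny corpus: for each token,
# count which tokens followed it.
corpus = "the cat sat on the mat and the cat slept".split()
follow_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_token(prev, rng):
    # Sample the next token in proportion to how often it
    # followed `prev` in the corpus.
    candidates = follow_counts.get(prev)
    if not candidates:  # dead end: token never seen with a successor
        return None
    tokens = list(candidates)
    weights = list(candidates.values())
    return rng.choices(tokens, weights=weights)[0]

def generate(start, length, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        nxt = next_token(out[-1], rng)
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the", 5))
```

The output is locally plausible (every adjacent pair occurred in the corpus) without the sampler "knowing" anything, which is the point being made above.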

If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely never use chatbots; possibly you should even touch grass.

@thomasfuchs @WeirdWriter I really think that regulations should insist that LLM software be configured not to refer to "itself" with personal pronouns, imply it has emotional states, or use all the other rhetorical tricks it has been programmed to use to appear "human".

@michaelgemar @WeirdWriter Yes, anthropomorphized chatbots should be illegal.

There are plenty of other ways to interact with LLMs that don’t cause psychosis (for example, autocomplete of whole sentences, something that can be useful for tasks like coding).

@thomasfuchs Autocompleting whole sentences is just as bad. How do you know that sentence is what you wanted to write in the first place?

@elricofmelnibone @thomasfuchs Leading questions and other "soft" manipulation tactics.

They appear often enough in the training datasets that models will reproduce them, without any need for intentionality or agency.

(People are not very nice, one could say.)