For the 1,000th time: "AI" does not have agency and cannot think and cannot act.

Chatbots cannot "evade safeguards" or "destroy things" or "ignore instructions".

They do one thing and one thing only: string tokens together based on the statistics of token proximity in a training corpus.

If you attribute any deeper meaning to this, that's a sign of psychosis; you should absolutely never use chatbots, and possibly you should go touch grass.
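A toy sketch of that "one thing," next-token generation from proximity statistics. This uses a hypothetical bigram counter over a made-up corpus; real chatbots use learned neural weights over far longer contexts, not raw counts, but the generation loop has the same shape:

```python
import random
from collections import Counter, defaultdict

# Toy corpus (made up for illustration).
corpus = "the cat sat on the mat and the cat ran on the mat".split()

# Count which token follows which -- the "statistics of proximity".
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

rng = random.Random(0)  # seeded for reproducibility

def next_token(prev):
    # Sample the next token in proportion to observed frequency.
    tokens, weights = zip(*counts[prev].items())
    return rng.choices(tokens, weights=weights)[0]

# Generate a continuation one token at a time, nothing more.
out = ["the"]
for _ in range(5):
    out.append(next_token(out[-1]))
print(" ".join(out))
```

No goals, no understanding, no agency anywhere in that loop: just sampling from frequency tables, which is the point being argued above.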

@thomasfuchs We don't know what makes one wake up in the morning and decide to climb a mountain or quit their job.
It may be some completely different process or there might be something to this pattern-matching statistical thing.
Do ants have agency? Do ant colonies?

We definitely must regulate the shit out of these big tech companies.
But saying that X does not do Y, when both X and Y are poorly understood and poorly defined, is not the way, IMO.

@tambourineman We obviously know that “X does not do Y” when it’s a machine, and we know exactly how it was programmed, and we know exactly what it’s doing. Everything about it is understood.

@OwlOnABicycle

Not really. Emergent and chaotic behaviors are a thing.
There's also the impracticality of probing inside such massive models.

But even if you fully understood the interactions of all the weights in those huge models, you still wouldn't know how a brain works.
You can't show that something is not behaving like a brain when you can't say how a brain behaves.

But my point is that instead of trying to prove that the models have no agency, which is complicated, we could blame the people who finance them, because we know for sure that they have it.