For the 1,000th time: "AI" does not have agency and cannot think and cannot act.

Chatbots cannot "evade safeguards" or "destroy things" or "ignore instructions".

They do one thing and one thing only: string tokens together based on statistics of token proximity in a data corpus.
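Roughly the idea, as a toy sketch (a bigram counter over a made-up corpus, purely illustrative; actual LLMs learn these statistics as transformer weights over enormous corpora, not as raw counts):

# Toy illustration of "next token from corpus statistics": a bigram counter.
# This is NOT how real LLMs are implemented; it only makes the idea concrete.
import random
from collections import defaultdict, Counter

# Hypothetical tiny corpus, just for illustration.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count how often each token follows each other token.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token(prev):
    # Sample the next token in proportion to how often it followed `prev`.
    tokens, weights = zip(*bigrams[prev].items())
    return random.choices(tokens, weights=weights)[0]

# String tokens together, one statistical guess at a time.
token = "the"
out = [token]
for _ in range(8):
    if not bigrams[token]:   # dead end: nothing ever followed this token
        break
    token = next_token(token)
    out.append(token)
print(" ".join(out))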

If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely never use chatbots; possibly you should even touch grass.

@thomasfuchs We don't know what makes one wake up in the morning and decide to climb a mountain or quit their job.
It may be some completely different process or there might be something to this pattern-matching statistical thing.
Do ants have agency? Do ant colonies?

We definitely must regulate the shit out of these big tech companies.
But saying that X does not do Y, when both X and Y are poorly understood and poorly defined, is not the way, IMO.

@tambourineman We know exactly how LLMs work, at every stage; humans literally created them.

They don’t have consciousness, they don’t have agency. They’re not even physical systems, so there is no self to realize.

Just because we don’t understand brains doesn’t mean we don’t understand an algorithm and its hardware implementation.

@thomasfuchs

Just because you build something doesn't mean you fully understand its implications. Emergent behavior exists, especially at this scale.
My point is that we don't need to get philosophical to criticize big tech.
They are destroying democracies, using our natural resources in a Ponzi scheme that benefits very few to the detriment of billions, etc.
We have plenty of reasons for regulation already.

@tambourineman We obviously know that “X does not do Y” when it’s a machine, and we know exactly how it was programmed, and we know exactly what it’s doing. Everything about it is understood.

@OwlOnABicycle

Not really. Emergent and chaotic behaviors are a thing.
There's also the impracticality of probing inside such massive models.

But even if you fully understood the interactions of all the weights in those huge models, you still wouldn't know how a brain works.
So you cannot tell that the model is not behaving like one.

But my point is that instead of trying to prove that the models have no agency, which is complicated, we could blame the people who finance them, because we know for sure that they have agency.