"Cognitive task" is an ontological sleight-of-hand used to obscure the distinction between the way a human would perform the task, and the nature of the task itself. 🧵👇
This mask is then used to conflate human cognition with what neural networks do, when in fact neural networks only work similarly to a small subset of animal cognition.
For example, doing arithmetic is a "cognitive task" for humans, but almost nobody would argue that a calculator doing the same arithmetic is using cognition to do so.

The thing is, animal cognition is inextricably an embodied process. Affect is not a side-effect of cognition but its root.

The fact that we have computerised the production of outputs plausibly similar to those of animal cognition only means that we anthropomorphise the process that produces those plausible outputs.

We wrongly assign intention and goals to AI models like LLMs because we incorrectly assume the nature of their insides based on their outsides.

It is meaningless to talk of AI goals or intent, or at least meaningless to think of them as in any way isomorphic to animal goals or intent, as the mechanism for the production of goals and intent fundamentally does not exist in AI models.

This false theory of cognition is extremely dangerous, because it leads us to waste time on fallacies like AGI/superintelligence wiping out humanity through some misplaced intent + agency.

In reality the risk is both more proximate and more mundane than that, and is the same risk that has been playing out for at least hundreds of years.

We have repeatedly demonstrated our willingness to deploy technologies whose socioeconomic impact we do not understand and cannot forecast, in order to obtain a profit.

The AI apocalypse looks much more like an accelerated runaway-IT problem: replacing components of complex socioeconomic infrastructure (that might have previously been driven by people or technology) with AI will cause massive damage.

This damage will come from the unpredictable failure modes of systems that depend on certain kinds of AI — failure modes which, in a context of complexity, will cause harmful ripple effects.

The damage will be exacerbated by (1) the continued substitution of software for people in decision-making where there is an incentive to delegate accountability to a system that can't be questioned, and (2) the proliferation of software problems that are impossible to diagnose and impossible to fix.

The good news about this understanding of the AI apocalypse is that we are not fighting against an emergent superior machine intelligence. We are only fighting the dumbest, greediest instincts our human society produces. And that is something we know how to do.

Happy weekend!

I stuck a one-page version of this on the web over here, in case anyone wants it https://exitmusic.world/ai-researchers-wrong-theory-of-cognition-is-making-us-worry-about-the-wrong

@james today I just wanted a plain proper looking calendar app for my phone (wtf google) and they were ALL "ai created"

... pass.....

@james Nothing good will come from AI.

@james Put another way, as my friend Jason would have it: we believe in LLMs for the same reason we see Jesus's face in a piece of toast.

And just as a quick double-check, I trust that you do not mean to distinguish "animal cognition" from "human cognition". I kinda assumed that, but then I was suddenly worried.

@james Yes, I have many similar thoughts these days, so this thread really resonated. We fundamentally don't know how the myriad elements of our living days coordinate themselves, but we have a particular conceptual cognitive function that claims to explain events, and takes credit for a causal relationship to them. That function is wrong to claim that it understands what's going on, and has less influence on outcomes than its model insists. Current AI confidently mimics that function, which our corresponding function interprets equally as producing outcomes: however, the AI is only producing outputs in that realm of conceptual jumble, that overlay of perceived meaning. Unfortunately, that's also the function that has been thoroughly shaped, boosted and hacked by society and corporations, and thus underlies those things, so tinkering with it at scale is hazardous.

@sanityinc yes!

I mean we barely even understand what “understand” means (to aim a friendly jab at John Vervaeke)

@james And also, the common thread across spiritual traditions is that liberation is isomorphic to the total relinquishment of knowing, where it turns out life goes on just fine without making conceptual thought primary. Adding a big thought tumble dryer is quite the distraction.