“I tend to think that most fears about A.I. are best understood as fears about capitalism”*…
Further to Wednesday’s and yesterday’s posts (on to other topics again after this, I promise), a powerful piece from Patrick Tanguay (in his always-illuminating Sentiers newsletter).
He begins with a consideration of Peter Wolfendale’s “Geist in the machine”:
… Wolfendale argues that the current AI debate recapitulates an 18th-century conflict between mechanism and romanticism. On one side, naive rationalists (Yudkowsky, Bostrom, much of Silicon Valley) assume intelligence is ultimately reducible to calculation; throw enough computing power at the problem and the gap between human and machine closes. On the other, popular romantics (Bender, Noë, many artists) insist that something about human cognition, whether it’s embodiment, meaning, or consciousness, can never be mechanised. Wolfendale finds both positions insufficient. The rationalists reduce difficult choices to optimisation problems, while the romantics bundle distinct capacities into a single vague essence.
His alternative draws on Kant and Hegel. He separates what we loosely call the “soul” into three capacities: wisdom (the metacognitive ability to reformulate problems, not just solve them), creativity (the ability to invent new rules rather than search through existing ones), and autonomy (the capacity to question and revise our own motivations). Current AI systems show glimmers of the first two but lack the third entirely. Wolfendale treats autonomy as the defining feature of personhood: not a hidden essence steering action, but the ongoing process of asking who we want to be and revising our commitments accordingly. Following Hegel he calls this Geist, spirit as self-reflective freedom.
Wolfendale doesn’t ask whether machines can have souls; he argues we should build them, and that the greater risk lies in not doing so. Machines that handle all our meaningful choices without possessing genuine autonomy would sever us from the communities of mutual recognition through which we pursue truth, beauty, and justice. A perfectly optimised servant that satisfies our preferences while leaving us unchanged is, in his phrase, “a slave so abject it masters us.” Most philosophical treatments of AI consciousness end with a verdict on possibility. Wolfendale ends with an ethical imperative: freedom is best preserved by extending it.
I can’t say I agree, unless “we”… end up with a completely different relationship to our technology and capital. Still, the argument up to that point is a worthy reflection, and it pairs well with the piece below and with another from issue No. 387: Anil Seth’s The mythology of conscious AI, in which he argues that consciousness probably requires biological life and that silicon-based AI is unlikely to achieve it. Seth maps the biological terrain that makes consciousness hard to replicate; Wolfendale maps the philosophical terrain that makes personhood worth pursuing anyway, on entirely different grounds. Seth ends where the interesting problem begins for Wolfendale: even if machines can’t be conscious, the question of whether they can be autonomous persons, capable of self-reflective revision, remains open:
Though GenAI systems can’t usually compete with human creatives on their own, they are increasingly being used as imaginative prosthetics. This symbiosis reveals that what distinguishes human creativity is not the precise range of heuristics embedded in our perceptual systems, but our metacognitive capacity to modulate and combine them in pursuit of novelty. What makes our imaginative processes conscious is our ability to self-consciously intervene in them, deliberately making unusual choices or drawing analogies between disparate tasks. And yet metacognition is nothing on its own. If reason demands revision, new rules must come from somewhere. […]
[Hubert Dreyfus] argues that the comparative robustness of human intelligence lies in our ability to navigate the relationships between factors and determine what matters in any practical situation. He claims that this wouldn’t be possible were it not for our bodies, which shape the range of actions we can perform, and our needs, which unify our various goals and projects into a structured framework. Dreyfus argues that, without bodies and needs, machines will never match us. […]
This is the basic link between self-determination and self-justification. For Hegel, to be free isn’t simply to be oneself – it isn’t enough to play by one’s own rules. We must also be responsive to error, ensuring not just that inconsistencies in our principles and practices are resolved, but that we build frameworks to hold one another mutually accountable. […]
Delegating all our choices to mere automatons risks alienating us from our sources of meaning. If we consume only media optimised for our personal preferences, generated by AIs with no preferences of their own, then we will cease to belong to aesthetic communities in which tastes are assessed, challenged and deepened. We will no longer see ourselves and one another as even passively involved in the pursuit of beauty. Without mutual recognition in science and civic life, we might as easily be estranged from truth and right – told how to think and act by anonymous machines rather than experts we hold to account…
Tanguay then turns to “The Prospect of Butlerian Jihad” by Liam Mullally, in which Mullally uses…
… Herbert’s Dune and the Butlerian Jihad [here] as a lens for what he sees as a growing anti-tech “structure of feeling” (Raymond Williams’s term): the diffuse public unease about AI, enshittification, surveillance, and tech oligarchs that has not yet solidified into coherent politics. The closest thing to a political expression so far is neo-Luddism, which Mullally credits for drawing attention to technological exploitation but finds insufficient. His concern is that the impulse to reject technology wholesale smuggles in essentialist assumptions about human nature, a romantic defence of “pure” humanity against the corruption of machines. He traces this logic back to Samuel Butler’s 1863 essay Darwin Among the Machines, which framed the human-technology relationship as a zero-sum contest for supremacy, and notes that Butler’s framing was “explicitly supremacist,” written from within colonial New Zealand and structured by the same logic of domination it claimed to resist.
The alternative Mullally proposes draws on Bernard Stiegler’s concept of “originary technicity”: the idea that human subjectivity has always been constituted in part by its tools, that there is no pre-technological human to defend. [see here] If that’s right, then opposing technology as such is an “ontological confusion,” a fight against something that is already part of what we are. The real problem is not machines but the economic logic that shapes their development and deployment. Mullally is clear-eyed about this: capital does not have total command over its technologies, and understanding how they work is a precondition for contesting them. He closes by arguing that the anti-tech structure of feeling is “there for the taking,” but only if it can be redirected. The fights ahead are between capital and whatever coalition can form against it, not between humanity and machines. Technology is a terrain in that conflict; abandoning it means losing before the contest begins.
Wolfendale’s Geist in the Machine above arrived at a parallel conclusion from a different direction: where Mullally argues that rejecting technology means defending a false vision of the human, Wolfendale argues that refusing to extend autonomy to machines risks severing us from the self-reflective freedom that makes us persons in the first place. Both reject the romantic position, but for different reasons:
To the extent that neo-Luddites bring critical attention to technology, they are doing useful work. But this anti-tech sentiment frequently cohabitates with something uneasy: the treatment of technology as some abstract and impenetrable evil, and the retreat, against this, into essentialist views of the human. […]
If “humanity” is not a thing-in-itself, but historically, socially and technically mutable, then the sphere of possibility of the human and of our world becomes much broader. Our relationship to the non-human — to technology or to nature — does not need to be one of control, domination and exploitation. […]
As calls for a fight back against technology grow, the left needs to carefully consider what it is advocating for. Are we fighting the exploitation of workers, the hollowing out of culture and the destruction of the earth via technology, or are we rallying in defence of false visions of pure, a-technical humanity? […]
The anti-tech structure of feeling is there for the taking. But if it is to lead anywhere, it must be taken carefully: a fightback against technological exploitation will be found not in the complete rejection of technology, but in the short-circuiting of one kind of technology and the development of another.
As Max Read (scroll down) observes:
… if we understand A.I. as a product of the systems that precede it, I think it’s fair to say ubiquitous A.I.-generated text is “inevitable” in the same way that high-volume blogs were “inevitable” or Facebook fake news pages were “inevitable”: Not because of some “natural” superiority or excellence, but because they follow so directly from the logic of the system out of which they emerge. In this sense A.I. is “inevitable” precisely because it’s not revolutionary…
The question isn’t whether we want a relationship with technology; it’s what kind of relationship we want. We’ve always (at least since we’ve been a conscious species) co-existed with, and been shaped by, tools; we’ve always suffered the “friction” of technological transition as we innovate new ones. As yesterday’s post suggested (in its defense of the open web in the face of a voracious attack from powerful LLM companies), “what matters is power”… power to shape the relationship(s) we have with the technologies we use. That power currently rests in the hands of relatively few companies, all concerned above all else with harvesting as much money as they can from “uses” they design to amplify engagement and ease monetization. It doesn’t, of course, have to be this way.
We’ve lived under modern capitalism for only a few hundred years, and under the hyper-global, hyper-extractive regime we currently inhabit for only a century-and-a-half or so, during which time, in fits and starts, it has grown ever more rapacious. George Monbiot observed that “like coal, capitalism has brought many benefits. But, like coal, it now causes more harm than good.” And Ursula Le Guin, that “we live in capitalism. Its power seems inescapable. So did the divine right of kings.” In many countries, “divine right” monarchy has been replaced by “constitutional monarchy.” Perhaps it’s time for more of the world to consider “constitutional capitalism.” We could start by learning from the successes and failures of Scandinavia and Europe.
Social media, AI, quantum computing – on being clear as to the real issue: “Geist in the machine & The prospect of Butlerian Jihad,” from @inevernu.bsky.social.
(All this said, David Chalmers argues that there’s one possibility that might change everything: “Could a Large Language Model be Conscious?” On the other hand, the ARC Prize Foundation suggests, we have some time: a test they devised for benchmarking agentic intelligence recently found that “humans can solve 100% of the environments, in contrast to frontier AI systems which, as of March 2026, score below 1%”… :)
* Ted Chiang (gift article; see also here and here and here)
###
As we keep our eyes on the prize, we might spare a thought for a man who wrestled with a version of these same issues in the last century, Pierre Teilhard de Chardin; he died on this date in 1955. A Jesuit theologian, philosopher, geologist, and paleontologist, he conceived the idea of the Omega Point (a maximum level of complexity and consciousness towards which he believed the universe was evolving) and developed Vladimir Vernadsky‘s concept of the noosphere. Teilhard took part in the discovery of Peking Man, and wrote on the reconciliation of faith and evolutionary theory. His thinking on both these fronts was censored during his lifetime by the Catholic Church (in particular for its implications for “original sin”); but in 2009, the Church lifted its ban.
#AI #artificialIntelligence #BernardStiegler #ButlerianJihad #capitalism #computing #consolidation #culture #Hegel #history #Kant #LiamMullally #noosphere #OmegaPoint #PatrickTanguay #PeterWolfendale #philosophy #PierreTeilhardDeChardin #socialMedia #Technology #TeilhardDeChardin





