I’ve published a new piece setting out a stage-based framework for consciousness, cognition, and human–AI systems.

The aim is not to defend or criticise AI, but to name boundaries that are currently blurred: distinguishing sentience, cognition, awareness, and hybrid human–AI cognition so that risk, responsibility, and governance remain legible.

When stages are unnamed, projection and category error take over. This is a quiet attempt to stabilise that space.

#AISafety #AIGovernance #SystemsThinking #Consciousness #CategoryError #BoundaryConditions #HumanAISystems

https://substack.com/@hybridmind42/note/p-187202215?r=75c2ac

Consciousness, Cognition, and Boundaries

Why naming stages matters — and where AI actually belongs

I was thinking I needed more small plates; then I noticed I was seeing these as decor, not plates 🤷‍♂️🤔🙄 oops

#CategoryError

Scepticism also means that instead of buying into the hallucination that is purely 'lived experience', we can check the people we interact with against third-party sense data, against the physical libraries of other people's thoughts and experiences. If their memory validates against objective truth, we can celebrate. If it is falsified, we are in a position to call the hallucination for what it is — Pseudo-Profound Bullshit.

#ChatGPT #Bias #CategoryError

https://robert.winter.ink/our-mind-is-a-blurry-image-of-life/

Our Mind is a Blurry Image of Life

There is much hope, for to have gotten this far is to be forewarned and thus forearmed. In that, we do well to employ scepticism when listening to a human interlocutor, because even the best of us are filling in the blanks in our memory.

Dr Robert N. Winter

Thinking about existential risks and optimism/pessimism...

(If you don't like contemplating The End of Everything ... turn away now.)

I was revisiting an old post of mine on how Steven Pinker's Panglossianism annoys me:

https://diaspora.glasswings.com/posts/d0b93200d8e40138d780002590d8e506

Past Me wrote something Present Me is nodding vigorously to:

"A global catastrophic risk by definition has not yet occurred and therefore of necessity exists in a latent state. Worse, it shares non-existence with an infinite universe of calamities, many or most of which can not or never will occur, and any accurate Cassandra has the burden of arguing why the risk she warns of is not among the unrealisable set."

That is, a moronically tedious response to raising questions of existential or major threats (e.g., collapse of civilisation) is that they've often been predicted but haven't occurred yet. (At least for Civilisation Present Main Branch.)

This ... has, to me, strong shades of the #AnthropicPrinciple: if we were living in a timeline in which such an existential threat had occurred ... we wouldn't be having the conversation right now.

Moreover, presuming You Only Die Once (Ian Fleming / James Bond notwithstanding), then of the entire universe of existential threats, only one can in fact be realised.

To read this as suggesting that all other potential risks are then irrelevant ... seems to me a Category Error of Unusual Size. Put another way: with enough potential trials (say, habitable worlds on which technological civilisations do arise) one might suspect that there are in fact numerous ways in which those civilisations meet their end. It's just that our tools for information gathering and transmission are somewhat unequal to the task of actually recording that, at least at present. And quite possibly for all time.

But in a Gedankenexperiment presuming an Actuarial Department of All Civilisations In The Universe, there might very well be at least some experienced distribution of Civilisation-Ending Events which could be catalogued and for which actuarial risk might be tabulated. The nature of the problem is similar to the distinction between risks ascribable to a single individual vs. an entire population.
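To make that Gedankenexperiment concrete, here's a minimal sketch in Python of what such a tabulation might look like. Every event class, annual rate, and kill fraction below is an invented placeholder, not an estimate:

```python
# Toy sketch of the Actuarial Department's table. Every event class,
# annual probability, and kill fraction here is an invented placeholder.

catalogue = {
    # event class:          (assumed annual probability, assumed fraction killed)
    "asteroid impact":       (1e-8, 1.00),
    "supervolcano":          (1e-5, 0.10),
    "engineered pandemic":   (1e-4, 0.05),
}

population = 8.0e9  # assumed

for event, (annual_p, kill_fraction) in catalogue.items():
    expected_deaths = annual_p * kill_fraction * population
    print(f"{event:>20}: ~{expected_deaths:,.0f} expected deaths/year")
```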

As an illustration, say your individual risk of dying in an automobile accident might be roughly comparable to that of dying in a mass-extinction asteroid impact --- the latter are less frequent but have far greater magnitude.

(Asteroids also likely pose a far more consistent risk to individual lives over the entire history of the Earth than automobiles do --- roughly 4.5 billion years to date for the first, and about a buck-twenty-five centuries for the second.)
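The back-of-envelope version of that comparison is easy to run. A minimal sketch, assuming round figures throughout --- and how the two risks compare is driven almost entirely by the assumed impact interval and kill fraction:

```python
# Rough per-person annual death risk, cars vs. impactors.
# All inputs are assumed round figures, not sourced estimates.

world_population = 8.0e9           # assumed
car_deaths_per_year = 1.3e6        # assumed global road fatalities per year

impact_interval_years = 1.0e8      # assumed gap between extinction-level impacts
impact_kill_fraction = 1.0         # assumed: such an impact kills everyone

p_car = car_deaths_per_year / world_population           # ~1.6e-04
p_impact = impact_kill_fraction / impact_interval_years  # ~1.0e-08

print(f"annual per-person risk, car crash: {p_car:.1e}")
print(f"annual per-person risk, impactor:  {p_impact:.1e}")

# Shrink the interval to cover smaller but still catastrophic impactors
# (say 1e4--1e5 years) and the two risks close to within an order or two
# of magnitude --- which is the sense of "roughly comparable" above.
```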

But even that comparison fails to capture what I see as a salient distinction between car wrecks and meteor strikes: odds are very low that everyone on Earth is involved in a fatal car collision at once, but high that they might perish in the same Large Impactor Event. Simply focusing on individual actuarial risk utterly ignores this.
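The distinction shows up immediately in the numbers: give two hazards identical expected annual deaths, one acting on individuals independently and one all-or-nothing, and the chance that everyone perishes in the same year differs astronomically. Again, a sketch with assumed figures:

```python
import math

n = 8_000_000_000        # assumed population
p_ind = 1.6e-4           # assumed independent per-person annual risk (car-like)
p_all = 1.6e-4           # assumed annual chance of a single everyone-dies event

# Identical expected annual death tolls:
print(f"expected deaths, independent hazard: {n * p_ind:,.0f}")
print(f"expected deaths, correlated hazard:  {n * p_all:,.0f}")

# Probability that *everyone* dies in the same year:
log10_everyone_ind = n * math.log10(p_ind)   # independent: p_ind ** n underflows
print(f"independent: 10^{log10_everyone_ind:.3g}")   # ~10^(-3e10), effectively zero
print(f"correlated:  {p_all:.1e}")                   # 1.6e-04
```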

But back to Pinker, Panglossianism, and dismissing catastrophic risk on the basis that it's not yet occurred: the dismissal is directly and intrinsically related to the nature of the threat itself. An existential threat is precisely one that no surviving observer could ever have seen realised, so its non-occurrence to date is exactly what every observer must report --- and in its own way the dismissal actually validates the nature and scope of such threats.

It's also utterly irrelevant to characterising statistical likelihood in any meaningful sense, as the objection is effectively a class of sampling error and self-selection bias.
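Here's a toy simulation of that selection effect (all parameters invented): however high the true annual rate of a terminal event, every civilisation still around to consult its records finds zero occurrences in them.

```python
import random

# Toy model of the self-selection bias: simulate many civilisations, each
# facing an assumed fixed annual chance of a terminal event, and look at
# what the survivors' own records show. All parameters are invented.

random.seed(42)
TRUE_ANNUAL_RATE = 0.004   # assumed true chance per year of a terminal event
YEARS = 500
CIVILISATIONS = 10_000

survivors = 0
for _ in range(CIVILISATIONS):
    # all() short-circuits in the year a civilisation is wiped out
    if all(random.random() >= TRUE_ANNUAL_RATE for _ in range(YEARS)):
        survivors += 1

print(f"survivors: {survivors} / {CIVILISATIONS}")
# Expected survival rate: (1 - 0.004)**500 ~= 13.5%. Every survivor's
# record shows zero terminal events, so the frequency *observed by
# observers* is 0 regardless of TRUE_ANNUAL_RATE --- the objection
# "predicted but never happened" is guaranteed to hold for whoever asks.
```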

Anyhow, that's what's been troubling my little head for the past day or so. And I don't think I've seen this expressed elsewhere (though as usual, I suspect it's not an entirely novel realisation). If this does sound familiar, cites/references are strongly encouraged.

#ExistentialThreats #CatastrophicRisk #EndOfTheWorld #CategoryError

Steven Pinker's Panglossianism has long annoyed me

A key to understanding why is in the nature of technical debt, complexity traps (Joseph Tainter) or progress traps (Ronald Wright), closely related to Robert K. Merton's notions of unintended consequences and manifest vs. latent functions. You can consider any technology (or intervention) as having attributes along several dimensions. Two of those are impact (positive or negative) and realisation timescale (short or long).

                    Positive            Negative
Short realisation   Obviously good      Obviously bad
Long realisation    Unobviously good    Unobviously bad

Technologies with obvious quickly-realised benefits are generally and correctly adopted, those with obvious quickly-realised harms rejected. But we'll also unwisely reject technologies whose benefits are not immediately or clearly articulable, and adopt those whose harms are long-delayed or unapparent. And the pathological case is when short-term obvious advantage is paired with long-term ...

Glass Wings diaspora* social network