Journalists seem to finally be realising that #MOLE Trainers selling "AI" are selling us a bill of goods;

https://www.rnz.co.nz/podcast/mediawatch?share=d6af8e05-b61a-4f80-ace5-e9b00a408999

But they're still buying into the "hallucination" line. The MOLE is not "hallucinating", it's just generating plausible sentences. Just like when it outputs a sentence that happens to be accurate.

Either way, it's not "answering" your questions, it's spitting out plausible sentences. Because this is all it can do.
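
To see what "plausible sentence generation" means in miniature, here's a toy sketch (my own illustration, nothing from the podcast): a bigram model that picks each next word by how often it followed the previous word in its training text. Whether the output is *true* never enters into it. Real LLMs are vastly bigger transformer networks over subword tokens, but the output step is the same in kind: sample the next token from a probability distribution.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram "language model". It emits whichever
# word plausibly follows the last one, based on counts from a tiny corpus.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which words follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Emit a plausible-looking word sequence; no facts are consulted."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # frequency-weighted sampling
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```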

(1/2)

#podcasts #RNZ #MediaWatch #AI #journalism

Mediawatch podcast

A critical look at the New Zealand media.

RNZ

To Mediawatch's credit, they have a great interview with NZ Skeptics spokesperson Mark Honeychurch, who says pretty much what I would have said about the "AI" spruiked by #MOLE Trainers.

(2/2)

#NZSkeptics #MarkHoneychurch

Wow, this pioneering #MOLE Trainer is high on his own supply;

"Your sneaker is not a psychological entity. But machine learning systems, neural networks, those are thing that actually have ways of thinking, of feeling, of recognising patterns, of predicting, of creating opinions of their own, and ... adapting their behaviour, and then using what they've learned in creative ways... and then they're influential."

#DeKai, 2025

https://www.teamhuman.fm/episodes/325-de-kai

No. Sorry. They're not.

(1/?)

#AI

De Kai: Humans Get ONE Shot to Parent AI | Team Human

Ep. 325 De Kai, the man who built the world’s first global translators and AI systems, says we better learn to parent our AI offspring before it’s too late. The author of Raising AI argues how we behave in front of AI’s matters more than whatever we tell them.

Team Human

"So we're not just automating muscle, as we were in the 20th century, we're automating a mind. Even though it's not your average neurotypical human mind, it is still ... an artificial infant brain, or a toddler brain, or a tween brain, that is picking up - with its own neuro-atypical psychology - what we are showing and our own psychologies."

#DeKai, 2025

https://www.teamhuman.fm/episodes/325-de-kai

This clumsy attempt to compare a Trained MOLE to a neurodivergent person is both incorrect *and* insulting.

(2/?)

"We need to be thinking about AIs not in these obsolete 20th century terms, where we're thinking 'oh, how do we control, how do we regulate these passive tools mechanical tools. We need to also be understanding this in terms of the social psychology, of the interaction between human psychologies and these artificial psychologies."

#DeKai, 2025

https://www.teamhuman.fm/episodes/325-de-kai

*cough* bullshit *cough*.

(3/?)

This MOLE Trainer is describing plausible sentence generators like ChatGPT as if they're artificial minds from sci-fi stories, when there's so much evidence they're not. Notice how he snuck in an anti-regulation sentiment there, disguised as a call for philosophical curiosity or silicon empathy? That's some silver-tongued reputation laundering right there.

What's really disappointing is that @Rushkoff doesn't immediately push back on this nonsense. He buys it hook, line and sinker.

(4/4)

"... with these giant, artificial AI influencers in the world - the most powerful influencers in the world - far outnumbering humans, who are already ... largely unparented, feral tweens, we ... are either going to ... pretending the problem doesn't exist, pointing fingers at either regulators or tech companies, or we're going to have to ... [think] OMG I'm about to become a parent ..."

#DeKai, 2025

https://www.teamhuman.fm/episodes/325-de-kai

See what he did there?

(1/?)

He used a rhetorical sleight of hand to shift responsibility for the negative externalities of #MOLE Training from the people doing it (like him), and the corporations paying for and profiting from it (like his employers), onto *you*.

They're not responsible for the consequences of unleashing flocks of stochastic parrots on the world, and convincing people they're human-level minds so they can make more money off it. You are, because you're not "parenting" them right.

Fuck. That. Shit.

(2/?)

This is from the same reputation laundering playbook as companies pumping out disposable products and packaging that aren't home compostable, then blaming individuals for the resulting mountains of rubbish because they're "litterbugs";

https://www.youtube.com/watch?v=koqNm_TgOZk

Don't fall for it. They make the crap, they can choose to stop. So they're responsible for any foreseeable consequences. Not us.

(3/3)

Adam Ruins Everything - The Corporate Conspiracy to Blame You for Their Trash

YouTube

"I don't mind even people thinking of ["AI"] as conscious entities. Is there a problem if we take the metaphor literally and think of them as ... sort of ... children, or a next generation of life that we're raising?"

#DouglasRushkoff, 2025

https://www.teamhuman.fm/episodes/325-de-kai

Ummm ... *yes*! The same problem as thinking of Android devices as children, or the next evolution of life. It's delusional nonsense, and it leads to huge category errors in the way we understand the tech. See above.

#MOLE #AI

A classic category mistake follows soon after;

"Because within that category - without considering the metaphysical questions - they are already psychological entities that are doing a lot of their artificial mental processing below the level of conscious awareness."

#DeKai, 2025

https://www.teamhuman.fm/episodes/325-de-kai

They are doing statistical guessing, not "mental processing", and *all* of it is "below the level of conscious awareness", for the same reasons the operations of a calculator are.

(1/2)

What he's doing is roughly equivalent to saying "look at all these dudes trying to kill me in this single-player game". There's no dudes. There's no trying. Just a simulation of them.

Thinking of them as having intention is like thinking of an electric razor as *wanting* to shave your face. It's anthropomorphism in both cases.

(2/2)

@strypey
You're almost certainly right. The problem is that we don't understand consciousness. We really don't have a clue what the physical prerequisites for conscious awareness are, and the AI boosters take advantage of that by insinuating that whatever LLMs are doing *could* qualify. I think it's extremely unlikely that it does, but we also can't definitively rule out panpsychism at this point, so who knows.

@DrMcStrange This is all fair comment. But I think it's worth making sure the burden of proof stays in the right place.

If I make the claim that Santa Claus doesn't exist, I don't have to prove it. It's on anyone positing the existence of Santa Claus to present evidence in support of that claim.

The same is true of computers being "psychological entities". It's not on me to prove that property doesn't exist. The onus is on those claiming it does to prove it.

@strypey
Hmm, I'm not so sure about that. Not that long ago it was widely believed that animals didn't feel pain, and we needed to prove that they do. That position resulted in (and continues to result in) a great deal of suffering. When it comes to potentially sentient entities, there's a strong case for applying a precautionary principle in order to avoid causing said entities to suffer.

The AI boosters make some outlandish claims, for sure, but I think the possibility of consciousness is something that deserves serious consideration, as remote as that possibility is at this stage.

@DrMcStrange
> Not that long ago it was widely believed that animals didn't feel pain, and we needed to prove that they do

This fails a test of empirical observation. It's very obvious to anyone who isn't a sociopath when a nonhuman is suffering. It also fails a basic test of logic. Humans are animals. Humans can feel pain. Therefore nonhuman animals can also feel pain.

So the burden of proof pretty clearly belongs on those claiming they don't. Neither applies in the case of an "AI".

@strypey
I'm not so sure about either of those arguments.

It's very obvious to most people when a mammal is suffering. With fish it's much less obvious, and insects even less - to the point that you can carry out experiments on insects without needing approval from an ethics committee. And attitudes have demonstrably changed over time.

In terms of the logic, you're arguing that if all members of a subset have a given property, then all members of the superset must also have that property. I don't know the name of the fallacy off the top of my head, but it's clearly a fallacy.

So far we haven't pinned down the neural correlates of consciousness, and there are cases where we can't even tell whether humans are conscious. So we've got a way to go before we can definitively rule out consciousness in a system that does a pretty decent impression of holding a conversation.

As I said, I think LLMs are almost certainly not conscious, but without actually knowing what process in the brain produces awareness, we can't say with 100% certainty that the kind of information processing LLMs do isn't it, and the grifters will be able to exploit that doubt.

(1/2)

@DrMcStrange
> you're arguing that if all members of a subset have a given property, then all members of the superset must also have that property

Not quite. That's the 'all fish are trout' fallacy. What I'm arguing is that a distinction is being made between "human" and "animal", which assumes that the former feel pain, but doesn't make the same assumption about the latter. Clearly a false dichotomy. So as I say, it fails a basic test of logic.

(2/2)

@DrMcStrange
> without actually knowing what process in the brain produces awareness, we can't say with 100% certainty that the kind of information processing LLMs do isn't it

Granted, but that argument applies equally to rocks. If the burden of proof is on someone claiming a rock is aware - and it seems pretty clear to me that it is - then it's also on anyone making the same claim about other objects, like computers. However convincingly they may simulate awareness.

@strypey
No, it doesn't apply equally to rocks. Rocks don't do any information processing, let alone processing that could be analogous to anything that happens in our brains. LLMs do, at least in a very limited way. That and the simulation of awareness do constitute evidence that they could be aware - stronger evidence than we have for rocks. I know enough about how LLMs work not to be convinced, but it's absurd to equate the claim that they're aware to the claim that rocks are aware.

(1/?)

@DrMcStrange
Again, you're arguing with a different claim from the one I'm making. Which is about where the burden of proof lies.

I'm *not* saying computers are like rocks. I'm saying they share a fundamental property with rocks; being inanimate objects. One they don't share with any of the beings known to be aware (ourselves) or presumed to be aware (other people). So it's not on me to prove that rocks or LLMs are not aware. I get to assume that for free, until proven wrong.

(2/?)

But to address the specifics of your line of argument;

@DrMcStrange
> Rocks don't do any information processing, let alone processing that could be analogous to anything that happens in our brains. LLMs do, at least in a very limited way. That and the simulation of awareness do constitute evidence that they could be aware

There's a *lot* of assumptions in there. Let's unpack them.

(3/?)

Your assumptions are, as far as I can tell;

1) Rocks don't do any information processing

2) Awareness is an epiphenomenon of our brains

3) Awareness is a property of something happening in the physical structure of our brains

4) LLMs do something analogous to what happens in our brains

5) The simulation of something suggests it might actually be real, not simulated

(4/?)

Let's go through them one by one.

1) Rocks don't do any information processing

This is an unknown. It may be that rocks are computers, but we lack an interface that would allow us to make sense of the computation they do. Jaron Lanier explains this in his typically whimsical style here;

https://davidchess.com/words/poc/lanier_zombie.html

(5/?)

2) Awareness is an epiphenomenon of our brains

3) Awareness is a property of something happening in the physical structure of our brains

These too are unknowns. They smell suspiciously like metaphysical assumptions, which would make them unprovable unknowns.

(6/?)

4) LLMs do something analogous to what happens in our brains

This is a common misapprehension, as David Chapman explains here;

"I often put 'neural networks' in scare quotes because the term is misleading: they have almost nothing to do with networks of neurons in the brain. Confusion about this is a major reason artificial 'neural networks' became popular, despite their serious inherent defects."

https://betterwithout.ai/artificial-neurons-considered-harmful

Artificial neurons considered harmful | Better without AI

Better without AI
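
To make Chapman's point concrete, here's a minimal sketch (my own illustration, not his) of everything a single artificial "neuron" computes: a weighted sum pushed through a simple nonlinearity. The names and numbers are made up for illustration.

```python
# A minimal sketch of what one artificial "neuron" computes: multiply,
# add, clip at zero. No membranes, no spike timing, no neurochemistry.
def artificial_neuron(inputs: list[float],
                      weights: list[float],
                      bias: float) -> float:
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, weighted_sum)  # ReLU, the usual "activation"

# Three "synapses" in, one number out.
print(artificial_neuron([0.5, -1.0, 2.0], [0.1, -0.4, 0.2], bias=0.05))
# -> 0.9
```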

(7/8)

Finally;

5) The simulation of something suggests it might actually be real, not simulated

When stated nakedly, this seems obviously wrong. Perhaps I'm sneaking in assumptions of my own by stating it this way? But I think this is the most common piece of faulty logic that leads to people anthropomorphising the products of #MOLE Training.

(8/8)

@strypey
It only holds where we understand the thing so poorly that we really don't know what produces it. Unfortunately that's the case with consciousness, which is why the Turing test is a thing.

@strypey
I agree: they have *almost* nothing to do with the networks of neurons in the brain. Again, we don't know what it is about the networks of neurons in the brain that makes us conscious. There's a chance that the very limited features they have in common with LLMs are important, and a very slim chance that they're sufficient.

@strypey
I have personal experience of introducing chemicals into my brain that reliably have profound effects on my consciousness, so I'd say the second one at least is a bit more than a metaphysical assumption.

@strypey
Maybe! But we know with a high degree of certainty that computers process information (because we designed them to, and they usually perform as we'd expect). If information processing is relevant to consciousness, then that's a big difference, evidence-wise.

@strypey
Ah, so we get to assume that only other people are aware? Then non-human animals presumably aren't, and can be assumed not to suffer until you can prove otherwise.

This is the problem: awareness is subjective. We can't be 100% certain of it other than in ourselves, so the best we can do is guess on the basis of shared properties. There's reasonable room for disagreement on which properties are important. Does the entity have to be alive? Maybe! But that's probably not a sufficient condition (or is it?). Maybe information processing is a key property, but I'm pretty confident that it's not sufficient.

Depending which properties we think are necessary or sufficient for awareness, we'll weight evidence very differently. And unfortunately we don't understand consciousness well enough to get general agreement on those properties. That leaves plenty of room for those who are more enthusiastic about AI to make claims that seem outlandish to you and me. My argument here is that we can't dismiss those claims out of hand, because we don't have a clear enough idea of what it would take for something to be aware, and the claims are based on properties that could at least plausibly be relevant.

A similar problem arises when we think about how we'd identify alien life. It could be radically different from anything we're familiar with! Doesn't mean we get to strip mine the planet if no one can prove there's life - the precautionary principle applies.

@strypey
To anyone who understands evolutionary biology, yes, it's clearly a false dichotomy. But there are still plenty of people who believe there's a categorical difference between humans and other animals. That's not a question of logic, it's a question of domain knowledge.

Even with biological knowledge, there has to be a point (or points) in evolution where consciousness arose (assuming not all organisms are conscious). So that categorical difference still exists, we just push it back up the tree somewhere.

@DrMcStrange
> That's not a question of logic, it's a question of domain knowledge

Granted. It's the lack of knowledge that leads to the faulty logic. That doesn't really affect the logic of my argument though, it just adds social context.

> there has to be a point (or points) in evolution where consciousness arose (assuming not all organisms are conscious)

Yes. That's an assumption, not a given. Maybe they are? So the burden of proof is on those making it a hard claim. Which was my point.

@strypey
Sure, but the proof (and the reason we'd probably agree that many non-human animals are conscious but plants aren't) typically involves observation of behaviour similar to our own (i.e. simulating awareness) along with shared structures that we think are relevant. In the case of animals, those structures are brains (with shared evolutionary history), but shared information processing architecture more broadly could reasonably be considered to count.