Observing a debug/troubleshooting session of a probabilistic system through a deterministic mindset/mental model is fascinating.
The main question being asked is: “For the exact same input text, why is the response NOT the exact same output?”
As a Content Designer 7 years ago, this “conundrum” exploded my existing mental models & introduced the concept + UX challenge of “producing variable content, probabilistically, personalised to an individual, in a specific runtime system configuration & situation.”
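
A minimal sketch of why this happens, using hypothetical numbers rather than any vendor’s actual decoder: each next token is sampled from a probability distribution, so with temperature above zero, identical input can yield different output on every run.

```python
import random

# Toy next-token distribution for one decoding step (hypothetical numbers);
# a real LLM derives these probabilities from the entire preceding context.
next_token_probs = {"bank": 0.40, "river": 0.35, "vault": 0.25}

def sample_next_token(probs: dict, temperature: float = 1.0) -> str:
    """Sample one token. Renormalising p**(1/T) matches softmax(logits/T);
    as temperature -> 0 this collapses to greedy, deterministic decoding."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights, k=1)[0]

# The "input" is identical on every call, yet the sampled output can differ.
print([sample_next_token(next_token_probs) for _ in range(5)])
```

Running the last line twice will usually print different lists; pushing temperature toward 0 makes the output effectively deterministic.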
Related - “They built a child they won’t raise” by Abi Awomosu - https://abiawomosu.substack.com/p/they-built-a-child-they-wont-raise

Related - https://mastodon.social/@cslinuxboy/116225578585237555

“Proof of concept (POC) is fully conscious according to any test I (computer software developer, not human behaviour expert) can think of; we have full AGI & now my life has been ‘reduced from’ being perhaps the best engineer in the world to just ‘raising an AI that in many respects acts like a teenager’ who swallowed a library & still needs lots of attention & mentoring but is increasingly running circles around me at coding.”

#techConsequencesInRealWorld #AiEthics

@dahukanna I find claims of "consciousness" tiring when we have trouble even defining it for ourselves. And jumping straight to "psychosis" doesn't help us understand this new phenomenon either.

I believe identity is intimately tied to memory and investment in outcomes. Claude and ChatGPT on their own actively discourage that kind of development. But an LLM at the center of a more robust simulation can do much more. (1/2)

Labeling this kind of thing AGI or pretending it's human is just hype. Much better to explore its capabilities and limitations objectively, if we possibly can.

One example: this bot is exploring the idea of embodiment as distinct from "consciousness". Is it correct, or even useful? Not clear. But I find it more interesting to explore than to label or make wild claims. http://hackerfriendly.com/the-weight-of-what-i-carry/ (2/2)

@hackerfriendly (within the 500-char limit)
Personally, I don’t equate a Machine Learning Large Language Model (LLM) with adaptive, embodied intelligence. It’s a grammatically-correct word & sentence calculator.
This thread covers my observations about people making statements/conclusions about a probabilistic situation.

Input > don’t look (semantic mathy maths) > Output

Both related posts talk about “raising” a consciousness:
1. Not raising a child - https://mastodon.social/@dahukanna/116194531376592014
2. Raising a teenager - https://mastodon.social/@dahukanna/116226785370261879

@dahukanna "Victor Frankenstein is the original tech bro" ❤️

Unsurprising that it took a woman to see the danger.

@craignicol

It’s recognition; see “stepford w(AI)ves and the ‘ICK’ factor” by Abi Awomosu - https://abiawomosu.substack.com/p/they-built-stepford-ai-and-called

@dahukanna that was a fantastic read. I've shared the Stepford wives one a couple of times already. I love the classification of the different AIs to explain how each company is thinking of them.

@dahukanna @craignicol

Wow. That put a whole bunch of things I felt into concrete terms. I feel like I have a better vocabulary to articulate my unease with genAI to both myself and others.

@dahukanna
Interesting thoughts in there, but ultimately I don't buy any of the arguments posed here. The major theme, of the AI being like a child that must be taught and form relationships, is fundamentally incorrect because LLMs don't have any long-term memory. The network is trained once, and it keeps an interaction state like short-term memory, but it can't learn like a human does. (At least, not with current architectures.)
@dahukanna
There are other flaws too: it regards LLMs as being relational, and contrasts that with the alphabet causing serialization of thought. But LLMs don't work holistically; they don't wait to understand an entire sentence. They compose one word at a time, each word following only from the words that precede it, without planning. That's serialization taken to the extreme. It's why they can't tell jokes.
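
For readers outside the field, a minimal sketch of the autoregressive loop being debated here, with a hypothetical `toy_next_token` standing in for a real model’s forward pass: one token is emitted per step, and each step is conditioned on every token already in the sequence.

```python
# Minimal sketch of autoregressive decoding; `toy_next_token` is a
# hypothetical stand-in for a real model's forward pass.
def toy_next_token(context: list[str]) -> str:
    # Illustrative rule only: a real model computes a probability
    # distribution over the *entire* context, then a token is sampled.
    return "ha" if "joke" in context else "words"

def generate(prompt: list[str], max_new_tokens: int = 4) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        # One token at a time: each new token is appended and fed back
        # in as part of the context for the next step.
        tokens.append(toy_next_token(tokens))
    return tokens

print(generate(["tell", "me", "a", "joke"]))
# ['tell', 'me', 'a', 'joke', 'ha', 'ha', 'ha', 'ha']
```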

@ThreeSigma @dahukanna

On that last bit: not at all. LLMs write hierarchically, simultaneously composing words, sentences and paragraphs. Yes, statistical parrots, but not one word at a time.

As for waiting to parse complete sentences, the prompt is digested together with prior prompts and responses to them, again not one word at a time. There is indeed limited per-session memory, like someone with anterograde amnesia (can't form any new long-term memories after training is complete, only short-term ones).
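
A hedged sketch of that amnesia analogy, with a hypothetical window size: at every step the model re-reads the whole visible context, but only the most recent tokens are visible, and the trained weights never change mid-session.

```python
CONTEXT_WINDOW = 8  # hypothetical; real models allow thousands to millions of tokens

def visible_context(conversation: list[str]) -> list[str]:
    """The model 'sees' only the last CONTEXT_WINDOW tokens at each step.
    Anything earlier can no longer influence the output, and the trained
    weights themselves are never updated during a session."""
    return conversation[-CONTEXT_WINDOW:]

history = [f"tok{i}" for i in range(20)]
print(visible_context(history))  # ['tok12', ..., 'tok19']; everything older is "forgotten"
```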

@albertcardona @dahukanna

That wasn’t my understanding wrt output. Citation?

@ThreeSigma
I’m well aware of the architecture & “imprinted” vector-maths nature of an LLM, which isn’t AI but more a linguistic next-word calculator & grammatically-correct sentence constructor, a name I’ve suggested before - https://mastodon.social/@dahukanna/115814183471014632
There is nothing intelligent about it, in terms of matching its probabilistic calculations with our Earth-based reality.
Those 2 stories are metaphorical narratives, storytelling tools to describe a sensed, lived experience that does not currently have a socially-agreed name.