Observing a debug/troubleshooting session of a probabilistic system through a deterministic mindset/mental model is fascinating.
The main question being asked: “For the exact same input text, why is the response NOT the exact same output?”
As a Content Designer, 7 years ago this “conundrum” exploded my existing mental models & introduced the concept + UX challenge of “producing variable content, probabilistically, personalised to an individual, in a specific runtime system configuration & situation.”
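A minimal sketch of why identical inputs can yield different outputs, assuming the common temperature-sampling decoding step (the logit values and temperature here are illustrative, not from any specific model):

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=None):
    """Sample one token id from a list of logits.

    With temperature > 0 the choice is drawn from a probability
    distribution, so repeated calls with the *same* logits can
    legitimately return different token ids.
    """
    rng = rng or random.Random()
    # Scale logits by temperature, then apply a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Roulette-wheel selection over the resulting distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

# Same "input" (logits), sampled repeatedly -> varying outputs.
logits = [2.0, 1.5, 0.2]
samples = [sample_next_token(logits) for _ in range(20)]
print(set(samples))  # usually more than one distinct token id
```

Deterministic behaviour only appears at the extreme (greedy decoding, i.e. always taking the argmax), which most deployed chat systems do not use by default.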
Related: “They built a child they won’t raise” by Abi Awomosu - https://abiawomosu.substack.com/p/they-built-a-child-they-wont-raise
@dahukanna
Interesting thoughts in there, but ultimately I don't buy any of the arguments posed here. The major theme of the LLM as a child that must be taught, and with which relationships form, is fundamentally incorrect because LLMs don't have any long-term memory. The network is trained once, and it keeps an interaction state like short-term memory, but it can't learn like a human does (at least, not with current architectures).
@ThreeSigma
I’m well aware of the architecture & “imprinted” vector-maths nature of an LLM, which isn’t AI but more a linguistic next-word calculator & grammatically correct sentence constructor - https://mastodon.social/@dahukanna/115814183471014632
There is nothing intelligent about it, in terms of matching its probabilistic calculations with our Earth-based reality.
Those 2 stories are metaphorical narratives - storytelling tools used to describe a sensed, lived experience that does not currently have a socially agreed name.