A potentially interesting question: how quickly would the appearance of sentience or intelligence that LLMs generate for some users collapse if they were forced to produce deterministic output?

In principle, you could add a single "freeze the random seed" toggle to any of the major chatbots; with that setting on, they would always return precisely the same output for a given input. Organisms, and by extension humans, cannot behave like this: no matter how stereotyped an organism's response may seem, it always differs, in however small a way, from any previous response. By that contrast alone, the LLM's illusion should immediately be obvious. But perhaps more interesting, for the folks who do think LLMs exhibit some form of sentience or intelligence: are we really meant to believe that a random number generator is the source of that sentience or intelligence? You could hook a random number generator up to a machine that is otherwise deterministic and clearly not sentient or intelligent, and it suddenly becomes so? How do you explain that?
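To make the "freeze the random seed" toggle concrete, here is a minimal sketch of seeded next-token sampling. The `toy_model` and its uniform weights are purely hypothetical stand-ins for a real LLM's next-token distribution; the point is only that with the seed fixed, the whole pipeline becomes a deterministic function of its input.

```python
import random

def sample_tokens(weights_fn, vocab, n_tokens, seed=None):
    """Sample a token sequence; a fixed seed makes the output fully deterministic."""
    rng = random.Random(seed)  # the "freeze the random seed" toggle
    out = []
    for _ in range(n_tokens):
        # weights_fn is a hypothetical model: context -> per-token sampling weights
        weights = weights_fn(out)
        out.append(rng.choices(vocab, weights=weights, k=1)[0])
    return out

# Toy stand-in for a language model's next-token distribution.
vocab = ["the", "cat", "sat", "mat"]
toy_model = lambda ctx: [3, 2, 1, 1]

a = sample_tokens(toy_model, vocab, 5, seed=42)
b = sample_tokens(toy_model, vocab, 5, seed=42)
assert a == b  # same seed, same input: identical output every time
```

In practice, production systems have other sources of nondeterminism (GPU kernels, batching), which is why this only works "in principle" as stated above.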

#AI #GenAI #GenerativeAI #LLMs
@abucci There's no such thing as consciousness: it's an indefinable concept. And determinism exists everywhere except in mathematics. Given those assumptions, every process in the real world is an approximation, and the problem lies precisely in that approximation.