It’s absurd and offensive to say that today’s AI techniques will soon produce AGI. That’s because Machine Learning (ML) is fundamentally the wrong tool for the job.

ML is just a collection of mathematical tools and practices for approximating functions. That is, it learns a mapping from inputs to outputs based on a vast number of examples, nothing more.
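To make "approximating a function from examples" concrete, here's a minimal sketch: closed-form least-squares fitting of a line to input/output pairs. The data and the model are hypothetical illustrations, not any particular real system.

```python
# "ML as function approximation" in miniature: learn y ≈ w*x + b
# purely from example (input, output) pairs via least squares.

def fit_line(xs, ys):
    """Learn a mapping from inputs to outputs using only examples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares slope and intercept.
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return lambda x: w * x + b

# Examples drawn from the "true" function y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
f = fit_line(xs, ys)
print(f(10))  # → 21.0: the learned map generalizes to unseen inputs
```

The point is what's *absent*: `f` has no memory, no ongoing activity between calls; it is fully characterized by its input/output behavior.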

That means that the ML approach to AGI is literally trying to approximate the function that is the human mind. Which is bogus, because the mind is not a function. It's not just inputs and outputs! It continuously generates a chaotic swirl of experience, emotion, thought, and behavior, whether you’re sensing and acting or not.

To apply ML to the task is to say all that rich inner depth either doesn’t matter, or can reliably be inferred from nothing but outward appearances. I don’t know about you, but I find both of those options ridiculous. It’s dehumanizing, and it shows incredible hubris.

@brembs
#ai #ml #agi

@ngaylinn @brembs Don't kid yourself, your brain is just a blob of electric jello. Reductionism may seem foolish, and there's still a lot we don't understand about how people make decisions, but if you take part in everyday routines like showering or eating breakfast, then you're far more predictable than you think.

It's not absurd to claim the first AGI will be one of GPT-4's cousins, but I agree that it's arrogant to claim we've got the blueprint for AGI all mapped out.

@ngaylinn @brembs The problem with AI discourse and speculation on the emergence of super-human intelligence is that it will never be incontrovertible: it's impossible to prove that AGI actually is AGI, because it's impossible to distinguish a living, thinking machine that draws on its experiences to form ideas from a machine that just knows all the right answers to our questions.

Vis-à-vis solipsism, we either have to give machines the benefit of the doubt, or deny them any rights.

@ngaylinn @brembs Read Klara and the Sun for more food for thought on this matter.

@psboyce @brembs I think you misunderstood my post.

I'm not arguing that "the brain is too special and unpredictable for AI to reproduce." I'm making a much more specific technical argument: "the brain should be modeled as a process, not a function. This should be true of any AGI. Therefore, ML as a mathematical tool for approximating functions is not suitable to produce AGI."

Personally, I don't like the concept of AGI at all, but I am actually making a claim about what's necessary to produce AGI. It requires dynamic inner state. Something like a train of thought, an inner sense of purpose, or an unfolding model of self and reality. Without those things, what you have is fundamentally different from natural intelligence.
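The process-vs-function distinction can be sketched in a few lines of toy code. The names and behaviors here are invented illustrations, not a claim about any real AI system:

```python
# A stateless function vs. a process with evolving inner state.

def stateless_mind(stimulus):
    """A pure function: the same input always yields the same output."""
    return f"reaction to {stimulus}"

class ProcessMind:
    """A process: output depends on inner state, which keeps
    changing even when no input arrives at all."""
    def __init__(self):
        self.inner_state = []

    def idle_tick(self):
        # Inner activity continues in the absence of input.
        self.inner_state.append("daydream")

    def perceive(self, stimulus):
        self.inner_state.append(stimulus)
        # The response depends on the whole history, not just the present input.
        return f"reaction to {stimulus} after {len(self.inner_state) - 1} prior states"

m = ProcessMind()
first = m.perceive("light")
m.idle_tick()                 # time passes; the inner state drifts
second = m.perceive("light")  # same stimulus, different response
```

A function approximator can only ever imitate `stateless_mind`; the argument above is that a mind behaves like `ProcessMind`, whose input/output behavior alone doesn't pin it down.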

I'm arguing that the way we design and build today's AI models means they don't and can't have that quality. That's just not the sort of algorithm they run. I'm not saying mind-like computer programs are impossible, just that ML is not a suitable tool for creating them.

@ngaylinn @brembs No no, you misunderstood *my* post. I never claimed it was a good blueprint for AGI but your window of attention is far too short for that kind of nuance. That's all irrelevant anyway because you clearly don't understand GPT or deep learning either. I digress.

You COMPLETELY missed my broader point, which is that at a fundamental, philosophical level we have no way to distinguish human sapience from any other kind of sapience. That which behaves sufficiently human, is. Goodbye