It’s absurd and offensive to say that today’s AI techniques will soon produce AGI. That’s because Machine Learning (ML) is fundamentally the wrong tool for the job.

ML is just a collection of mathematical tools and practices for approximating functions. That is, it learns a mapping from inputs to outputs based on a vast number of examples, nothing more.
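To make that claim concrete, here's a minimal sketch of what "approximating a function from examples" means, stripped of all the neural-network machinery. The target function (y = 3x + 2) and the learning rate are made up for illustration; the point is that the learner only ever sees input/output pairs.

```python
# Toy illustration: "ML" as function approximation.
# We learn a mapping x -> y purely from example pairs, here generated
# by a made-up target function y = 3x + 2, via plain gradient descent.

examples = [(x, 3 * x + 2) for x in range(-5, 6)]  # input/output pairs

w, b = 0.0, 0.0   # model parameters, initially arbitrary
lr = 0.01         # learning rate (chosen for illustration)

for _ in range(2000):
    # Accumulate gradients of mean squared error over all examples.
    gw = gb = 0.0
    for x, y in examples:
        err = (w * x + b) - y
        gw += 2 * err * x / len(examples)
        gb += 2 * err / len(examples)
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # converges close to 3 and 2
```

Everything a deep network does is an elaboration of this loop: more parameters, fancier architectures, but still fitting a fixed input-to-output mapping against examples.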

That means that the ML approach to AGI is literally trying to approximate the function that is the human mind. Which is bogus, because the mind is not a function. It's not just inputs and outputs! It continuously generates a chaotic swirl of experience, emotion, thought, and behavior, whether you’re sensing and acting or not.

To apply ML to the task is to say all that rich, inner depth either doesn’t matter, or can reliably be inferred from nothing but outward appearances. I don’t know about you, but I find both of those options ridiculous. It’s dehumanizing, and shows incredible hubris.

@brembs
#ai #ml #agi

@ngaylinn @brembs Don't kid yourself, your brain is just a blob of electric jello. Reductionism is foolish, sure; there's still a lot we don't understand about how people make decisions. But if you take part in everyday routines like showering or eating breakfast, then you're far more predictable than you think.

It's not absurd to claim the first AGI will be one of GPT-4's cousins, but I agree that it's arrogant to claim we've got the blueprint for AGI all mapped out.

@ngaylinn @brembs The problem with AI discourse and speculation on the emergence of super-human intelligence is that it will never be incontrovertible. It's impossible to prove that an AGI actually is an AGI, because there's no way to distinguish a living, thinking machine that draws on its experiences to form ideas from a machine that just knows all the right answers to our questions.

Vis-à-vis solipsism, we either have to give machines the benefit of the doubt, or deny them any rights.

@ngaylinn @brembs Read Klara and the Sun for more food for thought on this matter.

@psboyce @brembs I think you misunderstood my post.

I'm not arguing that "the brain is too special and unpredictable for AI to reproduce." I'm making a much more specific technical argument: "the brain should be modeled as a process, not a function. This should be true of any AGI. Therefore, ML as a mathematical tool for approximating functions is not suitable to produce AGI."

Personally, I don't like the concept of AGI at all, but I am actually making a claim of what's necessary to produce AGI. It requires dynamic inner state. Something like a train of thought, an inner sense of purpose, or an unfolding model of self and reality. Without those things, what you have is fundamentally different from natural intelligence.
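The process-vs-function distinction being argued here can be sketched in a few lines. This is only an analogy, not an implementation of anything mind-like: a pure function maps input to output and is done, while a process (here, a Python generator with a made-up "mood" variable) carries internal state that keeps evolving whether or not meaningful input arrives.

```python
# Toy contrast between the two models being debated in this thread.

# A "function": the output depends only on the input; no inner state.
def as_function(stimulus):
    return stimulus.upper()

# A "process": runs continuously, maintains evolving internal state,
# and changes even when it receives no input. "mood" is a made-up
# stand-in for dynamic inner state.
def as_process():
    mood = 0
    while True:
        stimulus = yield f"mood={mood}"   # emit state, maybe receive input
        mood += 1 if stimulus is None else len(stimulus)

p = as_process()
print(next(p))        # mood=0
print(p.send("hi"))   # mood=2  (input changed the state)
print(p.send(None))   # mood=3  (state keeps evolving without input)
```

Calling `as_function` twice with the same input always yields the same output; the process's response depends on everything that happened before, and it keeps ticking between stimuli.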

I'm arguing the way we design and build today's AI models definitely doesn't and can't have that quality. That's just not the sort of algorithm they run. I'm not saying mind-like computer programs are impossible, just that ML is not a suitable tool for creating them.

@ngaylinn @psboyce @brembs

Agree with this - esp. the need for dynamic internal state + recursive goals.

This is definitely possible and present in ML. For example, MCTS is a process with variable internal memory. It exists outside the ML framework, though, which is limiting (and is maybe a corollary to the point you were making).

Also, flow and diffusion models aren't quite (conceptually) functions either -- they are SDE / ODE algorithms (or processes).
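To illustrate the SDE point: generation in a diffusion-style model is an iterative process, not one function evaluation. Here's a crude Euler-Maruyama-style sketch, not any real diffusion model; the drift toward zero and the annealing schedule are made up to show a sample evolving through many small stochastic steps.

```python
import random

# Toy "generation as a process": a sample evolves through many small
# steps instead of one input -> output mapping. Start from pure noise
# and repeatedly nudge it toward a made-up target (mean 0), while the
# injected noise anneals away (Euler-Maruyama-style integration).

random.seed(0)
x = random.gauss(0, 1)   # start from noise
dt = 0.01
steps = 1000
for step in range(steps):
    drift = -x                        # pull toward the target mean
    noise_scale = 1 - step / steps    # noise shrinks to zero
    x += drift * dt + noise_scale * random.gauss(0, dt ** 0.5)

print(x)  # ends up near 0
```

The interesting part is the loop itself: the intermediate states are the computation, which is what makes this feel more like a process than a function.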

@m8ta @psboyce @brembs That's a good point! People are certainly doing some cool and different things with neural networks these days. I was talking about ML specifically, but honestly I have no idea what the limits are of what we can build from neural networks with our own creativity. Certainly things that are different from any kind of intelligence known before! That's exciting / terrifying.

I'm very skeptical of our ability to reverse engineer human-like cognition, though. I don't think we even properly understand what it is yet. The appeal of the ML approach is that we don't actually have to specify how intelligence works, we hope the machine will figure that out for us!

@ngaylinn @psboyce @brembs
Exciting and terrifying indeed - new ways of moving / pumping / compressing / generating information!

> The appeal of the ML approach is that we don't actually have to specify how intelligence works, we hope the machine will figure that out for us!

Does this mean that if we have an architecture that subsumes the brain's (which, agreed, we don't really understand), given enough data (which we have: the internet), you can train it to figure out what intelligence is?

@ngaylinn @psboyce @brembs

To answer my semi-rhetorical question: I think not (and I suspect you'll agree?). Backprop is just not quite powerful enough at adding and removing dependencies in computational graphs. The scientific method, however, is...

@m8ta @ngaylinn @brembs I agree, though my reasoning is different. Statistical models can have up to two purposes: to explain, and to predict.

Deep learning models such as ChatGPT are extremely good at prediction (e.g. what is the next word in this ___) but extremely bad at explaining. Suppose we built a super-intelligent model: it would give shockingly human-like responses, but it wouldn't be able to explain why. It's sort of like how I can't explain how I form words using my vocal cords.
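A tiny bigram model makes this prediction-without-explanation point concrete. The corpus here is made up for illustration, and a bigram counter is of course vastly simpler than ChatGPT, but the shape of the limitation is the same: the model can tell you the likeliest next word, and the only "why" available is co-occurrence statistics.

```python
from collections import Counter, defaultdict

# Toy next-word predictor (a bigram model). It predicts but cannot
# explain: the "reason" for any prediction is just a frequency count.
corpus = ("the cat sat on the mat . the cat ate . "
          "the dog sat on the rug .").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Return the most frequent continuation seen in training data.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" (seen twice; "mat"/"dog"/"rug" once each)
```

Ask it *why* "cat" follows "the" and the only honest answer is "because it did before, more often than the alternatives" — which is a description of the counts, not an explanation.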

@m8ta @ngaylinn @brembs More commentary while it's fresh in my head:

ML is in a stage similar to microcomputers in the late '70s: the technology to make it possible has arrived, but we're still figuring out what it's good at, how best to use it, and what the best interface for it is. As a side effect, it's largely inaccessible to people who aren't closely associated with STEM, even though it would likely make the most meaningful impact in industries outside STEM. 1/2

@m8ta @ngaylinn @brembs The formula for the next multi-billion dollar tech company is an application or OS that integrates AI in such a way that it 1) reliably gives accurate and relevant responses, 2) integrates offline documents and databases into queries, 3) honors IP restrictions, and 4) automates repetitive tasks seamlessly. This would make AI far more accessible, and I think that's where it's headed next. Microsoft and Apple took the PC on the same trajectory, so I think this is plausible. 2/2

@psboyce @ngaylinn @brembs

Agree with this, but I suggest that we think even bigger -- the line between 'software' and 'model' will start to blur to the point that anyone can communicate with their software (hardware permitting), imbuing it with their desires / curated features from the data.

If the barrier to editing operationalized data (= models/sw) dramatically lowers, it should push the equilibrium of closed vs open source further towards open. (Or so goes my utopian dream.)

@m8ta @ngaylinn @brembs I'm not sure what you're trying to say but I think only data engineers would care about this. If the average person cared then the average person would be a computer programmer.
@psboyce You're absolutely right, but my argument is that if it's easier to change programs, a larger (small) fraction of people will do it.

@ngaylinn @brembs No no, you misunderstood *my* post. I never claimed it was a good blueprint for AGI, but your window of attention is far too short for that kind of nuance. That's all irrelevant anyway, because you clearly don't understand GPT or deep learning either. I digress.

You COMPLETELY missed my broader point, which is that at a fundamental, philosophical level we have no way to distinguish human sapience from any other kind of sapience. That which behaves sufficiently human, is. Goodbye