AI is cognitive automation, not cognitive autonomy

Like the rest of computer science, AI is about making computers do more, not replacing humans.

Sparks in the Wind

@fchollet

A balanced take on AI. Simply put, what the mainstream media currently considers intelligent is cognitive automation, aka programming. What is new is how an engineer programs: it used to be done in one step, by writing down the instructions; now it is done in two steps, with a learning phase added.

And as for intelligence, the term "learning", as it is used today, is not appropriate. We should call it statistical curve fitting.
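The "statistical curve fitting" point can be made concrete with a minimal sketch; the data-generating line and noise level below are purely illustrative:

```python
import numpy as np

# "Learning" as curve fitting: recover an underlying linear relationship
# from noisy samples by least squares -- no understanding involved.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 3.0 * x + 1.0 + rng.normal(scale=0.05, size=x.size)

# Fit a degree-1 polynomial; np.polyfit returns [slope, intercept].
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)  # slope close to 3.0, intercept close to 1.0, up to noise
```

The "program" here is not written down; it is fitted to the samples, which is exactly the two-step process described above.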

@fchollet

Back to the article: I don't think we will ever achieve self-driving cars with current technology, unless we constrain the environment, as is done in warehouses.

As for the breakthrough needed for Cognitive Autonomy, it may not be as far away as one might imagine!

@ocrampal @fchollet I agree. The difficulty is in getting people to appreciate what's required for a breakthrough in Cognitive Autonomy and strong AI when they only see everything through the lens of Cognitive Automation and narrow AI.

I'm thinking we need the Autonomy equivalent of MNIST as a benchmark test. Something publicly available that demonstrates the ability of an agent to adapt to unknown environments.

@KarlaParussel @fchollet
People believe that everything there IS can be UNDERSTOOD and subsequently described (Church/Turing). The assumption leading to this belief is that IS and UNDERSTOOD are exhaustive (they are not: https://geneosophy.com/the-temples/).
That is to say, a test for Autonomy is a contradiction in terms, because autonomy, as is the case for intelligence, cannot be understood, but it can be comprehended.
#understanding

@ocrampal @fchollet We can't develop a system we can't also test and evaluate.

The current paradigm is to train on static, unchanging, labelled datasets, then test and deploy without the system being able to update itself.

Whereas an autonomous agent is part of a sensorimotor loop: it senses its environment and performs an action, which changes the environment it is sensing. It must continually adapt and update itself.

This requires a very different form of testing.
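The sensorimotor loop described above can be sketched in a few lines. The toy environment, the target value, and both update rules are hypothetical stand-ins, not an actual benchmark:

```python
# Minimal sensorimotor loop: the agent's action changes the very
# environment it will sense next, so its "test data" is never static.
class Environment:
    def __init__(self):
        self.state = 0.0

    def sense(self):
        return self.state

    def apply(self, action):
        # The action perturbs the state the agent will observe next.
        self.state += action

class Agent:
    def __init__(self):
        self.estimate = 0.0

    def act(self, observation):
        # Continually adapt: track the observation, then act to push
        # the state toward an (arbitrary) target of 1.0.
        self.estimate += 0.5 * (observation - self.estimate)
        return 0.1 * (1.0 - self.estimate)

env, agent = Environment(), Agent()
for _ in range(200):
    env.apply(agent.act(env.sense()))
print(round(env.state, 2))  # the state is driven toward the target
```

Evaluating such an agent means judging the whole closed loop over time, not scoring predictions against a fixed held-out set.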

@KarlaParussel @fchollet
Yes, what you describe is very different from what we call testing today. You seem to want to test for a behaviour (autonomy).

A changing environment is qualitatively different from MNIST (for example), because there is no model to test the "intelligent artifact" against.

I agree with you that people should clearly distinguish Cognitive Autonomy. But it cannot be done with a "traditional" test. New conceptual tools are needed.

@ocrampal @fchollet Yes. New conceptual tools are needed, but also new forms of testing.

I believe there's a whole class of problems that can't be trained for because the real-world function producing the data changes too quickly. Autonomous AI falls into this class.

We need to create toy tests that are publicly available so people can start developing AI for this class of problems and comparing performance. It would play the same benchmark role that MNIST did for DL/ML.

@KarlaParussel @fchollet
Yes, the class of problems you refer to are all related to the concepts of life, intelligence and reasoning. Those problems are of a qualitatively different kind from those that are amenable to the traditional engineering approach, aka #understanding. The traditional approach is based on developing models that can be tested, whereas the new class of problems requires a different approach.

@KarlaParussel @fchollet
To simplify, the new approach should not be based on demonstrations but on "monstrations". You will not test a model against data; you will have to judge the behaviour of an autonomous, creative "artifact" "living" in an indescribable environment.

@ocrampal @fchollet

It sounds counterintuitive, but actually I think you can use tests based on a time series of data (e.g. seismic or market data), where the agent needs to predict the next time step in the data.

It doesn't test for autonomy per se, because the agent's actions won't change the next time step the way they would change its environment.

But an agent controller that has to adapt to an environment it can change should also be able to adapt to a time series of data it cannot change.
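A minimal sketch of such a test, assuming an illustrative drifting sine series and a one-parameter online (LMS-style) predictor, might look like this:

```python
import math

# Online next-step prediction on a non-stationary series: the process
# generating the data changes at t = 500, so a frozen model would
# degrade while a continually updating one can recover.
def series(t):
    # The "real-world function producing the data" changes mid-stream.
    freq = 0.05 if t < 500 else 0.11
    return math.sin(freq * t)

weight = 0.0   # one-parameter model: predict y[t] ~ weight * y[t-1]
lr = 0.05      # learning rate for the online update
prev, errors = series(0), []
for t in range(1, 1000):
    y = series(t)
    error = y - weight * prev
    weight += lr * error * prev   # update after every single step
    errors.append(abs(error))
    prev = y

late_error = sum(errors[-100:]) / 100
print("late error:", round(late_error, 3))  # small if the model re-adapted
```

The benchmark score would be the prediction error *after* the change point, which rewards adaptation rather than a one-off fit.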

@KarlaParussel @fchollet
Yes, it is counterintuitive, because you are implicitly testing a model on a datastream. But you would also require that model to change based on the datastream.

Traditionally, we do not know what a changing model would entail. And by a changing model I'm not referring to changing its parameters, but to changing its "form".

@ocrampal @fchollet
Well it's funny you should say that 😀

I do believe that strong AI is only possible using self-organising systems. These are systems whose form is changed solely by their inputs.

If you don't use a self-organising system that is agnostic to the cause of its inputs, then by necessity you are coupling the internals of the system to its environment. This then stops it from adapting to unknown environments or datastreams, and it remains narrow AI.
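One toy way to make "form changed solely by inputs" concrete is a model that grows new units when an input is novel, loosely in the spirit of resource-allocating networks. The threshold and update rule are assumptions for illustration, not the system under discussion:

```python
# Toy "form-changing" model: rather than only tuning fixed parameters,
# it grows a new prototype unit whenever an input is too far from
# everything it already knows. Its structure is shaped by the data alone.
class GrowingModel:
    def __init__(self, novelty_threshold=1.0):
        self.prototypes = []   # the model's "form": how many units exist
        self.novelty_threshold = novelty_threshold

    def observe(self, x):
        if not self.prototypes:
            self.prototypes.append(x)
            return
        nearest = min(self.prototypes, key=lambda p: abs(p - x))
        if abs(nearest - x) > self.novelty_threshold:
            self.prototypes.append(x)   # change of form: add a unit
        else:
            i = self.prototypes.index(nearest)
            self.prototypes[i] += 0.1 * (x - nearest)  # parameter change

model = GrowingModel()
for x in [0.0, 0.2, 5.0, 5.1, -3.0]:
    model.observe(x)
print(len(model.prototypes))  # → 3: three clusters emerged from the inputs
```

Nothing in the code names the environment that produced the inputs, which is the "agnostic to the cause of its inputs" property.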

@KarlaParussel @fchollet

I believe that a "self-organising" system that "monstrates" autonomy and creativity is possible. And we are actually working on it.

But that requires new conceptual tools and new words for those concepts.

Self-organising, system, environment, inputs etc. are all loaded words that refer to traditional concepts.

@KarlaParussel @fchollet

What I mean by that is that it is hard to talk about the new conceptual tools, and the expressive framework that goes with them, using traditional concepts.

But I share most of your intuition, and I feel you are asking the right questions.

@ocrampal @fchollet I agree. This chat has been useful for me because, after working in isolation for so long, it highlights which words are loaded and assumed to refer to traditional concepts.

Part of the scientific method is learning how to communicate new concepts and that can only be done through chats like this! 😀