By deploying LLMs as tools of widespread use we are also broadcasting a specific set of values and beliefs. Whose values, in terms of education, income and lifestyle, will they represent? Which geopolitical, ideological and religious positions will they promote?
Paradigms matter; who decides what goals AI systems will pursue? Can we achieve complex goals without the freedom to create intermediate goals? Can statistically averaging systems find revolutionary new hypotheses?
Abstraction and compositionality are examples of general properties; they should not be expected to ‘emerge’ from complexity but should sit at the core of the architecture. How many such principles can we put forward already?
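To make that concrete, here is a minimal sketch (illustrative only; every name in it is hypothetical) of compositionality as an explicit architectural primitive rather than a hoped-for emergent property: skills are first-class objects, and new behaviour comes from composing them.

```python
from typing import Callable

# A "skill" is any transformation; composition is a built-in operation
# of the architecture, not something the system must discover by itself.
Skill = Callable[[str], str]

def compose(*skills: Skill) -> Skill:
    """Chain skills left to right into a new, reusable skill."""
    def composed(x: str) -> str:
        for skill in skills:
            x = skill(x)
        return x
    return composed

# Two primitive skills...
shout: Skill = str.upper
emphasize: Skill = lambda s: s + "!"

# ...and a novel behaviour obtained compositionally, with zero training.
announce = compose(shout, emphasize)
print(announce("compositionality is built in"))  # COMPOSITIONALITY IS BUILT IN!
```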
Measuring intelligence in individual humans is questionable for non-therapeutic purposes. A good amount of intelligence is given to the individual by its socio-cultural environment. General intelligence is necessarily social, generational and technological
Very interesting, thought-provoking episode. What we need most now in AI is asking the right questions: what are understanding and generality? Which paradigms are best suited to them?
If anything will allow building true AI, it is distilling principles of intelligence from biology, and particularly from humans. Those general principles would apply to any intelligence, here or on another planet. That doesn’t mean naïvely copying apparent features of human mental processes
https://www.youtube.com/watch?v=31VRbxAl3t0&ab_channel=MachineLearningStreetTalk
#104 - Prof. CHRIS SUMMERFIELD - Natural General Intelligence [SPECIAL EDITION]
Brains are fundamentally (in architecture and function) concerned with autonomous regulation and equilibrium at all scales. We designed computers with a completely different goal, and it is a significant challenge to make them process information like brains
In a way, ‘brains as computers’ is such a bad analogy that little cognitive advantage comes from it. While we probably mean something conceptually broader about information-processing systems, we point to something architecturally and functionally very limited
For general AI, first, the paradigm of huge “training sets” plus some “objective function” is doomed: we neither need nor have large statistical inputs for executing long and novel causal sequences, and there is no generic function for an arbitrary collection of goals
Second, truly understanding the nature of intelligence requires metrics beyond “solving many tasks”: a very stupid agent can solve many tasks if helped by the environment (e.g. dense rewards), and a perfectly intelligent one can be put in conditions where it fails every task
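As a toy illustration of that point (everything here is made up for the example), a random agent racks up a perfect-looking score when the environment pays out for any action, which is exactly why raw task success is a weak proxy for intelligence:

```python
import random

def hand_holding_env(action: str) -> float:
    """A hypothetical environment that rewards every action equally."""
    return 1.0

# A "very stupid" agent: it chooses actions uniformly at random.
actions = ["left", "right", "wait"]
total_reward = sum(hand_holding_env(random.choice(actions)) for _ in range(100))

print(total_reward)  # 100.0: a perfect score that says nothing about the agent
```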
Betting time! I bet that in 2023 no big lab will create a system capable of solving (100%) the (quite trivial) MiniGrid reasoning environment
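For readers unfamiliar with it, below is a minimal sketch of what a MiniGrid task looks like, assuming the `minigrid` package from the Farama Foundation together with Gymnasium; the specific environment ID and the random policy are illustrative, not the exact benchmark of the bet.

```python
import gymnasium as gym
import minigrid  # registers the "MiniGrid-*" environment IDs with Gymnasium

env = gym.make("MiniGrid-Empty-8x8-v0")
obs, info = env.reset(seed=0)

# Observations are partial and egocentric: a dict with a symbolic 'image'
# grid, the agent's facing 'direction', and a natural-language 'mission'.
print(obs["mission"])      # e.g. "get to the green goal square"
print(obs["image"].shape)  # (7, 7, 3) symbolic view, not raw pixels

# A random policy: the task is trivial for a human, yet agents can fail it.
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)

env.close()
```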
Isn’t it silly to think that building tools that beat humans at ever more complex tasks will lead to human-level intelligence? Isn’t it clear that human intelligence is, in the first place, the capacity to build tools?