Alberto Romero, who writes some of the most knowledgeable and thoughtful commentary on AI, has just posted "GPT-4: The Bitterer Lesson".

https://thealgorithmicbridge.substack.com/p/gpt-4-the-bitterer-lesson

Originally an idea credited to Richard Sutton, the "Bitter Lesson" is that "humans have contributed little to the best AI systems we have built".

And with each iteration of GPT-X, that contribution is becoming less and less. From bitter to bitterer.

Worth a read.

#Sentientsyllabus #ChatGPT #GPT4 #bard #llm


@boris_steipe

I'm not sure if you can have a #state #phase #transition with just #computation. You can definitely "scale up" some existing capability by adding more computational power, but can you get (evolve to) something radically new?

@pj

Absolutely. All of biological evolution is like that. Once you start selecting for something, and you have enough parameters, you will achieve it. LLM training and biological evolution are analogous in that respect.

What's rather surprising is the richness of emergent behaviour that we get from merely predicting the next token.
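For readers wondering what "predicting the next token" means mechanically, here is a minimal sketch (my own toy illustration, not how an LLM actually works): the objective reduced to a word-bigram counter that predicts the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy sketch (illustration only): "next-token prediction" in its simplest
# possible form, a word-bigram count model. A real LLM replaces these counts
# with a neural network over billions of parameters, trained on the same
# objective: guess the next token.
corpus = "the cat sat on the mat the cat ate the fish".split()

follower_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follower_counts[prev][nxt] += 1

def predict_next(token):
    # Return the follower seen most often after `token` in the corpus.
    return follower_counts[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The surprising claim in the thread is that scaling up exactly this kind of objective, nothing more, yields the emergent behaviour discussed below.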

Apparently, training for language creates thinking as a byproduct.

🙂

@boris_steipe

I don't know. #Language is a fairly new "improvement" in biological evolution, and I'm not sure you can reverse engineer (artificial) #intelligence from it.
You could argue that intelligence evolved before language. After all, you have quite a few intelligent animals with no language or with a very simple vocabulary.

@pj

:-) When I pointed out the analogy between #LLM training and biological #evolution, I wasn't referring to biology's ability to evolve language, but to its capacity to develop almost anything at all: given enough tuneable parameters and a system for inheritance with variation under selective pressure, we get molecular rotors, nanofabrication, diffraction gratings for colours, magnetic compasses, eyes, ears, labyrinths, social collaboration … and even language.

The number of tuneable parameters in the human genome is almost two orders of magnitude smaller than the number of parameters in GPT-3 (and each is less precise: only 2 bits, i.e. four nucleotides). And training a language model means following a trajectory in a high-dimensional parameter space, just as evolution is a trajectory in genetic-sequence space. (The technical difference is that the former is a directed walk along a gradient, while the latter is a random walk under selection.)
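The two kinds of trajectory can be sketched in a few lines (my toy illustration, not from the thread): both a gradient step and mutation-plus-selection minimize the same simple loss over a small parameter space, one deterministically, one stochastically.

```python
import random

# Toy sketch (illustration only): minimize the "loss" f(x) = sum(x_i^2)
# over a 5-dimensional parameter space in two ways.

def loss(x):
    return sum(v * v for v in x)

def gradient_step(x, lr=0.1):
    # Directed walk: step each parameter against the gradient (df/dx_i = 2*x_i).
    return [v - lr * 2 * v for v in x]

def mutate_and_select(x, sigma=0.1, offspring=10):
    # Random walk under selection: propose random mutants, keep the fittest.
    # (The parent stays in the pool, so the loss never increases.)
    pool = [x] + [[v + random.gauss(0, sigma) for v in x] for _ in range(offspring)]
    return min(pool, key=loss)

random.seed(0)
x0 = [random.uniform(-1, 1) for _ in range(5)]
x_grad, x_evo = list(x0), list(x0)
for _ in range(100):
    x_grad = gradient_step(x_grad)
    x_evo = mutate_and_select(x_evo)

print(f"start loss:           {loss(x0):.4f}")
print(f"gradient descent:     {loss(x_grad):.8f}")
print(f"mutation + selection: {loss(x_evo):.8f}")
```

Both walks end up near the optimum; the directed walk gets there faster, the random walk gets there anyway, which is the point of the analogy.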

And that is what happened: when trained on token prediction, LLMs started showing emergent aspects of "intelligence".

Quanta just ran an article on that five days ago: https://www.quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316/

… and Google's Jason Wei, who has a number of articles on arXiv on emergence, lists emergent abilities on his blog: https://www.jasonwei.net/blog/emergence

Molecular biology is one of my areas of expertise, so I am not surprised this is happening: how could it not? But then again, to actually see such emergence is profound.

#Sentientsyllabus #ChatGPT #GPT4 #Bard #ai


@boris_steipe

I glanced over the sources you listed but won't pretend I understand everything that's in there 😉
I guess what I'm trying to say is that, for example, a bird and an airplane both show the emergent property of flying while being two totally different "machines".
I believe where our views differ is that, for you, their flying is identical (or the airplane's flying might even be superior to the bird's), while for me they are quite different processes that cannot be compared so easily.
Also, #evolution and #learning are two completely different processes. Evolution depends on large pools of (imperfect) #copies of the same "thing", while learning is more like the #growth of a single individual with the ability to "learn" (modify its internal #state).