@pj
:-) When I pointed out the analogy between #LLM training and biological #evolution, I wasn't referring to biology's ability to evolve language, but its capacity to develop almost anything at all: given enough tuneable parameters and a system for inheritance with variation under selective pressure, we get molecular rotors, nanofabrication, diffraction gratings for colours, magnetic compasses, eyes, ears, labyrinths, social collaboration … and even language.
The human genome has almost two orders of magnitude fewer tuneable parameters than GPT-3 (and each is less precise: only 2 bits, one of four nucleotides). And training a language model means following a trajectory through a high-dimensional parameter space, just as evolution is a trajectory through genetic-sequence space. (The technical difference is that the former is a directed walk along a gradient, while the latter is a random walk under selection.)
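To make that contrast concrete, here is a toy sketch of my own (not anything from the thread, and the quadratic "fitness landscape" is just a stand-in): both procedures move a parameter vector downhill on the same surface, one by following the gradient, the other by keeping random mutations only when they improve fitness.

```python
# Toy contrast: directed walk along a gradient vs. random walk under selection.
import numpy as np

rng = np.random.default_rng(0)
loss = lambda p: np.sum(p ** 2)   # stand-in "fitness landscape" (lower is better)
grad = lambda p: 2 * p            # its analytic gradient

# Directed walk: step along the negative gradient (how an LLM is trained).
p_gd = rng.normal(size=50)
for _ in range(200):
    p_gd -= 0.05 * grad(p_gd)

# Random walk under selection: mutate, keep the variant only if it is fitter (evolution).
p_evo = rng.normal(size=50)
for _ in range(200):
    variant = p_evo + rng.normal(scale=0.05, size=50)
    if loss(variant) < loss(p_evo):
        p_evo = variant

print(f"gradient descent loss: {loss(p_gd):.4f}  |  selection loss: {loss(p_evo):.4f}")
```

Both end up far downhill; the gradient walk just gets there much more directly than mutation-plus-selection does.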
And what happened is this: when trained on token prediction, LLMs started showing emergent aspects of "intelligence".
Quanta just ran an article on that five days ago: https://www.quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316/
… and Google's Jason Wei, who has a number of articles on arXiv on emergence, lists emergent abilities on his blog: https://www.jasonwei.net/blog/emergence
Molecular biology is one of my areas of expertise, so I am not surprised this is happening: how could it not? But then again, to actually see such emergence is profound.
#Sentientsyllabus #ChatGPT #GPT4 #Bard #ai