AI doesn't learn like we do. GoPro'd toddlers see mostly knees, yet they can ID a dog after a few glimpses, while LLMs need billions of examples. Neuroscience suggests AI gathers its intelligence very differently than we do https://www.linkedin.com/posts/jonippolito_neuroscience-ailiteracy-generativeai-activity-7312811017170808832-dwUW
#Neuroscience #AIliteracy #GenerativeAI #GenAI #LLM #ChildDevelopment #Psychology #DevelopmentalPsychology

This is not an AI-generated image, but a photo from research suggesting that the way we're training AI is nothing like the way babies learn about the world. Insights from developmental psychology undercut the parallel between AI models and human brains.

Mounting a video camera on a baby's head might seem like the premise for a goofy TikTok video, but for Indiana University psychologist Linda Smith, toddler-sized GoPros offered a way to investigate the role that bodies play in learning.

It turns out babies spend far more time viewing knees than they do the family dog, yet they learn to recognize dogs from remarkably few glimpses. Attempts to train AI on this headcam footage, by contrast, have produced very poor results. Machine vision typically needs far more images of dogs to recognize one that isn't in the training set. The same goes for the billions of documents required to simulate language effectively. (Hence the "Large" in Large Language Models.)

When it comes to other tasks, human experience again diverges from the LLM paradigm. fMRI brain scans by MIT neuroscientist Evelina Fedorenko show specific circuits lighting up when subjects process language, yet those same circuits are often inactive during other intellectual tasks, like adding numbers. To some observers, the success of LLMs is evidence that language is the engine of thought, yet neuroscience suggests some intelligences hardly require any language at all.

Quality of data matters as much as quantity, but not in the way you might think. The curated photo feeds of Flickr or Reddit may be out of a toddler's reach, but the blurry table legs in their purview aren't biased by your culture's notions of what a wedding photo or gym selfie should look like.

The jury may be out on whether transformer models possess intelligence, but if so, they come upon it very differently than humans do. (I'm glad to see complexity theorists weigh in on this debate; for more on findings from researchers like Smith and Fedorenko, check out the Santa Fe Institute's Complexity podcast at https://lnkd.in/ewzYQfs4.)

Photo of Linda Smith's baby headset by Eric Hanus
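If you want to see the data-hunger gap concretely, here is a minimal sketch. To be clear, this is not Smith's headcam experiment or any LLM benchmark: it just trains an ordinary scikit-learn classifier on a handful of examples per class versus many, with the built-in digits dataset standing in for photos, and all numbers chosen purely for illustration.

```python
# Illustrative sketch of sample efficiency: how a standard classifier's
# held-out accuracy depends on the number of training examples per class.
# Not a reproduction of any study described in the post.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

for n_per_class in (2, 10, 80):
    # Keep only n examples of each class: "a few glimpses" vs. "big data".
    idx = np.concatenate([np.flatnonzero(y_train == c)[:n_per_class]
                          for c in np.unique(y_train)])
    clf = LogisticRegression(max_iter=2000).fit(X_train[idx], y_train[idx])
    print(f"{n_per_class:>2} examples/class -> "
          f"held-out accuracy {clf.score(X_test, y_test):.2f}")
```

On a typical run, accuracy climbs steadily as the training set grows, which is the machine-learning norm the post contrasts with a toddler's few-glimpse learning.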