I was interviewed by The Economist's Babbage podcast last month for their series "The science that built the AI revolution". My hour-long conversation was edited down to about six minutes!

I am glad the edit kept my perspective: that this big-data, big-compute, deep-net approach is orthogonal to human/biological vision, and that without incorporating biological principles (in this case, of vision), autonomous visual navigation systems (e.g., self-driving cars) are unlikely and/or will remain limited.

Unfortunately, the podcast requires a subscription to The Economist (I, too, had to access it through my university account!). But if you do have access, let me know what you think!

https://open.spotify.com/episode/4adN2gVRkQctA55Q0xswiO

#Neuroscience #History #AI #Deepnets #BiologicalIntelligence #BiologicalVision #HumanVision #MachineVision #TheEconomist #Babbage #MachineLearning

Babbage: The science that built the AI revolution—part three

Listen to this episode from Babbage from The Economist on Spotify. What made AI take off? A decade ago, many computer scientists were focused on building algorithms that would allow machines to see and recognise objects. In doing so they hit upon two innovations, big datasets and specialised computer chips, that quickly transformed the potential of artificial intelligence. How did the growth of the world wide web and the design of 3D arcade games create a turning point for AI?

This is the third episode in a four-part series on the evolution of modern generative AI. What were the scientific and technological developments that took the very first, clunky artificial neurons and ended up with the astonishingly powerful large language models that power apps such as ChatGPT?

Host: Alok Jha, The Economist's science and technology editor. Contributors: Fei-Fei Li of Stanford University; Robert Ajemian and Karthik Srinivasan of MIT; Kelly Clancy, author of "Playing with Reality"; Pietro Perona of the California Institute of Technology; Tom Standage, The Economist's deputy editor.

@skarthik What is something you wish had not been cut out? A question, an answer, whatever.

@anandphilipc

Ooh... great question! I don't remember the exact words I said, but maybe, something about the fact that:

1) I don't believe incorporating neurons or convolutions makes these networks tractable tools for solving/understanding biological vision. I personally think using these networks as models of the world (the brain) is a wild-goose chase.
2) Gradient descent/optimization approaches are quite removed from how humans and animals learn.
3) I spoke at length about gestalt psychology. It is now vulgarized, especially by object-recognition researchers (who can't seem to read anything outside of deep-nets), as "mid-level vision". We are nowhere near addressing those ideas with these models, and my suspicion is that we won't get there with merely image-computable deep-nets.

So in summary: deep-nets are a remarkable engineering feat that works splendidly well in narrow domains, but they are a red herring for doing science.

@skarthik superb. I don't know much about the vision bits, but 1 and 2 make a lot of sense to me. I think this was also explored in the paper "Are connectionist models neurally plausible?", but from the angle of whether weights and biases are neurally possible.
@skarthik Is that because biological neurons don't behave the way ANNs do, or for some other reason?

@anandphilipc

Neurons are one difference. There are several other major differences too, at the level of learning, network dynamics, function, and behavior.

@skarthik Got any papers to recommend that provide an overview? The how-we-think-neurons-compute kind.

@anandphilipc

This is a good overview of some of the issues.

https://osf.io/preprints/psyarxiv/5zf4s


@skarthik thank you!

@anandphilipc @skarthik

Fav way of saying this:

» However, I will argue at various points in this book that failure to attend to the differences, to disanalogies between living things and artificial devices, has been a mistake.
...
The brain of a genetically modified lab rat is no more like a computer than the brain of a wild rat, regardless of the part played by humans in the animal’s creation. «
Chapter 1 (Introduction), footnote 6

#TheBrainAbstracted
#MazviitaChirimuuta
https://mitpress.mit.edu/9780262378635/the-brain-abstracted/

The Brain Abstracted

An exciting, new framework for interpreting the philosophical significance of neuroscience. All science needs to simplify, but when the object of research is ...


@teixi One of the best things I've read about connectionist models and their relationship to the brain is this paper, and I don't understand why it isn't more widely cited.

Are connectionist models neurally plausible? A critical appraisal
Papadatou-Pastou, M.

http://www.encephalos.gr/48-1-01e.htm

Encephalos Journal

@anandphilipc

Indeed, single author, cool analysis!

Navigating between model abuse and its use as an exploration tool, I'm reminded of:

» The neuroconnectionist research programme «
https://arxiv.org/abs/2209.03718

The neuroconnectionist research programme

Artificial Neural Networks (ANNs) inspired by biology are beginning to be widely used to model behavioral and neural data, an approach we call neuroconnectionism. ANNs have been lauded as the current best models of information processing in the brain, but also criticized for failing to account for basic cognitive functions. We propose that arguing about the successes and failures of a restricted set of current ANNs is the wrong approach to assess the promise of neuroconnectionism. Instead, we take inspiration from the philosophy of science, and in particular from Lakatos, who showed that the core of scientific research programmes is often not directly falsifiable, but should be assessed by its capacity to generate novel insights. Following this view, we present neuroconnectionism as a cohesive large-scale research programme centered around ANNs as a computational language for expressing falsifiable theories about brain computation. We describe the core of the programme, the underlying computational framework and its tools for testing specific neuroscientific hypotheses. Taking a longitudinal view, we review past and present neuroconnectionist projects and their responses to challenges, and argue that the research programme is highly progressive, generating new and otherwise unreachable insights into the workings of the brain.
