Aran Nayebi

233 Followers
678 Following
22 Posts

Assistant Professor of Machine Learning, Carnegie Mellon University (CMU)

Building a Natural Science of Intelligence 🧠🤖

Prev: ICoN Postdoctoral Fellow @MIT, PhD @Stanford NeuroAILab

Personal Website: https://anayebi.github.io/

Google Scholar: https://scholar.google.com/citations?hl=en&user=zGDaMYAAAAAJ

Website: https://cs.cmu.edu/~anayebi
Publications: https://scholar.google.com/citations?hl=en&user=zGDaMYAAAAAJ&view_op=list_works
Twitter: https://twitter.com/aran_nayebi

I'm thrilled to share that I'll be joining Carnegie Mellon's (CMU) Machine Learning Department as an Assistant Professor this Fall!

My lab will work at the intersection of neuroscience & AI to reverse-engineer animal intelligence and build the next generation of autonomous agents.
Learn more here: https://anayebi.github.io/files/NeuroAgents_LabPlanIntro_2024.pdf

Feel free to email me if you’re interested or want to collaborate! I’m able to advise PhD students in any department in SCS or in the Neural Computation program.

Really enjoyed speaking with Alison Snyder about the importance of studying embodied cognition in both neuroscience & AI!

https://www.axios.com/2024/03/15/artificial-intelligence-neuroscience-brain-body

What real bodies can show artificial minds

Some AI researchers think "embodied cognition" is a necessary ingredient to achieve advanced AI.

Axios

I am now also on: @anayebi.bsky.social

On the impossibility of using analogue machines to calculate non-computable functions

A number of examples have been given of physical systems (both classical and quantum mechanical) which, when provided with a (continuously variable) computable input, will give a non-computable output. It has been suggested that these systems might allow one to design analogue machines which would calculate the values of some number-theoretic non-computable function. Analysis of the examples shows that the suggestion is wrong. In Section 4 I claim that given a reasonable definition of analogue machine it will always be wrong. The claim is to be read not so much as a dogmatic assertion, but rather as a challenge. In Sections 1 and 2 I discuss analogue machines, and lay down some conditions which I believe they must satisfy. In Section 3 I discuss the particular forms which a paradigm undecidable problem (or non-computable function) may take. In Sections 5 and 6 I justify my claim for two particular examples lying within the range of classical physics, and in Section 7 I justify it for two (closely connected) examples from quantum mechanics, and discuss, very briefly, other possible quantum mechanical situations. Section 8 contains various remarks and comments. In Section 9 I consider the suggestion made by Penrose that a (future) theory of quantum gravity may predict non-locally-determined, and perhaps non-computable, patterns of growth for microscopic structures. My conclusion is that such a theory will have to have non-computability built into it.

arXiv.org

This comes about a year and a half late -- but if you are interested in learning a bit more about how AI can help us understand questions in neuroscience, the official permanent URL of my PhD dissertation, "A Goal-Driven Approach to Systems Neuroscience," can be found here: https://purl.stanford.edu/qk457cr2641, and it is now on arXiv as well: https://arxiv.org/abs/2311.02704

If interested, the video of my dissertation defense can be found here: https://www.youtube.com/watch?v=WED5GPKEv4Q

A goal-driven approach to systems neuroscience

Humans and animals exhibit a range of interesting behaviors in dynamic environments, and it is unclear how our brains actively reformat this dense sensory information to enable these behaviors. Exp...

Ten years ago, I received a handwritten manuscript by Alan Turing’s only PhD student, Robin Gandy. He wrote it a couple of years before passing away in 1995.

AFAIK, it’s never before been in print, so I typeset my copy & put it online here: https://philpapers.org/archive/GANOTI.pdf

A bit of my backstory regarding this manuscript, for those interested in philosophy, physics, and computation: https://twitter.com/aran_nayebi/status/1722302534327701543

Now also featured on MIT's main TikTok page! https://www.tiktok.com/@mit/video/7297257256834403627

Can robotics help us understand the brain? #robotics #ai #brain


Why a (reductionist) statement from a popular undergrad neuroscience textbook is misleading

"In neuroscience, there is no need to separate mind from brain; once we fully understand the individual and concerted actions of brain cells, we will understand our mental abilities."
Bear, Mark; Connors, Barry; Paradiso, Michael A., Neuroscience: Exploring the Brain, Enhanced Edition (p. 24).

The problem with this statement is that "more is different".
https://cse-robotics.engr.tamu.edu/dshell/cs689/papers/anderson72more_is_different.pdf

In that classic paper, Anderson explains, in broad strokes, why all of science doesn't just boil down to physics even though elementary physical laws govern all happenings in the universe. The answer is that "the ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe." New properties emerge from the collective behavior of the parts, and because of this we need chemists, biologists, psychologists, and sociologists (not just physicists).

In principle, one could infer the collective behavior of a system by exhaustively studying its parts and all their interactions; this is indeed where emergent properties come from (they are not mystical). In practice, this is not how science works; rather, when studying a complex system, one begins with the phenomenon that needs to be explained and then investigates its mechanism. For instance, if you want to understand the collective behavior of birds flying, you begin with an understanding that they flock (and some description of it); you don't begin by studying the rules that govern the flying decisions of one bird, then two, then three, and then build a model of collective behavior that suggests flocking (and only then, for the first time, take a peek).

The problem with the Bear et al. statement is that it implies the order of operations is to exhaustively investigate the brain alone before taking a peek at the mind/behavior. This is precisely the fallacy Anderson wrote about in 1972. (Note: I suspect the authors don't really believe this and it's just badly worded; what they likely intended to argue against was a non-material soul that causally influences behavior, but I don't know.)

Nice to see our latest work on mental simulation featured on the front page of MIT News today!
https://news.mit.edu/2023/brain-self-supervised-computational-models-1030

Full Paper (#NeurIPS2023 Spotlight): https://arxiv.org/abs/2305.11772

If interested in more detail, here is a talk of mine that explains some of the motivations of using Self-Supervised Learning for Embodied AI to study the brain: https://www.youtube.com/watch?v=9h_3bHVDMhA

The brain may learn about the world the same way some computational models do

New MIT studies support the idea that the brain uses a process similar to a machine-learning approach known as “self-supervised learning.” This type of machine learning allows computational models to learn about visual scenes based solely on the similarities and differences between them, with no labels or other information.
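The contrastive flavor of self-supervised learning described above can be sketched in a few lines. Everything here is a toy stand-in, not the paper's actual setup: the "encoder" is a random linear map, the "scenes" are Gaussian vectors, and the two "views" are the same scene plus small augmentation noise. The objective (an InfoNCE-style loss) pulls embeddings of two views of the same scene together and pushes different scenes apart, using no labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    """Stand-in encoder: a linear map followed by L2 normalization."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss: for row i of z1, row i of z2 is the positive pair;
    every other row of z2 serves as a negative."""
    logits = (z1 @ z2.T) / temperature            # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives sit on the diagonal

# Batch of 8 "scenes"; each view is the scene plus small augmentation noise.
scenes = rng.normal(size=(8, 32))
view1 = scenes + 0.05 * rng.normal(size=scenes.shape)
view2 = scenes + 0.05 * rng.normal(size=scenes.shape)

W = rng.normal(size=(32, 16)) / np.sqrt(32)
loss = info_nce(embed(view1, W), embed(view2, W))

# Deliberately mismatched pairs (views shuffled) should incur a higher loss.
shuffled = info_nce(embed(view1, W), embed(view2[::-1], W))
print(f"matched-pair loss: {loss:.3f}, mismatched-pair loss: {shuffled:.3f}")
```

The loss depends only on similarities and differences between views, which is the sense in which such models learn "with no labels or other information."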


"Can robotics help us understand the brain?"

A fun short video on our recent #NeurIPS2023 Spotlight paper!

https://www.youtube.com/shorts/tQsCNEpGYCo

Full paper: https://arxiv.org/abs/2305.11772
