Aran Nayebi

233 Followers
678 Following
22 Posts

Assistant Professor of Machine Learning, Carnegie Mellon University (CMU)

Building a Natural Science of Intelligence 🧠🤖

Prev: ICoN Postdoctoral Fellow @MIT, PhD @Stanford NeuroAILab

Personal Website: https://anayebi.github.io/

Google Scholar: https://scholar.google.com/citations?hl=en&user=zGDaMYAAAAAJ

Website: https://cs.cmu.edu/~anayebi
Publications: https://scholar.google.com/citations?hl=en&user=zGDaMYAAAAAJ&view_op=list_works&gmla=AJsN-F6EkQv3ly2qXwNUq567cBmyYyzA4jb72MsKG5qmrRu_po7d3UX44RXAsg0JPzHWPPpFXnuPSQv1yH0AEasSfkG9HkGF94E6fUDa-oQUligds4LeQOH4nWfg8mydvCMi-2QJftTZ
Twitter: https://twitter.com/aran_nayebi

I'm thrilled to share that I'll be joining Carnegie Mellon University's (CMU) Machine Learning Department as an Assistant Professor this fall!

My lab will work at the intersection of neuroscience & AI to reverse-engineer animal intelligence and build the next generation of autonomous agents.
Learn more here: https://anayebi.github.io/files/NeuroAgents_LabPlanIntro_2024.pdf

Feel free to email me if you’re interested or want to collaborate! I’m able to advise PhD students in any department in SCS, as well as in the Neural Computation program.

How do humans and animals form models of their world?

We find that Foundation Models for Embodied AI may provide a framework for understanding our own “mental simulations”.

Preprint: https://arxiv.org/abs/2305.11772
Summary: https://twitter.com/aran_nayebi/status/1660654623764381700

with awesome collaborators: Rishi Rajalingham, @mjaz_jazlab, and Guangyu Robert Yang

Neural Foundations of Mental Simulation: Future Prediction of Latent Representations on Dynamic Scenes

Humans and animals have a rich and flexible understanding of the physical world, which enables them to infer the underlying dynamical trajectories of objects and events, to anticipate plausible future states, and to use these predictions to plan actions and anticipate their consequences. However, the neural mechanisms underlying these computations are unclear. We combine a goal-driven modeling approach with dense neurophysiological data and high-throughput human behavioral readouts to directly address this question. Specifically, we construct and evaluate several classes of sensory-cognitive networks trained to predict the future state of rich, ethologically relevant environments, ranging from self-supervised end-to-end models with pixel-wise or object-centric objectives to models that future-predict in the latent space of purely static image-based or dynamic video-based pretrained foundation models. We find strong differentiation across these model classes in their ability to predict neural and behavioral data both within and across diverse environments. In particular, neural responses are currently best predicted by models trained to predict the future state of their environment in the latent space of pretrained foundation models optimized for dynamic scenes in a self-supervised manner. Notably, models that future-predict in the latent space of video foundation models optimized to support a diverse range of sensorimotor tasks reasonably match both human behavioral error patterns and neural dynamics across all environmental scenarios we were able to test. Overall, these findings suggest that the neural mechanisms and behaviors of primate mental simulation are thus far most consistent with being optimized to future-predict on dynamic, reusable visual representations that are useful for Embodied AI more generally.
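For intuition, here is a minimal sketch of the core recipe the abstract describes: freeze a pretrained foundation-model encoder, then train a small dynamics module to predict future states in its latent space. Everything below is an illustrative assumption (the encoder interface, the LSTM dynamics, the MSE objective, and all names are placeholders), not the paper's actual architectures or training code.

import torch
import torch.nn as nn

class LatentFuturePredictor(nn.Module):
    # Small dynamics module: maps a sequence of past latents to the next latent.
    def __init__(self, latent_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.dynamics = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, latent_dim)

    def forward(self, past_latents: torch.Tensor) -> torch.Tensor:
        # past_latents: (batch, time, latent_dim)
        hidden, _ = self.dynamics(past_latents)
        return self.readout(hidden[:, -1])  # predicted next-step latent

def future_prediction_loss(encoder: nn.Module,
                           predictor: LatentFuturePredictor,
                           frames: torch.Tensor) -> torch.Tensor:
    # frames: (batch, time, C, H, W); `encoder` is assumed to be a frozen,
    # pretrained model mapping images (N, C, H, W) -> latents (N, latent_dim).
    b, t = frames.shape[:2]
    with torch.no_grad():  # the pretrained representation stays fixed
        latents = encoder(frames.flatten(0, 1)).reshape(b, t, -1)
    pred = predictor(latents[:, :-1])  # predict from the past context
    target = latents[:, -1]            # ground-truth future latent
    return nn.functional.mse_loss(pred, target)

Freezing the encoder mirrors the comparison the abstract sets up: the question is which pretrained representation (static image-based vs. dynamic video-based) makes the future easiest to predict, so only the dynamics module is trained.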
