Ari Benjamin

318 Followers
129 Following
24 Posts

my corners of computational neuroscience: neuroAI, transcriptomics, learning theory, vision.

Postdoc @ CSHL with Tony Zador

Twitter: https://twitter.com/arisbenjamin
Website: https://ari-benjamin.com

My self-advice for writing grants is to write each sentence in the voice of David Attenborough.

Good writing is a story. A story has an arc. A story has a theme. A story has characters. A story has resolution. The best stories, though, are also narrated by a kind British man with a passion for nature and education.

I'm crowdsourcing career advice. I want to study ⭐​ What humans find easy or hard to learn ⭐​ Tell me: what does this bring to mind for you? Whose research? What approaches?

I'm open to suggestions spanning all fields, including:
- learning science
- critical period & controlled rearing research
- deep learning theory
- dev. psych

⭐​ What defines the line between easy vs. hard tasks?
⭐​ When can brain areas change specialties (think chess experts, blind individuals), and what determines their new specialty?
⭐​ How do learning biases sculpt the adult brain?

Help me build a reading list or find mentors!

Great #review on normative #synaptic #plasticity models from Colin Bredenberg and Cristina Savin:

https://arxiv.org/abs/2308.04988

Desiderata for normative models of synaptic plasticity

Normative models of synaptic plasticity use a combination of mathematics and computational simulations to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work on these models, but experimental confirmation is relatively limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata which, when satisfied, are designed to guarantee that a model has a clear link between plasticity and adaptive behavior, consistency with known biological evidence about neural plasticity, and specific testable predictions. We then discuss how new models have begun to improve on these criteria and suggest avenues for further development. As prototypes, we provide detailed analyses of two specific models -- REINFORCE and the Wake-Sleep algorithm. We provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.

arXiv.org
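The review analyzes REINFORCE as one of its prototype normative models. As a minimal sketch of the idea (mine, not from the paper), here is a single stochastic unit on a two-armed bandit whose weight is updated by the three-factor REINFORCE rule, Δw = η · (reward − baseline) · ∂log p(action)/∂w:

```python
import math
import random

random.seed(0)

# One "neuron" whose stochastic binary output picks an arm of a bandit.
# Arm 1 pays reward 1, arm 0 pays 0. The weight w is the logit of arm 1.
w = 0.0          # synaptic weight
eta = 0.5        # learning rate
baseline = 0.0   # running reward baseline (variance reduction)

for trial in range(500):
    p = 1.0 / (1.0 + math.exp(-w))     # probability of choosing arm 1
    a = random.random() < p            # stochastic choice ("spike")
    r = 1.0 if a else 0.0              # reward from the environment
    # REINFORCE: dw = eta * (r - baseline) * d/dw log p(a)
    grad_logp = (1.0 - p) if a else -p
    w += eta * (r - baseline) * grad_logp
    baseline += 0.05 * (r - baseline)  # slow baseline update

p_final = 1.0 / (1.0 + math.exp(-w))
```

The update uses only locally available signals (the unit's own activity and a global reward broadcast), which is what makes REINFORCE attractive as a candidate plasticity rule despite its high variance.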

"For at least two centuries, scientists have been enthralled by the “zombie” behaviors induced by mind-controlling parasites. Despite this interest, the mechanistic bases of these uncanny processes have remained mostly a mystery."

https://elifesciences.org/articles/85410

Neural mechanisms of parasite-induced summiting behavior in ‘zombie’ Drosophila

In zombie fruit flies, Entomophthora muscae-elicited summiting behavior is mediated by blood-borne factors and the host circadian-neurosecretory network.

eLife

I felt called out by this, as a scientist:

"Our success – my success – is the community's success. Your talent, your skill, I will celebrate it because I also see that as mine – even though you are the one that is performing that song. Because we are so interconnected as a community, I am practicing to see your joy as my joy. So there’s freedom there, there’s a freedom in sharing the happiness. There’s freedom in sharing the success and in the growth also."
– Br. Pháp Hữu

I'd love to feel more of this sentiment in science. What of one's work and ideas is truly and solely one's own?

Interpretable AI really wants to understand what neurons in LLMs are doing. But this effort is very likely to fail – and it's not the right approach to understand what AI is doing and why.

Like, today, there's weirdly a lot of press about how OpenAI just showed that "Language models can explain neurons in language models" (https://openai.com/research/language-models-can-explain-neurons-in-language-models). But look at the metrics – this was a failed effort. GPT-4 *cannot explain* what neurons in GPT-2 are doing.

More importantly, single-unit interpretability in LLMs is not the same as understanding why and what LLMs as a whole are doing. Even if you understood when a handful of units activate, you would never be able to stitch those explanations together into a general understanding of why an LLM says the words that it does.

LLMs may someday be able to explain themselves in plain language. But describing (in plain language) when each neuron fires is not going to get us there.
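For concreteness, the paper's metric boils down (in my rough simplification) to simulating a neuron's activations from the text of an explanation and correlating them with the neuron's real activations, so a perfect explanation scores 1.0. The low scores reported are what I mean by a failed effort:

```python
def explanation_score(true_acts, simulated_acts):
    """Pearson correlation between a neuron's real activations and the
    activations simulated from an explanation (assumes non-constant inputs)."""
    n = len(true_acts)
    mt = sum(true_acts) / n
    ms = sum(simulated_acts) / n
    cov = sum((t - mt) * (s - ms) for t, s in zip(true_acts, simulated_acts))
    sd_t = sum((t - mt) ** 2 for t in true_acts) ** 0.5
    sd_s = sum((s - ms) ** 2 for s in simulated_acts) ** 0.5
    return cov / (sd_t * sd_s)

# An explanation that reproduces the activations exactly scores 1.0.
print(round(explanation_score([0, 1, 2, 3], [0, 1, 2, 3]), 6))  # 1.0
```

Note that even a score of 1.0 here would only tell you *when* a unit fires, not what role that firing plays in the network's overall computation.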

#interpretableAI #LLMs #openai

Language models can explain neurons in language models

We use GPT-4 to automatically write explanations for the behavior of neurons in large language models and to score those explanations. We release a dataset of these (imperfect) explanations and scores for every neuron in GPT-2.

I love this preprint from Tzuhsuan Ma and Ann Hermundstad for its point that, as a theorist, you can't separate "optimal" sensory representations from "optimal" behavior. The optimal action depends on the constraints of the sensory system. (https://www.biorxiv.org/content/10.1101/2022.08.10.503471v1)

For background, there's lots of theory about the optimal way an animal can update its beliefs about the world (a sensory problem) and, separately, the optimal way to act given one's beliefs (an action problem). This separation is fine as long as one has optimal beliefs. But biology is constrained. Sub-optimality means that the action problem is no longer disjoint from the sensory problem – evolution must tailor representations for action.
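To make that separation concrete, here is a toy version (the states, numbers, and utilities are mine, not from the preprint): with the exact Bayesian posterior, the sensory step can be computed without knowing the utilities, and the action step just maximizes expected utility under that posterior. The paper's point is that once the sensory code is constrained, this clean two-step split breaks down.

```python
# Hypothetical two-state example of the belief/action split.
prior = {"rain": 0.3, "sun": 0.7}
likelihood = {"clouds": {"rain": 0.8, "sun": 0.2}}  # p(observation | state)

# Sensory problem: Bayes-optimal posterior after observing "clouds".
obs = "clouds"
unnorm = {s: prior[s] * likelihood[obs][s] for s in prior}
z = sum(unnorm.values())
posterior = {s: v / z for s, v in unnorm.items()}

# Action problem: pick the action with highest expected utility
# under the posterior (utilities are made up for illustration).
utility = {"umbrella":    {"rain":  1.0, "sun": -0.1},
           "no umbrella": {"rain": -1.0, "sun":  0.5}}
best = max(utility,
           key=lambda a: sum(posterior[s] * utility[a][s] for s in posterior))
print(best)  # umbrella
```

Notice that the posterior is computed with no reference to `utility` at all; that independence is exactly what fails when the sensory representation is resource-limited and must be tailored to the actions it serves.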

The analysis is beautiful. One sees first-hand how Bayesian-like behavior does not imply a truly Bayesian program.

I hear you there asking, "What makes us unique as humans? Where did language come from, evolutionarily speaking?" No answers here, but I just learned that chimpanzees have a homolog of Wernicke's area – complete with leftward asymmetry – and also a "Broca's area" that activates during communication.

e.g.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2880147

Wernicke's area homologue in chimpanzees (Pan troglodytes) and its relation to the appearance of modern human language

Human language is distinctive compared with the communication systems of other species. Yet, several questions concerning its emergence and evolution remain unresolved. As a means of evaluating the neuroanatomical changes relevant to language that accompanied ...

PubMed Central (PMC)

MotorNet is an open-source Python toolbox, built on TensorFlow, that makes training neural networks to control realistic biomechanical models fast and accessible to non-experts, letting teams focus on concepts and ideas rather than implementation.

https://oliviercodol.github.io/MotorNet/build/html/index.html

MotorNet 0.1.5 documentation

Why is your body tense right now? But really, the deep why. Why do we store emotions in our bodies?

We must waste loads of ATP each day and night sustaining this muscle tension. There ought to be some reason we evolved to do it.

I wrote a highly speculative blog post positing a reason why: our bodies (read: our musculature) act to store information about our behavioral state and situation. This is why intervening in this state-storage loop with a massage, sauna, or other body-centered practice can change your mood. https://aribenjamin.github.io/embodied-emotion/

Why do we love a sauna?

Towards a computational theory of embodied emotion.

Ari Benjamin