In preparation for our #CCN2023 @CogCompNeuro GAC next week, I’m going to do some polls here this week to take the temperature of the room. 🌡️

Very curious to see the range of answers so please pass it on 🔁🙏 and feel free to elaborate - we'll try to take any discussion into account at the workshop

📊🧵 #neuroscience #neurobuzz

https://gac.ccneuro.org/gacs-by-year/2023-gacs/2023-1


Reconciling the dichotomy between Sherringtonian and Hopfieldian views on neural computations

Organizers & speakers at CCN 2023: Dongyan Lin (McGill University), Arna Ghosh (McGill University), Jonathan Cornford (McGill University), James Whittington (Stanford University), Tatiana Engel (Princeton)

First: cognition is best explained by a ___________ view?

(see: https://www.nature.com/articles/s41583-021-00448-6)

Sherringtonian: 2.9%
Hopfieldian: 54.3%
Neither: 14.3%
Don't get the difference: 28.6%
Two views on the cognitive brain - Nature Reviews Neuroscience

Neuroscience can explain cognition by considering single neurons and their connections (a ‘Sherringtonian’ view) or by considering neural spaces constructed by populations of neurons (a ‘Hopfieldian’ view). In this Perspective, Barack and Krakauer argue that the Hopfieldian view has the conceptual resources to explain cognition more fully than the Sherringtonian view.

The best explanations of cognitive phenomena will involve circuits made up of particular neuron to neuron connections realized by specific neurons with fixed biophysical identities and utilizing particular neurotransmitters to pass signals between them.
Agree: 15.4%
Disagree: 84.6%
The best explanations of cognitive phenomena will involve circuits made up of neuron to neuron connections realized by neurons with biophysical identities and utilizing neurotransmitters to pass signals between them.
Agree: 29.6%
Disagree: 70.4%
Cognitive phenomena are well-explained by computations performed by networks of nodes with weighted connections between them.
Agree: 26.5%
Disagree: 73.5%
The best explanations of cognitive phenomena will involve neural spaces that describe the massed activity of e.g. neural ensembles or brain regions, with a low-dimensional representational manifold embedded within them.
Agree: 43.8%
Disagree: 56.3%
Cognitive phenomena are well-explained by movement within representational spaces or transformations from one space to another.
Agree: 62.5%
Disagree: 37.5%
Explanations in terms of computations performed by networks of nodes with weighted connections and explanations in terms of representational spaces are
Complementary: 86.2%
Competing: 13.8%
An explanation for a cognitive phenomenon that appeals to the statistics of neural connections (e.g. low-dimensional connectivity structure) or their intrinsic properties (e.g. mixture of E and I cells) is
Sherringtonian: 27.8%
Hopfieldian: 33.3%
Either (depends on?…): 27.8%
Neither: 11.1%
Explanations of cognitive phenomena in terms of neural manifolds can be causally tested without understanding their underlying mechanisms
Agree: 56.5%
Disagree: 43.5%
Neural manifolds are produced by circuit mechanisms
Agree: 78.9%
Disagree: 21.1%
Establishing connections between neural connectivity and low-dimensional representational manifolds is possible in _________ of the neural circuits that support cognition.
all: 25%
an exemplary subset: 33.3%
a too-simple subset: 41.7%
none: 0%
Unifying manifold and circuit approaches is important to causally test theories about the neural computations that underlie behavior.
Agree: 54.2%
Disagree: 45.8%

You may have noticed it's been "circuits day" around here - inspired by @engeltatiana,
@chrismlangdon, and @mgenk's review (https://pubmed.ncbi.nlm.nih.gov/37055616/). So how about a punchy one from their discussion? 📊🧵

Manifold and circuit approaches to cognition are inseparable

Agree: 25%
Disagree: 75%
A unifying perspective on neural manifolds and circuits for cognition - PubMed

Two different perspectives have informed efforts to explain the link between the brain and behaviour. One approach seeks to identify neural circuit elements that carry out specific functions, emphasizing connectivity between neurons as a substrate for neural computations. Another approach centres on …

An explanation for a cognitive phenomenon that appeals to learning an unspecified neural circuit is
Sherringtonian: 20%
Hopfieldian: 30%
Either (depends on?…): 20%
Neither: 30%
The best explanations of cognitive phenomena will involve accounts of how they are learned, developed, or evolved, without necessarily specifying their implementation.
Agree: 52.9%
Disagree: 47.1%
If learning to perform a cognitive task reliably produces a circuit in which neurons respond to single cognitive or task variables, this would support adopting a ________ view.
Sherringtonian: 33.3%
Hopfieldian: 11.1%
Either (depends on?…): 33.3%
Neither: 22.2%
If learning to perform a cognitive task reliably produces a similar neural manifold, but possibly with different circuit implementations across networks, this would support adopting a ________ view.
Sherringtonian: 16.7%
Hopfieldian: 16.7%
Either (depends on?…): 33.3%
Neither: 33.3%
If learning to perform a cognitive task produces circuits that perform similar cognitive operations using different neural manifolds, this would support adopting a ________ view.
Sherringtonian: 0%
Hopfieldian: 20%
Either (depends on?…): 60%
Neither: 20%
If learning to perform a cognitive task reliably produces a similar neural manifold with similar circuit implementations across networks, this would support adopting a ________ view.
Sherringtonian: 20%
Hopfieldian: 20%
Either (depends on?…): 40%
Neither: 20%

You maybe saw overlap today with @djcrw's work w/ @behrenstimb, so let's go with something directly re: https://arxiv.org/abs/2210.01768

If cells that respond to single cognitive variables naturally emerge in neural circuits, and are useful to support cognition, this would support adopting a ______ view.

Sherringtonian: 50%
Hopfieldian: 0%
Either (depends on?…): 50%
Neither: 0%
Disentanglement with Biological Constraints: A Theory of Functional Cell Types

Neurons in the brain are often finely tuned for specific task variables. Moreover, such disentangled representations are highly sought after in machine learning. Here we mathematically prove that simple biological constraints on neurons, namely nonnegativity and energy efficiency in both activity and weights, promote such sought after disentangled representations by enforcing neurons to become selective for single factors of task variation. We demonstrate these constraints lead to disentanglement in a variety of tasks and architectures, including variational autoencoders. We also use this theory to explain why the brain partitions its cells into distinct cell types such as grid and object-vector cells, and also explain when the brain instead entangles representations in response to entangled task factors. Overall, this work provides a mathematical understanding of why single neurons in the brain often represent single human-interpretable factors, and steps towards an understanding of how task structure shapes the structure of brain representations.

Due to the details of their implementation, some cognitive phenomena will be best-explained from a Hopfieldian view while others will be best-explained from a Sherringtonian one.
Agree: 57.1%
Disagree (only one): 14.3%
Disagree (neither): 28.6%
@dlevenstein What do you mean 'respond to'? Just a significant beta coefficient in some regression?

@dbarack you could make a tuning curve and use the tuning curve to reliably predict the neuron's activity from that variable

If r is the rate of your neuron and x is your variable, something like r(t) ≈ E[r|x(t)]
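As a minimal sketch of that idea (numpy, with a made-up tuned neuron — the tuning shape and helper name are illustrative, not from the thread), the binned conditional mean is one simple way to estimate E[r|x] and predict the rate:

```python
import numpy as np

def tuning_curve(r, x, n_bins=10):
    """Estimate E[r | x] by binning x and averaging the rate within each bin."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    curve = np.array([r[idx == b].mean() for b in range(n_bins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, curve, idx

# hypothetical neuron tuned to a circular variable x, plus noise
rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, 5000)
r = np.exp(np.cos(x)) + 0.1 * rng.standard_normal(5000)

centers, curve, idx = tuning_curve(r, x)
r_pred = curve[idx]  # r(t) ≈ E[r | x(t)]
```

The prediction r_pred tracks r closely here precisely because this toy neuron really does "respond to" x; for an untuned neuron the curve would be flat and uninformative.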

@dlevenstein cool... Whether a neuron responds to some variable in a circuit seems to be utterly neutral between the two views. The question is about explanatory priority!
@dlevenstein
not sure what the alternative would be, presuming circuit mechanisms include node-intrinsic properties and dynamics
@jonny @dlevenstein agree it's difficult to imagine that's the case. i think if the nodes (neurons) give rise to a manifold due to their cell-intrinsic dynamics and *not* due to their connections, we wouldn't call it a circuit mechanism. circuit for me implies that synapses and their weights matter.

@beneuroscience
@dlevenstein
to me it seems like both are necessarily involved, no? the manifold is a subspace of the possible positions and trajectories in neural state space, and so if the circuit properties (connectivity patterns, types, etc.) didn't matter, then the manifold is just equivalent to the entire neural state space - ie. every possible configuration of each individual neuron's activity, independent from all the others.

so like it seems like there is an alternative but it's trivial

@jonny @dlevenstein i don't agree "if the circuit properties didn't matter the #manifold is the entire state space."

population activity can be restricted to a manifold due to other reasons besides local connectivity (eg. patterned inputs, adaptation, etc.). if you block synaptic connections within the local circuit and the manifold is unchanged, we should conclude the manifold is not due to (local) circuit mechanisms.

@beneuroscience @jonny an illustrative example: the activity of a purely feedforward network could still lie on a low dimensional manifold, due to the correlational structure of its inputs. You could even imagine movement on that manifold emerging from cell-autonomous adaptive processes (say… Ih currents or V-activated K currents), with no connectivity-based mechanism.
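That illustrative example is easy to check numerically (a sketch, with made-up sizes — nothing here is from a specific paper): a purely feedforward, non-recurrent population driven by a low-dimensional input occupies a low-dimensional subspace of its state space:

```python
import numpy as np

rng = np.random.default_rng(1)
T, K, N = 2000, 3, 100           # timepoints, latent input dimensions, neurons

latents = rng.standard_normal((T, K))        # K-dimensional input signal
W_in = rng.standard_normal((K, N))           # feedforward input weights only
# no recurrent connectivity anywhere: activity = input drive + private noise
rates = latents @ W_in + 0.1 * rng.standard_normal((T, N))

# fraction of population variance captured by the top K principal components
evals = np.linalg.eigvalsh(np.cov((rates - rates.mean(0)).T))[::-1]
var_top_k = evals[:K].sum() / evals.sum()    # close to 1: a K-dim manifold
```

The low dimensionality here comes entirely from the correlational structure of the inputs, with no connectivity-based mechanism among the recorded "neurons" - which is exactly the point being made.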

@dlevenstein @beneuroscience @jonny There was a long convo about this on here recently. A manifold can be any topological space with a (locally) Euclidean metric, so any neural anything that is differentiable enough to define distances between points will allow someone to create a manifold. Another counterexample, I think, could be taking a random sample of not necessarily connected neurons and measuring the mitochondria trafficking rates along their axons, using the norm of the differences to make a metric https://proofwiki.org/wiki/Definition:Metric_Induced_by_Norm

[edit: this is such a general way to make a manifold that you could probably make one out of any set of measurements, even if the things measured have nothing to do with each other, which could be not useful and misleading even if mathematically possible 👍 ]
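The norm-induced metric being referenced is just d(a, b) = ‖a − b‖. A toy sketch (the function name is made up for illustration) showing that it works on arbitrary, completely unrelated measurement vectors - which is why this construction is so permissive:

```python
import numpy as np

def induced_metric(a, b):
    """Metric induced by a norm: d(a, b) = ||a - b||."""
    return np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))

# three arbitrary "measurement" vectors -- they need not be related at all
rng = np.random.default_rng(2)
u, v, w = rng.standard_normal((3, 6))

d_uv = induced_metric(u, v)
d_vw = induced_metric(v, w)
d_uw = induced_metric(u, w)   # triangle inequality: d_uw <= d_uv + d_vw
```

Any vector-valued measurements admit this metric, so "forms a manifold" is a very weak constraint on its own.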

Any neural information could be used to make a manifold via information geometry as well, but it'd probably be a good idea to call it a neural information manifold or something to avoid confusion https://franknielsen.github.io/IG/index.html


@axoaxonic
@dlevenstein @beneuroscience
yeah basically this^

my example was I think more ridiculous than you both were giving it credit for lol. I was saying "literally delete all connections between every neuron of every kind" - so no feed forward anything, no local connectivity - where every neuron is entirely independent. then since by definition the state space of the (independent) neural population is the space of all possible states for each (independent) neuron, and there are no inter-cellular interactions that would define a more constrained manifold within the state space, dynamical manifold === state space, trivially.

basically I don't disagree with anything y'all were saying, I was just saying that for any useful definition of "neural activity manifold" you need to have some interaction between the neurons - ie. circuit mechanisms - to make it a different concept than just state space
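That triviality claim can be checked numerically (a sketch with made-up independent "neurons," not real data): with no interactions of any kind, the population covariance has no low-dimensional structure, and the participation-ratio dimensionality comes out at essentially the full N:

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 5000, 20
rates = rng.standard_normal((T, N))   # fully independent neurons, no interactions

# participation ratio: an effective dimensionality from covariance eigenvalues
evals = np.linalg.eigvalsh(np.cov(rates.T))
pr = evals.sum() ** 2 / (evals ** 2).sum()   # ~N here: no dimensionality reduction
```

With independent units the activity fills the whole state space, so there is no lower-dimensional manifold to find - consistent with the point that a nontrivial neural manifold requires interactions between the neurons.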

@dlevenstein @beneuroscience

also @axoaxonic it's the constructing-the-metric part that neuroscientists usually uh skip lol. "hmm might as well treat the vector space induced by instantaneous firing rates over time as strictly euclidean and just take the euclidean distance between population measurements. no problem with that at all, nosiree, no curvature or holes or anything in this metric space"

@jonny @dlevenstein @beneuroscience Singularities are scary, and differential geometry and homotopy are hard. Smooth surfaces => smooth sailin'
@axoaxonic
@dlevenstein @beneuroscience
measuring distance as the crow flies between two points in neural state space and just getting blasted to smithereens by inhibition when my brain tries to traverse it.
@jonny @dlevenstein @beneuroscience Definitely a lot of cognitive dissonance between everyone talking about the nonlinear and nonlocal activity in the brain then producing very linear and very local mathematical objects to describe them

@jonny @axoaxonic @dlevenstein OK but related to the poll question, a relevant question is whether a manifold for a specific neural population critically depends on connections between its neurons or not. relevant because of importance of recurrence both in the brain and in RNNs.

(and not to drive this hypothetical too far, but I still don't agree about deleting all connections in the brain. a population of independent photoreceptors will never occupy the full state space. this is because they respond to different properties of light, not because of any connections. and as dan pointed out, you can still have meaningful movement on that manifold. i think we agree on most things but it doesn't all boil down to synapses for me. leave open room for gap junctions, electric fields, diffusion of neuromodulators, etc!)

@beneuroscience
@axoaxonic @dlevenstein
well how do you construct the state space? you could construct a state space that includes a bunch of states that are in principle impossible to occupy, and sure then random activity wouldn't fill it over arbitrarily long timescales, but you probably want to construct it in such a way that it contains only the states the system can actually occupy. so what I was saying is for practical purposes a tautology - neural dynamical manifolds are interesting precisely because they don't fill the possible state space.

that's not to say there can't be meaningful movement on a manifold of disconnected neurons in a dish eg. in response to light - all I was saying is the manifold (the metric space) is trivial, even if the dynamics on it aren't.

not sure what you mean re: recurrence, those are also connections? when I said "no connections of any kind" I meant literally any interaction that is possible, not just synapses, but treat each neuron as if it's in a vacuum-sealed container orbiting earth thousands of km from any other neuron. the point was just that without any interaction at all between them, then the manifold of their activity is not different than the state space of possible activity, and also that as a metric space it's pretty uninteresting.

ie. any nontrivial definition of neural manifolds necessarily depends on circuit mechanisms (interactions between neurons), though not solely on them.

@jonny @axoaxonic @dlevenstein For practical purposes, let's assume we record a population of N neurons in a brain area, yielding a state space of dimensionality D = N. We do PCA and find that most variance is captured by a low-dimensional manifold of dimensionality d << N.

I interpreted the poll question to be: given the experiment above, can we conclude that the manifold is produced by connections between *those* N neurons? I argue that conclusion is illogical, but agree that it's often the case. Importantly, it is often *assumed* to be the case by neuroscientists, and often explicitly designed that way in models.

@beneuroscience
@axoaxonic @dlevenstein
ohhhhhhhhh gotcha. yeah of course you can't, definitely agree there. I mean PCA on vectors of binned firing rate is not really an estimate of a dynamical manifold anyway since it y'know is predicated on assuming that all measurements are independent and therefore uh explicitly does not measure the dynamical part.

edit: and really, since it operates on averages within a dimension, it makes additional strong assumptions about the metric space of neural activity. I love this paper that demonstrates the idea in a single neuron v elegantly. the same reasoning applies to nonlinearly coupled networks
https://journals.physiology.org/doi/full/10.1152/jn.00412.2001

and then ya to assume whatever covariance structure you're capturing with PCA is just the product of the neurons in view is way further off.

dang it really is too generic of a term to use without qualification huh.
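The "PCA explicitly does not measure the dynamical part" point has a one-liner demonstration (a sketch on synthetic data): the sample covariance, and hence PCA, is identical after shuffling the time order, so whatever PCA captures cannot be dynamical:

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.cumsum(rng.standard_normal((1000, 5)), axis=0)  # strongly time-correlated data
X_shuf = X[rng.permutation(len(X))]                    # temporal order destroyed

# PCA is a function of the covariance alone, so both give identical components
C = np.cov(X.T)
C_shuf = np.cov(X_shuf.T)
```

The two covariance matrices match, even though the shuffled data has no dynamics left at all.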

@jonny @dlevenstein @beneuroscience

I've been taking forever to reply because I had work, pardon.

I was definitely doing the "read something very literally then infodump in a reply" thing, I see what you're saying. Was also taking the poll question very literally: anything neural + anything manifold.

A meaningful and useful manifold would have to reflect the interactions for sure, the underlying meaningful dynamics, instead of a bunch of disjoint state info. Otherwise it'd be easy to infer and interpret things that aren't actually relevant, which is kind of a popular thing to do given all the cool shapes the data can make, but not really helpful for really understanding what's going on.

@dlevenstein some #NeuroBuzz in the answers to this one 👀​
@dlevenstein I will say this is ambiguous though; an explanation of _what_? The same target for both? If the target is allowed to vary, then they may be complementary!
@dlevenstein
they're the same thing to me? just a different set of emphases - representational spaces are constructed against neural graphs
@dlevenstein blasphemy
@dbarack @dlevenstein I am pretty sure this poll counts as targeted trolling