Picking up on some of the BIG IDEAS in brain research, which was wonderfully chaotic when we last discussed in December under the hashtag #BrainIdeasCountdown, e.g. https://neuromatch.social/@NicoleCRust/109557289393362842

Here's an attempt to fill in some blanks, and let's flip the hashtag: #BigBrainIdeas. I'll start from the notion that there are facts, there are ideas, and then there are "Big Ideas" — it's the last of these I'll focus on. Please join in!

I'd argue that one of the most influential Big Ideas about the brain in the latter half of the 20th century is the notion that:

The neocortex of the brain is made up of a generic functional element that is repeated again and again and from this repetition, all of cortical function emerges

I'm talking about the cortical column, first described by Vernon Mountcastle in 1957. The unit contains ~10K neurons and humans have ~25 million of them. The rapid evolution of humans is proposed to have followed from a rapid expansion of cortex that happened because of this repetitive crystalline structure. The gist behind the "functional" bit is that each unit always does the same generic computation, and the different functions of different brain areas result from the different inputs that these units receive. @TrackingActions very nicely summarizes the ideas here: https://www.nature.com/articles/s41583-022-00658-6

So what does this generic functional unit do? Proposals vary. One idea, also reflected in deep convolutional neural networks, is that it does two(ish) things: selectivity and invariance, stacked repetitively to support things like recognizing objects. Other proposals suggest that the brain is a prediction machine and each unit contributes a little bit to those predictions in a manner that relies not just on feedforward connectivity, but also feedback. Some proposals suggest that the function of the unit varies along a gradient as a consequence of biophysical properties like receptor expression: https://www.nature.com/articles/s41583-020-0262-x.
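The selectivity-and-invariance proposal can be made concrete with a toy sketch (my own illustration, not from the thread, with made-up signals): selectivity as a sliding dot product against a preferred template — one convolutional filter — and invariance as max-pooling over position, so that the pooled output tolerates small shifts of the preferred feature:

```python
import numpy as np

def selectivity(signal, template):
    """Selectivity stage: a sliding dot product against a preferred
    template, i.e. the response of one convolutional filter."""
    n = len(signal) - len(template) + 1
    return np.array([signal[i:i + len(template)] @ template for i in range(n)])

def invariance(responses, pool=3):
    """Invariance stage: max-pooling discards exact position, so the
    output tolerates shifts of the feature within a pooling window."""
    n = len(responses) // pool
    return responses[:n * pool].reshape(n, pool).max(axis=1)

template = np.array([1.0, -1.0, 1.0])  # the unit's "preferred feature"
signal_a = np.array([0, 1, -1, 1, 0, 0, 0, 0], dtype=float)
signal_b = np.array([0, 0, 1, -1, 1, 0, 0, 0], dtype=float)  # same feature, shifted

out_a = invariance(selectivity(signal_a, template))
out_b = invariance(selectivity(signal_b, template))
print(out_a, out_b)  # identical pooled responses: [3. 1.] [3. 1.]
```

Stacking these two stages repeatedly — detect, then pool, then detect again on the pooled output — is the deep-convnet rendering of the "generic repeated computation" idea.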

Among brain researchers, this Big Idea is polarizing - obvious to some and misguided to others. Where do you stand in terms of your buy-in to this Big Idea?

#neuroscience #psychology #neuroAI #cognition @cogneurophys #BigBrainIdeas

Nicole Rust (@[email protected])

Here's a slightly more provocative way to pose the question: In The Idea of the Brain, Matthew Cobb argues, "In reality, no major conceptual innovation has been made in our overall understanding of how the brain works for over half a century ... we still think about brains in the way our scientific grandparents did." Setting aside semantic debates about what constitutes a "major conceptual innovation", brain researchers are clearly working on a large number of ideas that their grandparents had not thought of. But what are those, exactly?

Neuromatch Social
@NicoleCRust @TrackingActions @cogneurophys I am not sure that this idea has actually been very influential on the grand scale of neurocog theories. The modern theoretical approach seems to revolve around understanding how brain areas connect in networks to solve problems, and I can't see how generic computation would inspire/drive this perspective.
@bwyble @TrackingActions @cogneurophys
The idea spawned maps like this, which have been highly influential for neurocog theories, no?
@NicoleCRust @TrackingActions @cogneurophys I thought that diagram was the result of neuroanatomy studies. Why would a generalized function theory lead to a highly specific wiring diagram like this?
@bwyble
Yes, but ... Felleman & Van Essen defined the hierarchical levels of this diagram according to the canonical microcircuit rule: L4 receives feedforward input; L2/3 sends feedforward output; L5/6 sends feedback output.

@NicoleCRust @bwyble

That rule was derived from anatomy too.

@DrYohanJohn @NicoleCRust I agree with Yohan, I don't think the microcircuit is crucial there, rather it's observing that there are laminar patterns that are more or less ubiquitous.
@bwyble @DrYohanJohn
Interesting! To me, those ideas could not be more connected.

@NicoleCRust @bwyble @DrYohanJohn

shameless plug, read my CONB on this topic with Adam K, for my more edited, fleshed out opinion!
https://www.sciencedirect.com/science/article/pii/S0959438822001246

(just insert "microcircuit motif" wherever you read cell-type; Adam K. and I disagree about which phrase is the better one :p)

I get the argument about confusing implementation and computation, but I agree with @NicoleCRust that the idea of canonical cortical computations has been super influential, especially / at least in vision (which is all of computational neuro anyway, amiright?)

I think the idea of a simple, repeated computation is kind of necessary / permissive for certain types of "grand unified theory" that are very intuitively appealing exactly because they squash computation and implementation together into one little thing that is understandable in words. The fact that these "theories" are conceptually "small" all the way down to implementation means that people understand them, they catch on, and they drive research.

Now whether this is positive or not is an open debate. But I think there's no way we can say that ideas like predictive processing, backprop, divisive normalization, and maybe even convolution haven't been wildly influential in the field.

To state this in a slightly more aggressive way than I feel: I do think the idea of a canonical microcircuit is very useful, because it's studiable! If we start off by assuming everything is brain soup, it's very easy to just give up and assume we'll never understand implementation. I'd rather start from the assumption that there exist architectural motifs that matter, and take that as a hypothesis to work from, than just admit defeat.

So maybe the answer is: it depends on the level of explanation you are looking for. If you don't care about multilevel understanding, then "how brain regions connect in networks" black-box-style understanding is enough. I personally think it's only step 1, and understanding more details of the implementation is step 2. Furthermore, I've come to think that the implementation details will probably constrain and help us understand the higher level.

@achristensen56

I agree completely! In fact this aligns very nicely with what I said about normative arguments. It would definitely be nice if we could think of each cortical area/column as processing its inputs in a generic manner. But it's an empirical question whether the idealization is true.

And it may well be that a second term in the infinite series (which I am considering in part because of my anatomist colleagues) may add functional flexibility.

@NicoleCRust @bwyble

@achristensen56 @NicoleCRust @bwyble

In other words, there is a whole continuum between "columns are interchangeable" and "each cortical column is a unique snowflake". Systematic architectonic variation has been observed: the challenge now is for a few computational neuroscientists to imagine a few functional/computational stories for why it's there.

This is why I used the term "cortical spectrum": it's not giving up, any more than the EM spectrum requires giving up.

@DrYohanJohn @achristensen56 @bwyble
I like this idea, a lot.

@NicoleCRust @DrYohanJohn @achristensen56 @bwyble

This discussion makes me wonder: if cortical columns are elementary circuits whose computations are compounded across the cortical surface, does this imply that those computations necessarily involve topographic maps?

I find it hard to imagine that the same computations could be performed if the microcircuits were randomly placed on the cortical surface. The total length of the wiring would not allow it, while minimizing the length of the wiring would produce topographic maps (retinotopy, orientation maps, somatosensory maps, ...).
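The wiring-length argument can be checked with a toy calculation (my own sketch, with made-up numbers): place the units handling a set of adjacent inputs either in input order (topographic) or at random along a hypothetical 1-D cortical strip, and sum the cortical distance spanned by the connections between units that handle neighbouring inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # microcircuit units along a hypothetical 1-D strip of cortex

def wiring_cost(position):
    """Sum of cortical distances between units that handle adjacent
    inputs (e.g. neighbouring retinal locations)."""
    return np.abs(np.diff(position)).sum()

topographic = np.arange(n, dtype=float)       # input neighbours stay adjacent
scrambled = rng.permutation(n).astype(float)  # random placement of the same units

print(wiring_cost(topographic))  # n - 1 = 49: the minimum possible
print(wiring_cost(scrambled))    # typically around n^2/3, far larger
```

The topographic layout achieves the minimum possible cost for a chain of neighbour-to-neighbour connections, while random placement inflates it by roughly a factor of n/3 — one way to see why wiring minimization pushes toward retinotopy-like maps.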

@laurentperrinet @NicoleCRust @DrYohanJohn @achristensen56 I think there are several reasons for using topographic representations. One of them is reducing intracolumnar wiring costs as you say.

But also, topography is a way to innately encode information about the environment. In the physical world, sampled light from nearby locations is more likely to be from the same object/surface/location, and so it makes sense to cluster the processing of light according to its location on the retina. This makes it easier for attentional modulation to select the cortical areas processing a given object/boundary. The same argument can be made for the auditory, somatosensory, motor domains, and this can be extended to higher order cognitive aspects as well, like language, etc.

@bwyble @laurentperrinet @NicoleCRust @achristensen56

Yup. Topography is a great way to group signals in a way that preserves spatial relations (including in abstract spaces).

What sorts of topographic maps have been found in higher cognitive areas? Even place cells show no topography, given remapping etc.

@DrYohanJohn @bwyble @laurentperrinet @NicoleCRust @achristensen56 Topographic maps have been shown for numerosity and timing, illustrating that the computational benefits of topography extend to at least some cognitive functions in association cortex.

@dumoulin

Interesting! How stable are they from one task to another?

@bwyble @laurentperrinet @NicoleCRust @achristensen56