On explanations in brain research:

A thread of the same idea comes up again and again in brain research. It's the notion that identifying the biological details (such as the brain areas/circuits or neurotransmitters) associated with some brain function (like seeing or fear or memory) is not a complete explanation of how the brain gives rise to that function (even if you can demonstrate the links are causal). To paraphrase:

Mountcastle: Where is not how https://www.hup.harvard.edu/catalog.php?isbn=9780674661882
Marr: How is not what or why http://mechanism.ucsd.edu/teaching/f18/David_Marr_Vision_A_Computational_Investigation_into_the_Human_Representation_and_Processing_of_Visual_Information.chapter1.pdf
@MatteoCarandini: Links from circuits to behavior are a "bridge too far" https://www.nature.com/articles/nn.3043
Krakauer et al: Describing that is not understanding how https://www.cell.com/neuron/pdf/S0896-6273(16)31040-6.pdf
Poeppel: Understanding brain maps does not formulate "what about" the brain gives rise to "what about" behavior https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3498052/

Any other explicit references to add to this list? @Iris, @knutson_brain, Anyone?

Also, I imagine that some form of the opposite idea must also be percolating: the notion that 'algorithmic' descriptions of the type used to build AI will be insufficient to do things like treat brain dysfunction (where we arguably need to know more about the biology to, e.g., create drugs). Any explicit references for that idea? @albertcardona @schoppik, @cyrilpedia, Anyone?

#neuroscience #cognition #neuroAI #psychology #philosophy

Perceptual Neuroscience — Vernon B. Mountcastle

This monumental work by one of the world's greatest living neuroscientists does nothing short of creating a new subdiscipline in the field: perceptual neuroscience. Vernon Mountcastle has gathered information from a vast number of sources reaching back through two centuries, from phylogenetic, comparative, and neuroanatomical studies of the neocortex to rhythmicity and synchronization in neocortical networks and inquiries into the binding problem.

@NicoleCRust @MatteoCarandini @Iris @knutson_brain @schoppik @cyrilpedia

As a biologist by training, I find it self-evident that if we are to come up with preventive measures or a cure for, e.g., neurodegenerative diseases, we ought to be explicitly studying the biological neural networks that suffer from such diseases in the first place.

That is not to say that we aren't going to learn a lot from studies of how artificial networks work and behave. We will. But the cure, for instance, would most likely have to be of the biological kind, and even more likely an intervention on the immune system, be it vaccination or otherwise. I'm referring to, e.g., multiple sclerosis and the Epstein-Barr virus; there may be many more such cryptic, decades-after-infection effects on the nervous system.

#neuroscience

@albertcardona
I agree. It's the answer to the apparent conundrum: given that our ability to build brains (AI) is growing so fast, why isn't our ability to fix them keeping pace? The answer: we aren't actually building brains; we are building algorithms.

Perhaps it's so self-evident that no one has bothered to write it down (unlike its complement, listed above)?

@NicoleCRust @albertcardona

There are a few parts to this. For the first one, how far a descriptive approach will get us in terms of understanding function, I like what Cori Bargmann said in our conversation (which started with an analogy to the Human Genome Project and pathophysiology): understanding the components won't be an explanation, but it will set the boundaries within which the explanation(s) must be found.

In terms of AI, for me the approach most likely to be useful is comparative: to treat it as we would an alien lifeform in questions around the origin of life. But it will not necessarily map directly onto any understanding of the biological brain.

@NicoleCRust @albertcardona

A somewhat related ref that I enjoyed this week, sent to me by Zach Mainen: a perspective on LLMs by Terry Sejnowski.

https://direct.mit.edu/neco/article/35/3/309/114731/Large-Language-Models-and-the-Reverse-Turing-Test

Large Language Models and the Reverse Turing Test

Abstract. Large language models (LLMs) have been transformative. They are pretrained foundational models that are self-supervised and can be adapted with fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and, more recently, LaMDA, both of them LLMs, can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions and debate on whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs reaching wildly different conclusions. A new possibility was uncovered that could explain this divergence. What appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a reverse Turing test. If so, then by studying interviews, we may be learning more about the intelligence and beliefs of the interviewer than the intelligence of the LLMs. As LLMs become more capable, they may transform the way we interact with machines and how they interact with each other. Increasingly, LLMs are being coupled with sensorimotor devices. LLMs can talk the talk, but can they walk the walk? A road map for achieving artificial general autonomy is outlined with seven major improvements inspired by brain systems and how LLMs could in turn be used to uncover new insights into brain function.

@NicoleCRust @cyrilpedia @albertcardona

Tx!
Also, send my HT to Zach Mainen for such a central quote in the central part of the 'In Silico' documentary:

» No!
We don't really care about spikes in the dendrites!
We don't want to predict spikes in the dendrites!
We want to predict what it is going to dooo!!! «

@NicoleCRust @MatteoCarandini @Iris @knutson_brain @albertcardona @schoppik @cyrilpedia

Are you suggesting that we should accept that knowing the brain circuitry is the "how",
OR that we don't have enough detailed knowledge,
OR that the "how" and "why" can't be explained by details of neuron connections?

I love it when one layer of knowledge builds up the next, like chemistry to understand cell biology or neurophysiology to understand behavior.

@NicoleCRust @MatteoCarandini @Iris @knutson_brain @albertcardona @schoppik @cyrilpedia

Niv, Y. (2021). The primacy of behavioral research for understanding the brain. Behavioral Neuroscience

https://psycnet.apa.org/fulltext/2021-53272-001.html

@NicoleCRust @MatteoCarandini @Iris @knutson_brain @albertcardona @schoppik @cyrilpedia
Nicole, I am not sure this is what you are looking for:
Perich MG, Rajan K (2020): Rethinking brain-wide interactions through multi-region ‘network of networks’ models. Current Opinion in Neurobiology 65:146–151.
Stringer C, Pachitariu M, Steinmetz N, Reddy CB, Carandini M, Harris KD (2019): Spontaneous behaviors drive multidimensional, brainwide activity. Science 364:255.

@NicoleCRust @MatteoCarandini @Iris @knutson_brain @schoppik @cyrilpedia

Only to add that present-day "AI" is built on artificial neural network architectures inspired by early neuroscience work from the 1960s and '70s or so, mostly the visual system of the cat as far as I know.
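As a toy illustration of that lineage, here is a minimal sketch (my own, not taken from any of the papers above) of the "local receptive field with shared weights" idea that convolutional networks inherited from Hubel and Wiesel's cat visual cortex work; every size and filter value below is arbitrary:

```python
import numpy as np

# Toy sketch: each output unit looks only at a small patch of the image,
# and the same filter weights are reused at every location, loosely
# analogous to "simple cells" tiling the visual field.

def conv2d(image, kernel):
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]   # local receptive field
            out[i, j] = np.sum(patch * kernel)  # shared weights everywhere
    return np.maximum(out, 0)                   # simple nonlinearity (ReLU)

image = np.random.rand(8, 8)                    # a tiny grayscale "image"
edge_filter = np.array([[1.0, -1.0],
                        [1.0, -1.0]])           # crude vertical-edge detector
response = conv2d(image, edge_filter)
print(response.shape)                           # (7, 7) map of local responses
```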

We now know that biological neural networks are far more complex in both architecture and operation, and it is unclear to what extent such complexity serves the housekeeping operations of the cellular substrate (the neurons and glia) or is very much part of the implementation.

On that last point: not unlike Adrian Thompson's 1997 "An evolved circuit, intrinsic in silicon, entwined with physics" https://link.springer.com/chapter/10.1007/3-540-63173-9_61, where surprising and unexpected aspects of FPGA operation contribute significantly to the performance of circuits evolved with genetic algorithms.
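To make the setup concrete, here is a minimal sketch of the kind of evolutionary loop involved; the genome size, mutation rate, and the measure_circuit stub are all hypothetical stand-ins, since in Thompson's intrinsic evolution the fitness of each genome was measured on a physically configured FPGA rather than in software:

```python
import random

# Illustrative sketch of evolving a circuit-configuration bitstring.
# In the real experiment the fitness of each genome was a measurement of a
# physically configured chip; measure_circuit below is a software stand-in.

GENOME_BITS = 64      # bits configuring the (hypothetical) circuit
POP_SIZE = 50
MUTATION_RATE = 0.02
GENERATIONS = 100

def measure_circuit(genome):
    """Placeholder fitness: reward genomes whose neighboring bits alternate."""
    return sum(1 for a, b in zip(genome, genome[1:]) if a != b)

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, GENOME_BITS)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    ranked = sorted(population, key=measure_circuit, reverse=True)
    parents = ranked[:POP_SIZE // 2]                     # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=measure_circuit)
print("best fitness:", measure_circuit(best), "of", GENOME_BITS - 1)
```

Thompson's point, echoed above, was that when fitness is measured on the physical substrate, evolution happily exploits whatever the substrate offers, including effects the designer never modeled.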

#neuroscience #computing

An evolved circuit, intrinsic in silicon, entwined with physics

‘Intrinsic’ Hardware Evolution is the use of artificial evolution — such as a Genetic Algorithm — to design an electronic circuit automatically, where each fitness evaluation is the measurement of a circuit's performance when physically...
