Crowdsourcing your ideas for the #BrainIdeasCountdown:

Before we all turn into Winter Holiday pumpkins: What are some of the most interesting ideas in brain research that I haven't highlighted yet? I've sketched out my own ideas for these last 2/10 days (promise!). But brain research is working on so much, and I'm curious to hear your thoughts about what exactly that is. Here's my (random) list:

Idea 10: Our moods depend on what's happening in our gut.

Idea 9: Across individuals, the same brain functions are implemented by biological details that vary a lot.

Idea 8: Consciousness level can be measured via the complexity of brain activity.

Idea 7: Stimulation of the brain at multiple nodes may dance it from dysfunction back to normal function.

Idea 6: Gene therapy may circumvent the need to understand how mutated proteins lead to brain dysfunction.

Idea 5: Neurons in the brain influence one another through the electric fields that they generate, ephaptic coupling.

Idea 4: Our health and well-being are determined not just by our genes, but also by the genes of those around us, "social genetic effects."

Idea 3: We rely on our memories of the past to predict the future.

Idea 2: We can control the excitability of neurons by shining light on them, optogenetics.

Idea 1: Free will is NOT an illusion.

  • Ideas 1 & 2 updated post hoc to complete the list.

For details, click here: #BrainIdeasCountdown

So: What haven't I highlighted yet?

Thinking about brain research this way is a bit of a twist on how we normally think about things. I would say that we tend to think more in terms of findings, eg "That paper found ..." whereas this is something more like, "That stack of papers is working on the idea that ..."

It's interesting to think about one's own work in that light: What ideas am I working on and who else is working on the same idea (perhaps with a different approach)? Similarly, what sorts of ideas is the field working on? And are these ideas new or old?

Here's a slightly more provocative way to pose the question: In The Idea of the Brain, Matthew Cobb argues, "In reality, no major conceptual innovation has been made in our overall understanding of how the brain works for over half a century ... we still think about brains in the way our scientific grandparents did."

Setting aside semantic debates about what constitutes a "major conceptual innovation", brain researchers are clearly working on a large number of ideas that their grandparents had not thought of. But what are those, exactly?

@NicoleCRust Matthew Cobb is here too @matthewcobb – has there been any recent idea on what the brain is or how it operates that wasn't a rehash of an idea from before 1970?
#neuroscience #brain
@albertcardona @matthewcobb
Thanks! And really great question. How about:
The brain is a complex recurrent dynamical system, Hopfield 1982.
https://www.pnas.org/doi/10.1073/pnas.79.8
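The Hopfield idea is concrete enough to sketch in a few lines. The following is a hypothetical minimal illustration (the patterns, network size, and update schedule are invented here, not taken from the paper): binary patterns are stored in a symmetric weight matrix with a Hebbian rule, and the recurrent dynamics then pull a corrupted input back to the nearest stored memory.

```python
import numpy as np

def train(patterns):
    """Hebbian storage: W = (1/n) * sum_p p p^T, with a zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, sweeps=10):
    """Asynchronous +/-1 updates (fixed order) until the state settles."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Two toy +/-1 patterns in a 6-unit network (invented for illustration).
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train(patterns)

noisy = np.array([1, -1, 1, -1, 1, 1])  # pattern 0 with the last bit flipped
print(recall(W, noisy))  # settles back onto pattern 0
```

The point Hopfield 1982 makes is that memory retrieval falls out of the collective dynamics of the recurrent net, not out of any single unit.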

@albertcardona @matthewcobb

Or this one? Across different individuals, the same brain functions are implemented by biological details that vary a lot. This is true even for simple circuits like the ones that control the stomach of a crab, where the numbers of ion channels can vary 2-6x across different crabs but the circuit always does the same thing.

https://www.sciencedirect.com/science/arti

@albertcardona @matthewcobb
I'd love to see anyone add to this list! But the main point is also really important for everyone to grasp, I think: there are many fewer things in this list than you might imagine.
@albertcardona @matthewcobb
A shocking correlate of this is that the vast majority of brain researchers never come up with a new idea about how the brain works. Which I don't throw out there to belittle (I'm one too) but to inspire the next generation: WE.NEED.NEW.IDEAS.ABOUT.HOW.THE.BRAIN.WORKS!

@NicoleCRust @albertcardona @matthewcobb
One of the things I've been struggling with recently is how the vast majority of papers (including most or arguably all of mine) don't propose an idea that could, in principle, get us closer to understanding how the brain does what it does.

I have the feeling that there was a moment in time when people were coming up with tons of crazy theories. They were all wrong (probably), but it was exciting. Now we're just talking about how many dimensions a 'neural manifold' has, and I just can't get excited about that (sorry, manifold people).

In my case, I think I've had a small handful of ideas in the direction I'd like neuroscience to go, ideas that could scale to part of a full explanation of the brain, but I haven't pursued them because they were hard to define or get funding for. My resolution for 2023 is to focus more on those interesting questions and less on things that I think are easy to publish or fund.

For what it's worth, the biggest challenge for neuroscience, I reckon, is how it can operate in a stable way on what seems to be a surprisingly unstable substrate (e.g. synaptic turnover). If I had a good idea about how to solve that problem, that's what I'd be working on.

Edited to add: I don't mean to criticise anyone's work! It's more a personal realisation that I've not been pursuing research directions that I believe could really lead to understanding the brain. On a metascience level, I think it's important that different people take very different approaches, most of which they will disagree on. If it's not like this, we won't make progress. My realisation is perhaps that I've been trying too hard to fit in and it's not working for me.

@neuralreckoning
@NicoleCRust @albertcardona @matthewcobb @WiringtheBrain

Dan, I'm not sure I agree with you but need to think. One place for ideas is in review/conceptual papers and I think there's a good amount of ideas going around in those. I have tried to be active on this end, or at least as much as time permits.

@PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain
There does seem to be a small circle of people doing work like this, notably you, Paul Cisek, and Kevin here too. But even this conceptually exciting work is still very far from diffusing throughout the neuroscience community: most, unfortunately, still know nothing of it, which, I think, is the reality Dan's comment reflects.
@WorldImagining @PessoaBrain @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain
I didn't mean to criticise anyone's work! It's more a personal realisation that I've not been pursuing research directions that I believe could really lead to understanding the brain. On a metascience level, I think it's important that different people take very different approaches, most of which they will disagree on. If it's not like this, we won't make progress. My realisation is perhaps that I've been trying too hard to fit in and it's not working for me.
@neuralreckoning
@WorldImagining @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain
Didn't take it as a criticism at all. To the point of making more vigorous impact as @WorldImagining says, well that's part of science. Ideas have complex ways of diffusing and have their own dynamics... (sorry to be so predictable!)

@PessoaBrain @neuralreckoning @WorldImagining @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain A latent question throughout this thread is whether we would recognize a powerful new idea if it did come along (or has come along?...). It seems common to expect "new" ideas to be "exciting", etc, but I don't see those expectations as necessary or even desirable. If some new concept is a stepping stone to a powerful shift in understanding, it seems just as likely it would be difficult to reconcile with our current way of thinking. Otherwise, why didn't the shift already happen?

A really new idea about the brain is likely to be challenging and unreasonable, to be something we instinctively try to reject. More a thief in the night than a triumphal entrance, waiting for its importance to be discovered in retrospect. Like the old joke: every great scientific idea is wrong before it is obvious.

I expect you have counter-examples to offer, and I am dramatizing a bit. But a paltry return on the enormous number of person-hours going into brain science might arise not because we just haven't found that great new idea yet, but because we're fundamentally wrong about something, and error correction is psychologically harder than novelty detection.

Andrew Glennerster (@[email protected])

In summary, don’t imagine that the brain carries out complex 3D coordinate transformations (retinal -> egocentric -> world-centred). Instead, imagine a point moving across a high dimensional manifold of potential brain states and what that movement could achieve. 21/21.

@jason_ritt @PessoaBrain @neuralreckoning @WorldImagining @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress
I am adding Clare Press as she has advocated spending more time thinking about frameworks. https://doi.org/10.1016/j.cub.2021.11.027. Jason is correct that our paltry progress is because we have a fundamentally wrong conception. Here is a 6min version of what I think is missing: https://www.youtube.com/watch?v=oDLtPY1e9bk (summary: the brain produces just a daub of paint, not a picture in one go).

@ag3dvr @jason_ritt @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

Thanks for sharing the video, very interesting. Is what you're calling for, the canvas, a metaphor for something similar to what the global neural/cognitive workspace is intended to refer to?

@WorldImagining @jason_ritt @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

Thanks for looking at it. No, I am thinking of something simpler (from a neural perspective), i.e. neural state -> action -> neural state -> action, etc. This is easy for the brain to do, and it is easy to see how it evolves from simpler organisms. But this sounds like neural control of action. The tricky bit is to apply it to perception. 1/4

@WorldImagining @jason_ritt @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

In reinforcement learning, this is called a 'policy network'. In the paper and the thread I refer to above, I have tried to illustrate this claim (ie that perception can be understood as a policy network) using 3D vision as an example, i.e.:
https://doi.org/10.1098/rstb.2021.0448
explained in a 21-part toot here:
https://mastodon.sdf.org/@ag3dvr/109541827847990553
2/4

Understanding 3D vision as a policy network | Philosophical Transactions of the Royal Society B: Biological Sciences

It is often assumed that the brain builds 3D coordinate frames, in retinal coordinates (with binocular disparity giving the third dimension), head-centred, body-centred and world-centred coordinates. This paper questions that assumption and begins to ...


@WorldImagining @jason_ritt @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

You ask what the canvas is. One simple example is what is often described as a 'spatial canvas' that unites foveal processing across eye movements. In this paper, the authors are interested in the non-retinotopic representation of the face:
https://doi.org/10.1016/j.cub.2019.01.077. 3/4

@WorldImagining @jason_ritt @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

More generally, the canvas is the manifold of potential neural states across which we move. That is where the interesting complexity of the brain lies, not in the apparatus that generates an instantaneous neural state. The instantaneous neural state is the daub of paint, the interesting complexity is all about linking these together. 4/4

@ag3dvr @jason_ritt @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress
Okay, I think I see what you mean about the canvas, though I still have doubts about your opposing "instantaneous neural states." Such a thing, in my view, could exist only in the abstract, not experientially or physiologically. I don't know of any apparatus that generates "instantaneous states," since every neural or cognitive "module" develops under feedback...?
@ag3dvr @jason_ritt @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress
The workspace concept accounts for how, just like with a painting, every "daub" is constrained both by every previous relevant daub and also by the projected ultimate "painting," whatever that may be. In this sense, it fits with the canvas image, but precluding taking any daub to be independent. Are we on similar ground?

@WorldImagining @jason_ritt @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

Many thanks, @WorldImagining. For better or worse, I think I am saying something much simpler (at least at a neural level) than the ideas you are describing. 1/n

@WorldImagining @jason_ritt @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

First, by ‘current neural state’ I just mean r, the input set of firing rates to whatever store of synaptic weights we are assuming, W (I tend to assume this is the #cerebellum but that is not a critical assumption).

More detail here: http://wiki.glennersterlab.com/index.php?title=Notation

2/n

@WorldImagining @jason_ritt @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

So, r definitely has a potential physiological instantiation.

I will steer clear of 'experience', as this idea is sufficiently general that we can think of it applying throughout evolution, with the dimensionality of r (and the stored vectors in W) increasing in line with behavioural repertoire.

3/n

@WorldImagining @jason_ritt @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

BTW, you mention modules. Brain regions don’t appear in the description here. Of course, regional specialisation in the cortex is helpful in generating r, but that is a different topic.

4/n

@WorldImagining @jason_ritt @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

The above proposal is Markovian (the output depends only on the current state). Of course, people behave according to their history and their goals, but the challenge is to explain that using only a Markovian mechanism.

Incidentally, I don’t see any alternative - anything else, at a neural level, seems to be invoking magic.

5/n

@ag3dvr @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress Side point: The usual problem with a Markov assumption is knowing if your state space is sufficiently comprehensive.

Your state is a vector of firing rates, at least for the sake of simplicity in your description.

One could push back and say, first, that firing rates are not well defined without further assumptions. The "actual" state is a set of membrane potentials across the spatial extent of the cell plus the states of all the membrane channels. It is not a given that "rate" is a sufficient stand-in for membrane state such that one can throw all that detail away and end up with a Markov process.

But even if that were ok, one also has all the cell's internal metabolism and transcription machinery. Once again, two time points with the same rates might produce different future outcomes, so not Markov. And so on.

Challenging a Markov model is not invoking magic, usually it is just recognizing ignorance.

@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

@jason_ritt Many thanks for this. Yes, you are right, the assumed input vector is extremely simplistic, but that seems like a good starting point. If we could build a model of 3D vision using a vector of binary (on/off) inputs and synaptic weights, one could then see what extra subtlety could be added by including more graded signals. 1/n

@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

Clearly, I have unwisely opened a semantic can of worms by mentioning Markovian processes when answering @WorldImagining's point. You are of course right to point out that two identical inputs giving rise to different outputs means that the i/o function is not Markovian. 2/n

@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

This would not persuade me to back away from thinking about the i/o function as argmax(Wr) and, in your example, this of course means that W has to change after the first input. I am happy not to mention Markov. All I meant was that at any given instant, the output of the system is determined by the current input, r, and the current synaptic weights, W. 3/n
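The single step being described, output = argmax(Wr), is small enough to write down. A toy sketch follows; the dimensions, the random W and r, and the weight update are all invented here purely to make the i/o claim concrete, not drawn from any actual model in the thread.

```python
import numpy as np

rng = np.random.default_rng(42)
n_inputs, n_actions = 8, 3

W = rng.normal(size=(n_actions, n_inputs))  # current synaptic weights
r = rng.normal(size=n_inputs)               # current input firing rates

# At this instant, the output is fully determined by r and W:
action = int(np.argmax(W @ r))

# Learning changes W between presentations, so the *same* r can later map
# to a different action -- the point above about W having to change after
# the first input. (This update rule is invented for illustration.)
W[action] -= 0.5 * r
new_action = int(np.argmax(W @ r))
print(action, new_action)
```

Nothing about the past appears in the argmax itself; history can only act through whatever it has already written into r and W.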

@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

The past may influence r and W, of course, but at the time of the generation of the output, r and W are all you have. A valid criticism can be 'Well, if that is how you put it, then you are not saying anything helpful about the role of the past on the current decision.' That may be true. 4/n

@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

But it does force one to think about judgements that rely on recent information. Take the face discrimination task we touched on in relation to the 'canvas'. Suppose it takes 3 saccades before a reliable discrimination between two faces can be made (eg hairline-mouth-nose) and that no single fixation is sufficient to perform well at the task.... 5/n

@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

... In this case, it seems inescapable that r is different for two identical retinal inputs: fixating the nose as the third fixation versus as the first fixation. That raises the question that you began with, i.e. about the state space being sufficient. It is a good one and is the right place to start. 6/6

@ag3dvr @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress I don't strongly disagree with your particulars. I frame things a little differently in my head.

First, I think the Wr construction is specific to artificial neural net lineages, and unlikely to be adequate at the level of the whole brain (where this thread started), even if it is a decent model in more limited settings. There are too many regions with dynamics that are inelegantly described with rate functions (e.g. with strong transient responses, long inactivation, rebound bursting in thalamus). With a small abuse of history, I include Wilson-Cowan in that set of ANN models in this context.

I come from a dynamical systems background, so I'm generally ok including, say, gating variables or something like them as an intrinsic part of state. Of course, one could try to expand to a W r̃, where the "rate" variable now includes things other than spike rates (and in fact we are sorta pursuing models of that kind), but I don't see a strong argument that

dr/dt = σ(Wr)

is sufficiently universal or necessary as a brain model; it could just be

dx/dt = f(x),

modulo the universal approximation theorems.

As a technicality, I don't strongly distinguish stochastic from deterministic Markov properties. So "distinct outputs for identical inputs" has to be interpreted in the probabilistic sense.

Second, I don't think it makes much sense to ask if the brain is Markovian. Models may or may not have the Markov property; it is not a property of physical systems. The argument is just what we already discussed. Every (>1D) system that can be described by a (reasonable) dynamical model can be approximated with either a Markovian or non-Markovian model via suitable transformation of (state,update_operator), at least if one is willing to work on function spaces or similar abstractions.

A practical example where this kind of flexibility might matter is the construction of delay-embedding models with truncation after some number of terms. One trades off the dimension of the state one tracks against how far into the future the model's predictions remain accurate. It is not always obvious that maximizing "Markovianness" is the best choice.
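As a concrete illustration of that trade-off (a generic delay-embedding sketch with made-up signal and parameters, not tied to any particular dataset): stacking d lagged copies of a scalar observation enlarges the tracked state, in exchange for embedded dynamics that look more Markovian.

```python
import numpy as np

def delay_embed(x, dim, lag=1):
    """Rows are [x_t, x_{t-lag}, ..., x_{t-(dim-1)*lag}] for each valid t."""
    rows = len(x) - (dim - 1) * lag
    cols = [x[(dim - 1 - k) * lag:(dim - 1 - k) * lag + rows]
            for k in range(dim)]
    return np.column_stack(cols)

t = np.linspace(0, 20, 500)
x = np.sin(t) + 0.5 * np.sin(3 * t)  # toy scalar observable

# Truncating at a small embedding dimension keeps the tracked state small;
# a larger dimension carries more history into the "current state".
emb2 = delay_embed(x, dim=2, lag=5)   # shape (495, 2)
emb5 = delay_embed(x, dim=5, lag=5)   # shape (480, 5)
print(emb2.shape, emb5.shape)
```

Both embeddings describe the same series; they differ only in how much history is folded into the state, which is exactly the modeling choice at issue.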

So I see the useful kinds of questions being along the lines of "What do we need to track as 'state' to make a Markovian model that does a good job of matching data?", or "How do we transform a non-Markovian model that is a good fit into an interpretable Markovian model?". That is, we prefer Markov, similar to how we prefer linearity, and prefer modularity, and prefer... but these are aspirations for models that are good enough, not to be confounded with the "real" properties of the physical system.

We should not ask whether a system has this property. We should ask what a good choice of state is. I think that is consistent with your statements, except that we put a different amount of value on Wr as a framing device.

Edit: Typo, and fighting the Latex interpreter.

@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

Thank you for your reply, I really appreciate it.

I am very happy to concede all the points you make - I would not be in any position to do otherwise - and yet I think there is still a useful debate to be had in relation to the discussion that @NicoleCRust started. 1/n

@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

My Marr/Albus/1960s/Wr/policy network summary of moving from one state to another may be much too crude, but the claim is that the interesting complexity lies elsewhere. 2/n

@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

Specifically, the suggestion is that to understand the brain we should stop focussing on an individual step and worry, instead, about the nature of the manifold of potential states that the current state moves across. If we are to get to grips with the elegance and sophistication of the brain, that is where we should be looking. 3/n

@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

We have discussed the problem of recognising a face and agreed that, in the example we considered, the processing must be non-Markovian in the sense that 3 fixations could accomplish the task where one (any one of the 3) could not. 4/n

@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

This means, inescapably, that the path across state space is different (and not just the same locations on the manifold visited in a different order) if the fixations are hairline-mouth-nose versus nose-mouth-hairline. There is a lot of thinking to be done here that, as far as I can see, we, as neuroscientists, are not really doing at the moment. 5/n

@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

Maybe we are not the best people to do it. Maybe the part of the problem that #neuroscientists are genuinely equipped for is a single pass through the system/the processing that occurs in 300ms before the next saccade/one daub of paint, and we should leave the interesting part of solving the brain to others … 6/n

@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

… who are better equipped to think about that part of the problem - for example, those developing #reinforcementlearning in active robots or autonomous vehicles. 7/n

@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

After all, the problem of putting together many daubs of paint to make a painting (recognise a face/see in 3D/consider knight-to-e4) is not exclusive to the brain and it takes a different sort of training to think about policy networks in active, task-driven agents. 8/n


@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

Incidentally, I had a conversation yesterday with Marty Raff, an eminent developmental biologist https://en.wikipedia.org/wiki/Martin_Raff. We were talking about the remaining mysteries in biology. He described how development had been considered as one until all of a sudden it wasn’t. 9/n


@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

The crumbling of the walls happened incredibly quickly. Marty assumed, just as you did in your thief-in-the-night toot https://neuromatch.social/@jason_ritt/109594701648246030, that the same will happen for the brain. 10/n


@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

To me, it is clear what needs to be done and who is likely to do it. It would take a longer thread to explain (beginnings of a discussion here: https://mastodon.sdf.org/@ag3dvr/109616027263944503), but essentially the machine learning crowd will end up marching through a simulation of evolution (they are beginning to do this in various ways), … 11/n

Andrew Glennerster (@[email protected])

@[email protected] @[email protected] @[email protected] @[email protected] Here is a video of me trying to explain why neural nets and the cerebellum are similar: https://www.youtube.com/watch?v=NiRPq11wA-A Two changes in evolution: (i) dimensionality of the input (by assumption, cortical) vector and of the stored (by assumption, cerebellar) vectors (ii) the length of paths through that space to achieve rewards. I make some mistakes in that video, eg a policy network is _all_ the state-contingent actions (π(a|x)), not just a single instance. 2/n


@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

… gradually expanding the range of tasks that are necessary for survival and, crucially, gradually increasing the dimensionality of the state space (including richer sensory input) as and when this helps to distinguish contexts for action that could not be distinguished in a lower dimensional space. 12/n

@jason_ritt @WorldImagining @PessoaBrain @neuralreckoning @NicoleCRust @albertcardona @matthewcobb @WiringtheBrain @elduvelle @clarepress

They will build up policy networks much like those of animals including, in the end, us. At that stage, even if these are built using a very simple atomic unit (Marr/Albus/1960s/Wr/policy network), we will have understood much more about what the brain is doing than we do now. 13/13