🚨🚨🚨NEW PREPRINT🚨🚨🚨

A big mystery in brain research is which neural mechanisms drive individual differences in higher-order cognitive processes. Here we present a new theoretical and experimental framework, developed in collaboration with Vincent Tang, Mikio Aoi, Jonathan Pillow, Valerio Mante, @SussilloDavid and Carlos Brody.

https://www.biorxiv.org/content/10.1101/2022.11.28.518207v1

1/16

@marinopagan @SussilloDavid
This is really exciting. Congratulations!

Here we've been talking a lot about complex dynamical systems, and I have so many questions.

First one: Over on the bird site, @SussilloDavid wrote, "Hell, I’ll go bigger. This work provides strong indirect evidence for the computation thru dynamics framework."

Can you please unpack that just a bit more @SussilloDavid ?

@PessoaBrain @DrYohanJohn @manlius @kordinglab @neuralengine @cogneurophys
@complexsystems

@NicoleCRust @marinopagan @SussilloDavid @PessoaBrain @DrYohanJohn @manlius @neuralengine @cogneurophys @complexsystems yeah. I am quite unclear how else we could compute than through dynamics. So would be very curious about the answer.

@kordinglab @NicoleCRust @marinopagan

I’d distinguish computation without dynamics as feedforward computation.
y = f(g(h(x))) isn’t dynamical imo;
x_n = f(f(f(…(x_0)))) is.

As vision has historically been the paradigm, the top equation has been the predominant approximation in neuro.

Recurrence (the bottom equation) brings a different set of formalisms and appears to be useful in other, more traditionally ignored areas.
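The distinction above can be made concrete in a few lines of code (an illustrative sketch; f, g, h are arbitrary toy maps, not anything from the paper):

```python
def h(x): return x + 1.0
def g(x): return 2.0 * x
def f(x): return x - 0.5

# Feedforward: a fixed, finite composition of distinct stages.
def feedforward(x):
    return f(g(h(x)))

# Recurrent: the SAME map f applied repeatedly, so the trajectory
# x_0, x_1, ..., x_n is a discrete-time dynamical system.
def recurrent(x0, n):
    x = x0
    for _ in range(n):
        x = f(x)
    return x

print(feedforward(1.0))   # f(g(h(1))) = f(4.0) = 3.5
print(recurrent(1.0, 3))  # f(f(f(1.0))) = -0.5
```

The point being that only the second form has a state that evolves in time under a fixed update rule, which is what the dynamical-systems formalisms apply to.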

@SussilloDavid @NicoleCRust @kordinglab @marinopagan that seems to me a very strange definition of dynamics. Even a feedforward system has dynamics. As a case in point, some held the sharp overshoot responses of V1 and MT cells to be evidence for some kind of feedback gain control system, but they can be entirely explained by the transient dynamics of linear filters (as described in any undergrad controls or circuits textbook).
@marinopagan @SussilloDavid @kordinglab @NicoleCRust of course, feedback systems can have much richer and more complex dynamics, and I certainly believe that these are important for neural computation. But if feedback vs feedforward computation is what is meant, wouldn't that be a better term?
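The overshoot point can be demonstrated with a minimal sketch (hypothetical pole values, chosen only to illustrate): a second-order linear filter with lightly damped complex poles overshoots its steady state in response to a step input, with no feedback gain control anywhere in the system.

```python
import math

# Complex-pole pair at r * exp(+/- i*theta): underdamped second-order filter.
r, theta = 0.9, 0.3
a1, a2 = 2 * r * math.cos(theta), r * r
b0 = 1 - a1 + a2   # normalize the DC gain to 1

def step_response(n_steps):
    """Drive y[n] = a1*y[n-1] - a2*y[n-2] + b0*x[n] with a unit step."""
    y = [0.0, 0.0]
    for _ in range(n_steps):
        y.append(a1 * y[-1] - a2 * y[-2] + b0 * 1.0)
    return y[2:]

y = step_response(300)
print(max(y) > 1.0)             # sharp transient overshoot above steady state
print(abs(y[-1] - 1.0) < 1e-6)  # yet it settles back to the DC gain of 1
```

This is exactly the textbook transient: the response rises past its final value and rings back down, purely from the filter's linear dynamics.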
@neuralengine @marinopagan @kordinglab
@NicoleCRust
In an idealized system, the top equation doesn't have dynamics. In brains, where there are complex dynamics in every neuron, even for an AP, I see your point from your first post.
Feedforward works fine, but feedback to me implies controllers, optimal control, etc. CtD was meant rather as the pursuit of how a massively recurrent network of simple neurons computes.
@neuralengine @marinopagan @kordinglab
@NicoleCRust
Ten years ago, we could barely train RNNs, and nobody understood how an optimized RNN worked even on toy problems. Now we can understand trained RNN solutions on toy problems.
Relatedly, outside of single-neuron integrators in LIP or population vectors, few had any clue about the dynamics of neurons in, e.g., motor or frontal cortex.
We now have sophisticated hypotheses, even if they may be wrong.
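The integrator case mentioned above is the simplest instance of computation through dynamics, and it fits in a few lines (a hand-set toy, not a trained network or anything from the preprint): a one-unit linear RNN with recurrent weight 1.0 is a perfect integrator of its input stream, the classic dynamical picture of evidence accumulation.

```python
def rnn_integrate(inputs, w_rec=1.0, w_in=1.0):
    """One-unit linear RNN: h[t] = w_rec * h[t-1] + w_in * x[t]."""
    h = 0.0
    trajectory = []
    for x in inputs:
        h = w_rec * h + w_in * x   # the recurrence IS the computation
        trajectory.append(h)
    return trajectory

# Noise-free pulses of evidence for (+1) and against (-1) a choice:
evidence = [1, 1, -1, 1, 1, -1, 1]
print(rnn_integrate(evidence))  # running sum: [1, 2, 1, 2, 3, 2, 3]
```

The sign of the final state gives the decision; set w_rec < 1 and the same circuit becomes a leaky integrator instead. The trained RNNs in the CtD literature implement higher-dimensional versions of this kind of state evolution.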
@SussilloDavid @marinopagan @kordinglab @NicoleCRust I'm definitely not disagreeing that feedback is important (and you are using the word recurrence; to me they are the same thing, and we are not talking about linear control systems, optimal or otherwise). I think this is likely just a semantic issue. But I would say that computation can use dynamics even in a feedforward system, just as a logical point (although the dynamics are simpler).