Following up on the idea that the theories we will need to tackle the complexity of the brain have not been developed yet (e.g. https://mastodon.social/@NicoleCRust/109472784550141853)

What types of up-and-coming theoretical(ish) frameworks are you most excited about? Dynamical systems / RNNs? Topology? Network theory? Something else entirely?

@complexsystems @cogneurophys @PessoaBrain @SussilloDavid @carlosbrody @Neurograce @neuralreckoning @tyrell_turing @DrYohanJohn @cian @WiringtheBrain @tdverstynen @neuralengine (Anyone?)

@NicoleCRust @complexsystems @cogneurophys @PessoaBrain @SussilloDavid @carlosbrody @Neurograce @neuralreckoning @DrYohanJohn @cian @WiringtheBrain @tdverstynen @neuralengine

My guess is that we will see greater theoretical advances once we get a firmer mathematical handle on the deep connections between RL, control, and inference (real traction, not the hand-wavy active inference version we currently have).

Best version I've seen of this was from Sergey Levine:

https://arxiv.org/abs/1805.00909

Reinforcement Learning and Control as Probabilistic Inference: Tutorial and Review

The framework of reinforcement learning or optimal control provides a mathematical formalization of intelligent decision making that is powerful and broadly applicable. While the general form of the reinforcement learning problem enables effective reasoning about uncertainty, the connection between reinforcement learning and inference in probabilistic models is not immediately obvious. However, such a connection has considerable value when it comes to algorithm design: formalizing a problem as probabilistic inference in principle allows us to bring to bear a wide array of approximate inference tools, extend the model in flexible and powerful ways, and reason about compositionality and partial observability. In this article, we will discuss how a generalization of the reinforcement learning or optimal control problem, which is sometimes termed maximum entropy reinforcement learning, is equivalent to exact probabilistic inference in the case of deterministic dynamics, and variational inference in the case of stochastic dynamics. We will present a detailed derivation of this framework, overview prior work that has drawn on this and related ideas to propose new reinforcement learning and control algorithms, and describe perspectives on future research.

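A rough sketch of the equivalence the abstract describes (my notation, summarizing the paper's setup, not a substitute for its derivation): maximum entropy RL augments the usual return with a policy-entropy bonus, and the control-as-inference view recovers that objective by conditioning on auxiliary "optimality" variables.

```latex
% Maximum entropy RL objective: expected return plus a policy-entropy bonus
J(\pi) = \sum_{t=1}^{T} \mathbb{E}_{(s_t, a_t) \sim \pi}
  \left[ r(s_t, a_t) + \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right]

% Control as inference: introduce binary optimality variables O_t with
p(\mathcal{O}_t = 1 \mid s_t, a_t) \propto \exp\big(r(s_t, a_t)\big)

% Maximizing J(\pi) then corresponds to inferring the trajectory posterior
% p(\tau \mid \mathcal{O}_{1:T}): exact inference when the dynamics are
% deterministic, variational inference when they are stochastic.
```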

@NicoleCRust @complexsystems @cogneurophys @PessoaBrain @SussilloDavid @carlosbrody @Neurograce @neuralreckoning @DrYohanJohn @cian @WiringtheBrain @tdverstynen @neuralengine

For me, this is the key missing piece of theory, because it could help us explain why a system largely focused on homeostasis (animals' bodies and their organs) evolved into a system that can do RL and learn an internal model of the world.

@tyrell_turing @NicoleCRust @complexsystems @cogneurophys @SussilloDavid @carlosbrody @Neurograce @neuralreckoning @DrYohanJohn @cian @WiringtheBrain @tdverstynen @neuralengine

What if animals don't learn "internal models of the world"?? 😮
Different schools of thought in this respect...

@PessoaBrain @tyrell_turing @NicoleCRust @complexsystems @cogneurophys @SussilloDavid @carlosbrody @Neurograce @neuralreckoning @cian @WiringtheBrain @tdverstynen @neuralengine

What about vicarious trial-and-error, then?

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5029271/

I admit there are other interpretations when it comes to non-human animals, but in humans we clearly have all sorts of models of the world.

Vicarious trial and error

When rats come to a decision point, they sometimes pause and look back and forth as if deliberating over the choice; at other times, they proceed as if they have already made their decision. In the 1930s, this pause-and-look behaviour was termed ‘vicarious ...

@DrYohanJohn @PessoaBrain @tyrell_turing @NicoleCRust @complexsystems @cogneurophys @SussilloDavid @carlosbrody @Neurograce @neuralreckoning @cian @WiringtheBrain @tdverstynen @neuralengine

Psychology (and neuroscience) would be a very different field if Tolman had had more influence than Skinner. Tolman was criticized because his rats were too busy thinking!