In the brain, neural populations are observed to operate in a variety of dynamical regimes depending on, e.g., behavioral context (UP/DOWN states, asynchronous, gamma-oscillatory, etc.).

This is modeled in neural nets (spiking and rate) by modulating control parameters, such as a gain term on the synaptic weights. These parameters are thought to capture the effects of neuromodulators, and to bring the network via bifurcations into specific regimes (chaotic, oscillatory, etc.).
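
(To make the gain knob concrete, here's a rough sketch of the classic setup in a random rate network: same weights, but pushing the gain g past ~1 flips the network from decaying to a quiescent fixed point into irregular/chaotic activity. Network size, timescales, and gain values here are purely illustrative.)

```python
# Toy rate network: dx/dt = -x + g * W @ tanh(x)
# The gain g is the "control parameter" -- below ~1 activity decays to a
# quiescent fixed point, above ~1 it enters an irregular/chaotic regime.
import numpy as np

rng = np.random.default_rng(0)
N, dt, T = 200, 0.1, 2000
W = rng.normal(0, 1 / np.sqrt(N), (N, N))   # random recurrent weights

def simulate(g):
    x = rng.normal(0, 0.5, N)
    traj = np.empty((T, N))
    for t in range(T):
        x += dt * (-x + g * W @ np.tanh(x))
        traj[t] = x
    return traj

for g in (0.8, 1.5):  # same weights, different gain
    traj = simulate(g)
    print(f"g={g}: late-time activity std = {traj[-500:].std():.3f}")
```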

In ANNs trained to do complex tasks, scaling the weights like this would tank performance and completely change the fixed points and other attractors learned by the network.

What gives?

@dlevenstein My guess is that it has to do with the extra complexity inherent in the interactions between the arousal system and the unique circuit topologies distributed around the brain. Makes a huuuuge difference whether neuromodulatory receptors are on pre- vs. post-synaptic neurons, on excitatory vs. inhibitory cells, and whether the receptors ultimately increase or decrease excitability.

@macshine was hoping you might have something to say about this, Mac 😁

I think you’re at least partially right - neuromodulatory systems don’t just turn simple knobs of a local circuit; they turn specific combinations of knobs, and there could be something in that complexity that can modulate dynamics without blowing up the representational repertoire (i.e. attractor structure and input-output relationships). But that raises the question of what the special modulatory sauce is, and how it’s learned+maintained (evo, devo, post-devo learning?). It’s not obvious to me why changing a combination of control parameters, or making those control parameters more complex, would maintain circuit functionality across different bifurcations, except in networks that solve very simple tasks and thus have a very simple representational repertoire...

I also wonder if there’s something critical about the representational repertoire of local circuits, and how it’s learned, that results in dynamics-robust representations (which we don’t get in current approaches to training RNNs, so it isn’t trivial).

@dlevenstein it might be too gross of an oversimplification, but I like to think of all organisms/brains as being forced to operate in a regime that allows for dynamic reconfiguration as a function of need. If they weren't, the organism/animal wouldn't be able to deal with the world around them, which is highly dynamic + context dependent. So in other words, the flexibility is baked in from day dot. RNNs aren't currently designed with this constraint, so they become fragile/flimsy/etc.
@macshine so would you say the brain actually does do the “hacky solution” in my response to @lili above? (Really wishing for a quote toot here 😅) It’s a matter of including control parameter variation during training, so the network is forced to learn dynamics-invariant solutions?
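
(For concreteness, the “hacky solution” I have in mind is something like this: sample the gain on each training batch so the network can’t rely on any single dynamical regime. Rough sketch only — the task generator, architecture, and gain range are placeholders.)

```python
# Sketch of the "hacky solution": jitter a gain control parameter during
# training so the RNN must find solutions that survive gain changes.
import torch
import torch.nn as nn

class GainRNN(nn.Module):
    def __init__(self, n_in=3, n_hidden=128, n_out=1):
        super().__init__()
        self.W_in = nn.Linear(n_in, n_hidden)
        self.W_rec = nn.Linear(n_hidden, n_hidden, bias=False)
        self.W_out = nn.Linear(n_hidden, n_out)

    def forward(self, inputs, gain=1.0):
        # gain scales the recurrent weights, standing in for neuromodulatory tone
        h = torch.zeros(inputs.shape[0], self.W_rec.in_features)
        outputs = []
        for t in range(inputs.shape[1]):
            h = torch.tanh(self.W_in(inputs[:, t]) + gain * self.W_rec(h))
            outputs.append(self.W_out(h))
        return torch.stack(outputs, dim=1)

model = GainRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(1000):
    x, y = sample_task_batch()                        # hypothetical task generator
    gain = float(torch.empty(1).uniform_(0.7, 1.3))   # vary the control parameter
    loss = loss_fn(model(x, gain=gain), y)
    opt.zero_grad(); loss.backward(); opt.step()
```
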
@dlevenstein @lili it's an interesting hypothesis, that's for sure. in a way, it's kind of what brains have had to deal with since their inception. for this reason, i definitely think that the idea of a bunch of noise in the long-term "training" of biological brains is super important for figuring out some of these weirder aspects of how it works (see also: dreams; hallucinations; psychedelics; people who like reality tv; etc)