In the brain, neural populations exhibit a variety of dynamical regimes depending on, e.g., behavioral context: UP/DOWN states, asynchronous firing, gamma oscillations, and so on.
Network models (both spiking and rate) capture this by modulating control parameters, such as a gain term on the synaptic weights. These parameters are thought to mimic the effects of neuromodulators, driving the network through bifurcations into specific regimes (chaotic, oscillatory, etc.).
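As a toy illustration of this gain mechanism (a sketch I'm adding, not from the source): in the classic random rate network dx/dt = -x + g W tanh(x) with Gaussian weights of variance 1/N, sweeping the gain g through 1 takes the network from a quiescent fixed point to sustained chaotic activity.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
# Random recurrent weights with variance 1/N, so the spectral radius of W is ~1
# and the bifurcation sits near gain g = 1.
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))

def simulate(g, steps=2000, dt=0.1):
    """Euler-integrate dx/dt = -x + g * W @ tanh(x); return final per-unit activity norm."""
    x = 0.1 * rng.normal(0.0, 1.0, N)  # small random initial condition
    for _ in range(steps):
        x = x + dt * (-x + g * W @ np.tanh(x))
    return np.linalg.norm(x) / np.sqrt(N)

# Below the bifurcation, activity decays to the trivial fixed point;
# above it, the network sustains large fluctuating (chaotic) activity.
print("g = 0.5:", simulate(0.5))  # near zero
print("g = 1.5:", simulate(1.5))  # order one
```

Note the regime change comes purely from the scalar gain, with the weight matrix held fixed, which is the sense in which a neuromodulator-like knob can switch dynamical regimes.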
But in ANNs trained to perform complex tasks, scaling the weights this way would tank performance and completely reshape the fixed points and other attractors the network has learned.
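Even a one-unit caricature (again a toy sketch of my own, not from the source) shows why: the location of a learned fixed point depends directly on the weight magnitude, so rescaling the weight moves the attractor, and shrinking it enough annihilates the fixed point altogether.

```python
import numpy as np

def fixed_point(gain, w=2.0, x0=0.5, iters=500):
    """Iterate x <- tanh(gain * w * x) to its stable fixed point.
    Think of w as a 'trained' self-weight and gain as a global rescaling."""
    x = x0
    for _ in range(iters):
        x = np.tanh(gain * w * x)
    return x

# The fixed point drifts as soon as the weight is rescaled, and vanishes
# entirely once gain * w drops below 1 (only x = 0 remains).
print("gain 1.0 :", fixed_point(1.0))   # the 'learned' attractor
print("gain 0.75:", fixed_point(0.75))  # attractor has moved
print("gain 0.4 :", fixed_point(0.4))   # attractor gone, collapses to 0
```

A trained network's computation lives in the precise arrangement of such attractors, so a global gain knob that is benign for a random network is destructive here.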
What gives?