Looks like “a [completely uninterpretable] deep neural network [with substantial unreported hyperparameter and architecture tuning] reproduced [some aspects of] our brain data”

has replaced

“a simple computational model capturing our proposed mechanism reproduced our brain data”

as the new figure 7 strategy for high-profile neuroscience papers

@babagley Has this approach ever produced generalizable insights? It always seemed a bit like a cottage industry. Though admittedly I stopped paying attention to those sorts of claims years ago, after being rather underwhelmed by the hand-wavy justifications given for their asserted value.
@babagley which approach are you asking about, classic models or deep neural networks, or both?
@cian My bad. I mean drawing what are claimed to be rich analogies between deep learning (rate-coded networks, especially) and neurophysiology. I'd expect the upper bound on the usefulness of deep learning analogies can't come close to the corresponding bound for mechanistic models.
@babagley agreed on that… with the caveat that none of the mechanistic models can solve actual functional tasks at anything close to the performance of DNNs. So for questions where “aspects of a neural circuit important for solving a task” are central, maybe the DNNs’ shortcomings on physiological realism get outweighed. Maybe…