In his new paper https://www.cs.toronto.edu/~hinton/FFA13.pdf, G. Hinton points out how current #neuralmodels based on #backpropagation should be replaced by mechanisms that don't contradict biological evidence and that "may be superior to backpropagation as a model of learning in cortex and as a way of making use of very low-power analog hardware without resorting to #reinforcementlearning".
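For readers unfamiliar with the proposal: the paper's core idea is to train each layer locally, with two forward passes instead of a backward pass, pushing a per-layer "goodness" (sum of squared activities) up on positive data and down on negative data. Below is a hypothetical minimal sketch of that layer-local update, not the paper's implementation; the layer size, learning rate, and the Gaussian stand-ins for positive/negative data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative single ReLU layer; sizes and constants are assumptions.
W = rng.normal(0.0, 0.1, size=(20, 10))
lr = 0.01

def forward(x, W):
    return np.maximum(0.0, x @ W)      # ReLU activities of one layer

def goodness(h):
    return (h ** 2).sum(axis=-1)       # per-example sum of squared activities

def local_step(x, W, positive):
    # Gradient of mean goodness w.r.t. W for a ReLU layer is 2 x^T h / n
    # (the ReLU mask is implicit: h is already zero where units are inactive).
    # No error signal is propagated from any other layer.
    h = forward(x, W)
    grad = 2.0 * x.T @ h / len(x)
    return W + (lr if positive else -lr) * grad

pos = rng.normal(+0.5, 1.0, size=(64, 20))   # stand-in for "real" data
neg = rng.normal(-0.5, 1.0, size=(64, 20))   # stand-in for "negative" data

for _ in range(30):
    W = local_step(pos, W, positive=True)    # raise goodness on positives
    W = local_step(neg, W, positive=False)   # lower goodness on negatives

gap = goodness(forward(pos, W)).mean() - goodness(forward(neg, W)).mean()
print(gap)
```

After training, the layer's goodness separates the two kinds of data, which is the signal a forward-forward-style scheme would use locally at every layer.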
In "Cognitive Design for Artificial Minds", 2021 (https://www.amazon.com/dp/1138207950/), I argued the same! #AI #CogSci converge again!

@antoniolieto

Yup! I've agreed with that since 1987!

@strangetruther Indeed, it is not a new position at all. But it is still very much a minority one in the current scientific literature in AI.
@antoniolieto
Both very true! And the 2nd sentence - particularly strange that it should be true!
But the reasons:
"Perceptrons" etc. are very nice as far as they go, but their maths was already understood before that of other models, and now so much has been invested in them that they've taken the oxygen from other approaches... and anyway, maths tends to dominate!
With new ideas, people get asked to show the maths too early.