my corners of computational neuroscience: neuroAI, transcriptomics, learning theory, vision.
Postdoc @ CSHL with Tony Zador
| Twitter | https://twitter.com/arisbenjamin |
| Website | https://ari-benjamin.com |
Time for a #tootprint to celebrate a publication! I want to share a simple demo of the paper's main concept here: five-lines-of-Python simple.

Back in the '80s, researchers Luisa Mayer and Velma Dobson (among others) found that babies and young children slowly get better at seeing fine detail as they age. This improvement is *linear* in time. (This is visual acuity: the finest spacing of a grating that can be resolved before it appears uniformly gray.) What explains this steady improvement?

One way to interpret this result is with efficient coding, the idea that neurons represent the world optimally despite unresolvable constraints (like noise). To explain an increase in acuity this way, one has to assume that 1) perception is optimal at every age, and 2) neural noise decreases over time (e.g. Kiorpes & Movshon, 1998).

We wanted to examine a different possibility: what if the brain's learning algorithm simply prefers to learn low-frequency information first? To demonstrate this, we created a very simple model system, plain matrix multiplication, and asked what gradient descent would learn first when trained to reconstruct natural images.

Note that the "optimal solution" is trivial (let W be the identity matrix) and does not better represent any aspect of the data.
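In that spirit, here is a minimal sketch of such a demo (my own illustration, a bit longer than five lines so it is self-contained; it is not the paper's actual code). It trains a linear map W by gradient descent to reconstruct synthetic signals with a 1/f amplitude spectrum, a crude stand-in for natural images, and then probes which frequencies W has learned:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # signal length; a 1-D stand-in for image rows

# Synthetic "natural" signals: random phases under a 1/f amplitude spectrum.
freqs = np.fft.rfftfreq(d)
amp = np.where(freqs == 0, 0.0, 1.0 / np.maximum(freqs, 1e-12))

def sample_batch(n):
    phases = rng.uniform(0, 2 * np.pi, size=(n, len(freqs)))
    return np.fft.irfft(amp * np.exp(1j * phases), n=d, axis=1)

# Train W from scratch to reconstruct the data: loss = ||x @ W.T - x||^2.
# The optimal W is just the identity, so any structure in the learning
# trajectory comes from gradient descent itself, not from the solution.
W = np.zeros((d, d))
lr = 0.01
for _ in range(400):
    X = sample_batch(32)
    err = X @ W.T - X
    W -= lr * err.T @ X / len(X)  # gradient of the mean squared error

# Probe W with pure gratings of frequency k: reconstruction gain near 1
# means that frequency has been learned.
t = np.arange(d)

def gain(k):
    x = np.cos(2 * np.pi * k * t / d)
    return np.linalg.norm(W @ x) / np.linalg.norm(x)

print(gain(1), gain(30))  # low-frequency gain is near 1; high-frequency gain is still small
```

Because the 1/f spectrum puts more variance in low frequencies, those modes of W converge first under gradient descent; the high-frequency modes are learned only much later, even though the identity solution fits all frequencies equally well. Stopping training at different step counts plays the role of age in the acuity data.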