Let me introduce myself! My name is Ilenna Jones. I am a computational neuroscientist with a keen interest in how dendrites contribute to a single neuron's ability to compute functions and learn tasks. I build biophysical models of neurons in #pytorch and use #deeplearning principles to investigate how #neuron models can learn and compute given their biologically realistic constraints.

I'm on the postdoc job market right now! Feel free to connect if you're looking for someone like me!

I'm very interested in helping other students find resources and guidance as they consider science and #neuroscience in general. Feel free to connect if you're looking for advice/perspectives!

#computationalneuroscience #theoreticalneuroscience #neuroscience #neuralcomputation #learning #modeling #BlackInNeuro #FirstGen #Questbridge

ilenna.com

@ilennaj Is it bad that when I read this it started out to the melody of "The Humpty Dance"?

Great to have you here! We have a ton of neuroscience and computational neuroscience people here. I'm more on the computational side myself, but I have a toe in that arena.

Feel free to reach out if you have questions or just want to chat.

@ilennaj Interesting info. Thanks!

@ilennaj And you are the author of this most spectacular arXiv paper: "Can single neurons solve MNIST? the computational power of biological dendritic trees" Jones & Kording 2020 https://arxiv.org/abs/2009.01269 Hats off to you & @kordinglab ! And welcome.

#neuroscience #dendrites #MNIST

PS: subsequently published as "Might a Single Neuron Solve Interesting Machine Learning Problems Through Successive Computations on Its Dendritic Tree?" Jones & Kording 2021 https://direct.mit.edu/neco/article/33/6/1554/100576/Might-a-Single-Neuron-Solve-Interesting-Machine

Can Single Neurons Solve MNIST? The Computational Power of Biological Dendritic Trees

Physiological experiments have highlighted how the dendrites of biological neurons can nonlinearly process distributed synaptic inputs. This is in stark contrast to units in artificial neural networks that are generally linear apart from an output nonlinearity. If dendritic trees can be nonlinear, biological neurons may have far more computational power than their artificial counterparts. Here we use a simple model where the dendrite is implemented as a sequence of thresholded linear units. We find that such dendrites can readily solve machine learning problems, such as MNIST or CIFAR-10, and that they benefit from having the same input onto several branches of the dendritic tree. This dendrite model is a special case of sparse network. This work suggests that popular neuron models may severely underestimate the computational power enabled by the biological fact of nonlinear dendrites and multiple synapses per pair of neurons. The next generation of artificial neural networks may significantly benefit from these biologically inspired dendritic architectures.
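The abstract's "sequence of thresholded linear units" can be sketched as a tiny binary dendritic tree in plain Python. This is a toy illustration of the general idea only, not the paper's actual implementation; the function names, tree size, and weight initialization here are my own choices:

```python
import random

def relu(x):
    # Thresholded linear unit: a simple stand-in for a dendritic nonlinearity
    return x if x > 0.0 else 0.0

def dendrite_output(inputs, weights, biases):
    """One forward pass through a binary dendritic tree.

    Each pair of sibling activations is combined by a thresholded linear
    unit, layer by layer, until one somatic value remains. weights[l][i]
    holds the (w_left, w_right) pair for unit i in layer l. Because each
    unit sees only two inputs, the tree is a special case of a sparse
    feedforward network, as the abstract notes.
    """
    layer = list(inputs)
    for layer_w, layer_b in zip(weights, biases):
        layer = [
            relu(w[0] * layer[2 * i] + w[1] * layer[2 * i + 1] + b)
            for i, (w, b) in enumerate(zip(layer_w, layer_b))
        ]
    return layer[0]

# Toy tree: 4 synaptic inputs -> 2 branch units -> 1 somatic unit
random.seed(0)
weights = [
    [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(2)],
    [(random.uniform(-1, 1), random.uniform(-1, 1))],
]
biases = [[0.0, 0.0], [0.0]]
out = dendrite_output([0.5, -0.2, 0.9, 0.1], weights, biases)
```

Repeating the same input across several branches (as the paper finds helpful) would amount to tiling copies of the input vector across the leaves.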


@albertcardona @ilennaj @kordinglab

Thanks for highlighting it Albert!

Question for Ilenna & Konrad:

I just became aware of the research by the Poirazi Lab in Crete, which you cite and which also works on biological & ML models of dendrites.
Is there any primer on how your respective lines of research differ, compare, etc.?

@ilennaj We are mostly an experimental lab, but dabble in computational models of circuits (e.g., the 3K-neuron model in Fig 4 here: http://lab.debivort.org/pdf/neural-correlates-of-individual-odor-preference-in-drosophila.pdf).
@ilennaj Synapsed with you. One more dendrite to study with. It's also a really interesting topic; I've always wanted to know how neuronal sub-computational units function.