I'm currently developing a new course "Neuroscience for machine learners" that I hope to be able to make publicly available, and I'd love to hear what you think should be in it.

It's aimed at people with a machine learning background who want to learn a bit about neuroscience. My thinking is that neuroscience and ML have had fruitful links in the past, and may again in the future (although right now they're drifting apart). This course is designed to give students the background they'd need to discover, understand and make use of new opportunities arising from neuroscience (if they do). I'm not trying to tell them only about the bits of neuroscience we already think are applicable to ML, but to give them enough background to read and understand neuroscience well enough to make new discoveries about what might be applicable to ML. The constraint, I think, is that it can't just be an intro-to-neuro course, because I'm not sure how compelling that would be to students with an ML focus. The course is 10 weeks and will have quite a practical focus, with most of the attention on weekly coding-based exploratory group work rather than lectures. (Similar to @neuromatch Academy.)

I have thoughts about what should be on this course, but I'd love to know what you all think would be most relevant.

#neuroscience #compneuro #machinelearning #ai

@neuralreckoning @neuromatch

This sounds like a very exciting course! Obviously neuro is too broad to cover everything, so I think it is still important to pick content somewhat based on 'likelihood of being important in the future'. If I had to pick two directions that might be helpful for future developments in AI, I would go for (1) mechanisms of higher cognition and (2) architectures at the foundation of neuromorphic computing.

@neuralreckoning @neuromatch

Re (1) - Higher Cognition:

There is still a lot of discussion suggesting that what's essentially missing in AI is some sort of higher cognition. This was well covered by this classic workshop paper https://baicsworkshop.github.io/pdf/BAICS_10.pdf -- we also wrote a piece recently for an AAAI workshop that follows a similar line of reasoning but discusses it more closely with regard to implementation and some existing ML architectures https://arxiv.org/abs/2303.13651

@neuralreckoning @neuromatch

Re (2) - Neuromorphic:

I still believe that neuromorphic chips have the potential to significantly disrupt the accelerator market in the future. So I think explaining what's behind spiking neuron models, the gradient descent problem in spiking nets, etc. is relevant. You obviously are an expert in all these things!
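To make the connection concrete for ML folks: the core issue is that a spiking neuron's output is a hard threshold, whose derivative is zero almost everywhere, so plain backprop fails; surrogate gradient methods replace that derivative with a smooth pseudo-derivative. Here's a minimal sketch of a leaky integrate-and-fire (LIF) neuron plus a fast-sigmoid-style surrogate derivative. All function names and parameter values here are illustrative, not from any particular framework:

```python
import numpy as np

def lif_simulate(input_current, tau=10.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron.

    Membrane dynamics: dv/dt = -v/tau + I(t). When v crosses the
    threshold v_th, a spike is emitted and v is reset to v_reset.
    Parameter values are illustrative.
    """
    v = 0.0
    vs, spikes = [], []
    for current in input_current:
        v += dt * (-v / tau + current)  # Euler step of the leaky dynamics
        if v >= v_th:
            spikes.append(1)
            v = v_reset                 # hard reset after a spike
        else:
            spikes.append(0)
        vs.append(v)
    return np.array(vs), np.array(spikes)

def surrogate_grad(v, v_th=1.0, beta=10.0):
    """Pseudo-derivative of the spike nonlinearity.

    The true derivative of the step function is zero almost everywhere,
    so surrogate-gradient training substitutes a smooth bump peaked at
    the threshold (here, the derivative of a fast sigmoid).
    """
    return 1.0 / (beta * np.abs(v - v_th) + 1.0) ** 2

# Constant supra-threshold drive produces regular spiking.
vs, spikes = lif_simulate(np.full(100, 0.2))
```

In a spiking network trained with backprop, the forward pass uses the hard threshold while the backward pass uses `surrogate_grad` in its place; that single substitution is what makes gradient descent work at all in these models.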

@neuralreckoning @neuromatch

As a side note, if it's mostly computing scientists taking the class, I do think it's worth mentioning that even folks like von Neumann were already very interested in the brain (https://en.wikipedia.org/wiki/The_Computer_and_the_Brain) before we ever thought about simulating large-scale neural networks on computers.

Let me know if you want to discuss anything in more detail. Would be super happy to contribute if you pick up any of those directions.


@achterbrain @neuromatch I'll have to read that! Thanks.

@neuralreckoning @achterbrain @neuromatch
Also, the paper where he lays out what's now called the von Neumann architecture cites only one source: McCulloch and Pitts' "A logical calculus of the ideas immanent in nervous activity." Most of the theory for the architecture is explicitly neuro-inspired.

https://web.mit.edu/STS.035/www/PDFs/edvac.pdf

@axoaxonic @neuralreckoning @achterbrain @neuromatch Highly recommend Piccinini’s papers on this. He carefully traces the history and implications!
@dbarack I definitely will. I got a lot out of his Physical Computation book and wanted to read more of his writing anyway
@axoaxonic @dbarack I did not know about Piccinini's book, thanks for the pointer! Will try to tackle that soon!