I'm currently developing a new course "Neuroscience for machine learners" that I hope to be able to make publicly available, and I'd love to hear what you think should be in it.

It's aimed at people with a machine learning background to learn a bit about neuroscience. My thinking is that neuroscience and ML have had fruitful links in the past, and may again in the future (although right now they're drifting apart). This course is designed to give students the background they'd need to be able to discover, understand and make use of new opportunities arising from neuroscience (if they do). I'm not trying to tell them only about the bits of neuroscience that we already think are applicable to ML, but to give them enough background to read and understand enough neuroscience to allow them to make new discoveries about what might be applicable to ML. The constraint is that it can't just be an intro to neuro course I think, because I'm not sure how compelling that would be to students with an ML focus. The course is 10 weeks and will have quite a practical focus, with most of the attention on weekly coding based exploratory group work rather than lectures. (Similar to @neuromatch Academy.)

I have thoughts about what should be on this course, but I'd love to know what you all think would be most relevant.

#neuroscience #compneuro #machinelearning #ai

@neuralreckoning @neuromatch In my opinion, the classic way to teach intro neuro is a bit reductionist bc the students spend weeks on molecular and biophysical principles (eg Hodgkin Huxley) before they get to systems neuro, if ever. This could be a good chance to invert that and start out with the kind of neuro that a lot of us love — large scale circuits and computations, based fundamentally on information theory and signal processing. Maybe focus some of the assignments around analyzing big datasets? Challenge with this approach might be to keep it grounded in behavior or biology and not just a data mining exercise

@chrisXrodgers @neuralreckoning @neuromatch

I agree that inverting the structure can work very well - it certainly worked for me in a couple of short seminars I put together for more CS / AI oriented folks. E.g. using my input from the other comment chain, one could start with the higher cognition and move to neuromorphic later.

@chrisXrodgers @neuromatch oh that's an interesting idea! I definitely plan to get them to look at big datasets as this seems like a great way to keep it interesting and there are so many incredible open datasets now. I hadn't thought about inverting the structure but it could work really well for that audience.

@neuralreckoning @neuromatch

This sounds like a very exciting course! Obviously neuro is too broad to cover everything, so I think it is still important to somewhat pick content based on 'likelihood of being important in the future'. If I had to pick two directions which might be helpful for future developments in AI, I believe I would go for (1) mechanisms of higher cognition and (2) architectures at the foundation of neuromorphic computing.

@neuralreckoning @neuromatch

Re (1) - Higher Cognition:

There is still a lot of discussion that what is essentially missing in AI is some sort of higher cognition. This was well covered by this classic workshop paper https://baicsworkshop.github.io/pdf/BAICS_10.pdf -- we also recently wrote a piece for an AAAI workshop with a similar line of reasoning, but discussing it a bit more closely with regard to implementation and some existing ML architectures: https://arxiv.org/abs/2303.13651

@neuralreckoning @neuromatch

Re (2) - Neuromorphic:

I still believe that neuromorphic chips have the potential to significantly disrupt the accelerator market in the future. So I think explaining what is behind spiking neuron models, the gradient descent problem in spiking nets, etc. is relevant. You obviously are an expert in all these things!
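For ML readers, the gradient descent problem mentioned above can be shown in a few lines. A minimal sketch (function names, parameters, and values are illustrative, not taken from any particular chip or library): a leaky integrate-and-fire neuron whose hard spike threshold has zero derivative almost everywhere, which is exactly what blocks plain backprop, plus a fast-sigmoid surrogate derivative of the kind used in surrogate-gradient training.

```python
def lif_step(v, i_in, tau=20.0, v_th=1.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    The spike is a hard threshold, so its true derivative is zero
    almost everywhere: gradient descent gets no signal through it.
    """
    v = v + (dt / tau) * (-v + i_in)   # leaky integration toward the input
    spike = 1.0 if v >= v_th else 0.0  # non-differentiable threshold
    v = v * (1.0 - spike)              # reset membrane after a spike
    return v, spike

def surrogate_grad(v, v_th=1.0, beta=5.0):
    """Fast-sigmoid surrogate: a smooth stand-in for the spike derivative,
    used only in the backward pass so that training can proceed."""
    return 1.0 / (1.0 + beta * abs(v - v_th)) ** 2

# Constant suprathreshold drive produces a regular spike train.
v, spikes = 0.0, []
for _ in range(100):
    v, s = lif_step(v, 1.5)
    spikes.append(s)
```

The usual trick is to keep the hard threshold in the forward pass but substitute something like `surrogate_grad` in the backward pass; implementations differ mainly in the shape of the surrogate.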

@neuralreckoning @neuromatch

As a side note, if it's mostly computing scientists taking the class, I do think it's worth mentioning that even folks like von Neumann were already very interested in the brain (https://en.wikipedia.org/wiki/The_Computer_and_the_Brain) before we ever thought about simulating large-scale neural networks on computers.

Let me know if you want to discuss anything in more detail. Would be super happy to contribute if you pick up any of those directions.


@achterbrain @neuromatch I'll have to read that! Thanks.

@neuralreckoning @achterbrain @neuromatch
Also, the paper where he lays out what's now called the von Neumann architecture cites only one source: McCulloch and Pitts' "A logical calculus of the ideas immanent in nervous activity." Most of the theory for the architecture is explicitly neuro-inspired.

https://web.mit.edu/STS.035/www/PDFs/edvac.pdf

@axoaxonic @neuralreckoning @achterbrain @neuromatch Highly recommend Piccinini’s papers on this. He carefully traces the history and implications!
@dbarack I definitely will. I got a lot out of his Physical Computation book and wanted to read more of his writing anyways
@axoaxonic @dbarack I did not know about Piccinini's book, thanks for the pointer! Will try to tackle that soon!

@achterbrain @neuralreckoning @neuromatch Though he also said that trying to understand the brain using the techniques of neurology was like trying to understand the ENIAC computer "with no instrument… smaller than about 2 feet across its critical organs, with no methods of intervention more delicate than playing with a fire hose…"

http://www.ehudlamm.com/outsiders.pdf

@achterbrain @neuromatch oh there'll definitely be a large part on this. Given it's me I could hardly not mention this! 😂
@achterbrain @neuromatch mix of likely to be important and how useful it is to unlock understanding of the things we haven't anticipated.

@neuralreckoning @neuromatch

A couple that I discuss in my #PhilosophyOfAI seminar may be relevant:

Lake, Brenden M., Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. 2017. “Building Machines That Learn and Think like People.” Behavioral and Brain Sciences 40.

Hassabis, Demis, Dharshan Kumaran, Christopher Summerfield, and Matthew Botvinick. 2017. “Neuroscience-Inspired Artificial Intelligence.” Neuron 95 (2): 245–58.

Srinivasan et al. 2018. "Deep(er) Learning."

@neuralreckoning @neuromatch having taught a computational neuroscience course to CS undergrads, my view is that they can learn a lot just by comparing how brains compute vs. human-made machines. So I think 1-2 lectures on "brains vs computers" could be quite horizon-expanding for them.

E.g. brains don't use a von Neumann architecture; they use spikes, analog computing, etc. Even DNNs don't capture a lot of the brain's compute mechanisms.

@cian @neuromatch yes, I like that! Only thing is that we don't know how brains compute and I don't want to pollute their minds too much with our current (almost surely wrong) ideas of how it works. But I think there's plenty that can be done along the lines you're suggesting despite that!
@neuralreckoning @neuromatch haha fair enough. Nevertheless if it's just an example of "another type of computer" then it's not too much harm if ideas are ultimately wrong.

@neuralreckoning @neuromatch Pre-pandemic we had this course here: https://www.inf.ed.ac.uk/teaching/courses/nip/

It was originally developed by Chris Williams and Mark van Rossum. The idea was to cover material at the intersection between ML and neuro, and there’s plenty to choose from. The selection of topics evolved every year, for example deep nets came in as an addition to HMAX. I’d like to teach that again at some point.


@neuromatch @neuralreckoning 'How does learning work in the brain?'
@FMarquardtGroup @neuromatch if only we knew!
@neuralreckoning @neuromatch Good to know that the experts don't know :) I myself only came as far as learning about 'neurons that fire together wire together', which I liked as a simple local learning rule. Unfortunately, I didn't find that much literature on large NNs being trained with that rule -- or maybe this was given up some time ago? It would be great if a rule as simple as that, plus maybe some global reward signal, could explain everything (as a theor. physicist, I prefer simple answers)...
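For concreteness, "fire together, wire together" really is a one-line local update, and the "global reward signal" idea is a one-line extension of it. A minimal sketch (function names, the learning rate, and the reward gating are illustrative, not a specific published model):

```python
def hebbian_update(w, pre, post, lr=0.01):
    """Plain Hebbian rule: a weight grows whenever pre- and postsynaptic
    activity coincide. Purely local: no global error signal needed."""
    return [wi + lr * post * xi for wi, xi in zip(w, pre)]

def reward_hebbian_update(w, pre, post, reward, lr=0.01):
    """Three-factor variant: the same local term gated by a scalar
    global reward, the kind of combination mentioned above."""
    return [wi + lr * reward * post * xi for wi, xi in zip(w, pre)]

# One step: only the synapse with active presynaptic input changes.
w = [0.1, 0.2]
pre = [1.0, 0.0]
post = sum(wi * xi for wi, xi in zip(w, pre))  # postsynaptic activity
w_new = hebbian_update(w, pre, post)           # first weight grows, second stays
```

One reason the plain rule alone doesn't scale, as the literature notes, is instability: weights grow without bound, which is why variants like Oja's rule add a decay term.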
@FMarquardtGroup @neuromatch that was the hope! Unfortunately it seems like it's not going to be such a simple answer in the end. Sigh.