@root The closest technologies we have to how the human brain works are not LLMs, but some less well-known ones: reinforcement learning algorithms and hyperdimensional computing. If you want to see what HDC is capable of, check out this video:

https://youtu.be/P_WRCyNQ9KY?si=JgAuOJQmsQ6tVIiO
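For a feel of what HDC actually computes, here is a minimal NumPy sketch of its two core operations, binding and bundling, on bipolar hypervectors (the colour/shape record and query are my own illustration, not taken from the video):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def rand_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: elementwise multiply; the result is dissimilar to both inputs."""
    return a * b

def bundle(*hvs):
    """Bundling: elementwise majority; the result is similar to each input."""
    return np.sign(np.sum(hvs, axis=0))

def sim(a, b):
    """Cosine similarity between two hypervectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode a record {colour: red, shape: circle} as a single hypervector
colour, red = rand_hv(), rand_hv()
shape, circle = rand_hv(), rand_hv()
record = bundle(bind(colour, red), bind(shape, circle))

# Query: unbind the 'colour' role and compare against candidate fillers
# (binding is its own inverse for bipolar vectors)
query = bind(record, colour)
print(sim(query, red))     # high
print(sim(query, circle))  # near zero
```

Despite everything being noisy, high-dimensional sums, the correct filler pops out of the query with a large similarity margin; that robustness is the core of the VSA/HRR family.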

#HDC #HyperdimensionalComputing
#VSA #VectorSymbolicArchitecture
#HRR #HolographicReducedRepresentation
#SpikingNeuralNetworks
#AGI #ArtificialGeneralIntelligence
#LLM

Spaun brain activity and decoding for sample tasks


Word cloud of abstracts we've received for #SNUFA #SpikingNeuralNetworks conference 2024. Register (free) by tomorrow afternoon UTC if you want to take part in selecting which abstracts get offered talk slots at the workshop!

https://snufa.net/2024/

#neuroscience

SNUFA 2024

Spiking Neural networks as Universal Function Approximators

SNUFA

We got 50% more submissions this year for the #SNUFA #SpikingNeuralNetworks conference compared to last year: thanks! ❤️

We will shortly send registered participants a survey so you can take part in the approval voting scheme that will decide which abstracts we select as talks.

Register soon if you want to take part!

https://snufa.net/2024/


Submit your abstracts for the #SNUFA #SpikingNeuralNetworks conference by tomorrow! The conference is free, online, and usually has around 700 highly engaged participants. Talks are selected by participant interest.

Please do signal boost this!

https://snufa.net/2024/

#compneuro #neuroscience

  • Extends the HOTS algorithm, adding a homeostatic gain control on neuron activity to improve the learning of spatio-temporal patterns; we prove an analogy with off-the-shelf LIF #SpikingNeuralNetworks

This needs a hand clap! 👏

New preprint on our "collaborative modelling of the brain" (COMOB) project. Over the last two years, a group of us (led by @marcusghosh) have been working together, openly, online, with anyone free to join, on a computational neuroscience research project.

https://www.biorxiv.org/content/10.1101/2024.07.19.604252v1

This was an experiment in a more bottom-up, collaborative way of doing science, rather than the hierarchical PI-led model. So how did we do it?

We started from the tutorial I gave at @CosyneMeeting 2022 on spiking neural networks that included a starter Jupyter notebook that let you train a spiking neural network model on a sound localisation task.

https://neural-reckoning.github.io/cosyne-tutorial-2022/

https://www.youtube.com/watch?v=GTXTQ_sOxak&list=PL09WqqDbQWHGJd7Il3yVxiBts5nRSxvJ4&index=1

Participants were free to use and adapt this to any question they were interested in (we gave some ideas for starting points, but there was no constraint). Participants worked in groups or individually, sharing their work on our repository and joining us for monthly meetings.

The repository was set up to automatically build a website using @mystmarkdown showing the current work in progress of all projects, and (later in the project) the paper as we wrote it. This kept everyone up to date with what was going on.

https://comob-project.github.io/snn-sound-localization/

We started from a simple feedforward network of leaky integrate-and-fire neurons, but others adapted it to include learnable delays, alternative neuron models, biophysically detailed models, incorporated Dale's law, etc.
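As a rough illustration of that shared starting point (this is not the actual COMOB notebook; sizes, weights, and input rates here are arbitrary), a feedforward layer of discrete-time leaky integrate-and-fire neurons fits in a few lines of NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal discrete-time LIF layer: v[t+1] = alpha*v[t] + W @ s_in[t],
# with a spike and hard reset whenever v crosses threshold.
n_in, n_out, T = 100, 30, 200
dt, tau, v_th = 1e-3, 20e-3, 1.0
alpha = np.exp(-dt / tau)                    # membrane decay per step
W = rng.normal(0, 0.3, (n_out, n_in))        # random input weights

in_spikes = rng.random((T, n_in)) < 0.05     # Poisson-like input spikes

v = np.zeros(n_out)
out_spikes = np.zeros((T, n_out), dtype=bool)
for t in range(T):
    v = alpha * v + W @ in_spikes[t]         # leak + synaptic input
    out_spikes[t] = v >= v_th
    v[out_spikes[t]] = 0.0                   # hard reset after a spike

print(out_spikes.sum(), "output spikes in", T, "steps")
```

Swapping in learnable delays, other neuron models, or Dale's law amounts to changing how `W` and the update line are defined, which is roughly how the project's variants diverged from the common notebook.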

We found some interesting results, including that shorter time constants improved performance (consistent with what we see in the auditory system). Surprisingly, the network seemed to be using an "equalisation cancellation" strategy rather than the expected coincidence detection.

Ultimately, our scientific results were not incredibly strong, but we think this was a valuable experiment for a number of reasons. Firstly, it shows that there are other ways of doing science. Secondly, many people got to engage in a research experience they otherwise wouldn't. Several participants have been motivated to continue their work beyond this project. It also proved useful for generating teaching material, and a number of MSc projects were based on it.

With that said, we learned some lessons about how to do this better, and yes, we will be doing this again (call for participation in September/October hopefully). The main challenge will be to keep the project more focussed without making it top down / hierarchical.

We believe this is possible, and we are inspired by the recent success of the Busy Beaver challenge, a bottom-up project of amateur mathematicians that found a proof of a 40-year-old conjecture.

https://www.quantamagazine.org/amateur-mathematicians-find-fifth-busy-beaver-turing-machine-20240702/

We will be calling for proposals for the next project, engaging in an open discussion with all participants to refine the ideas before starting, and then inviting the proposer of the most popular project to act as a 'project lead' keeping it focussed without being hierarchical.

If you're interested in being involved in that, please join our (currently fairly quiet) new discord server, or follow me or @marcusghosh for announcements.

https://discord.gg/kUzh5MHjVE

I'm excited for a future where scientists work more collaboratively, and where everyone can participate. Diversity will lead to exciting new ideas and progress. Computational science has huge potential here, something we're also pursuing at @neuromatch.

Let's make it happen!

#neuroscience #computationalscience #computationalneuroscience #compneuro #science #metascience #SpikingNeuralNetworks #auditory

Could we decide whether a simulated spiking neural network uses spike timing or not, given that we have full access to the state of the network and can simulate perturbations? Ideas for how we could decide? Would everyone agree? #neuroscience #SpikingNeuralNetworks #computationalneuroscience #compneuro
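One concrete (and surely debatable) perturbation test, as a sketch: jitter input spike times while preserving rates and see whether the output changes. The toy below is my own construction, not from the thread; a short-time-constant LIF coincidence detector keeps firing under synchronous input but collapses under a few milliseconds of jitter, while any rate-based readout would see identical inputs:

```python
import numpy as np

rng = np.random.default_rng(3)

def lif_spike_count(times_a, times_b, tau=2.0, w=0.6, v_th=1.0, dt=0.1, T=200.0):
    """Output spike count of one LIF neuron driven by two input spike trains (ms)."""
    n = int(T / dt)
    drive = np.zeros(n)
    for ts in (times_a, times_b):
        idx = (np.asarray(ts) / dt).astype(int)
        idx = idx[(idx >= 0) & (idx < n)]    # drop spikes jittered out of the window
        np.add.at(drive, idx, w)             # accumulate synaptic input per time bin
    v, count = 0.0, 0
    alpha = np.exp(-dt / tau)                # fast membrane decay: coincidence detector
    for t in range(n):
        v = alpha * v + drive[t]
        if v >= v_th:                        # one input alone (w=0.6) cannot reach v_th=1.0
            count += 1
            v = 0.0
    return count

base = np.sort(rng.uniform(0, 200, 40))      # shared event times (ms)
sync = lif_spike_count(base, base)                            # coincident inputs
jit = lif_spike_count(base, base + rng.normal(0, 5, 40))      # 5 ms jitter, same rate

print("synchronous:", sync, " jittered:", jit)
```

A timing-blind network would give the same output in both conditions, so a large gap between the two counts is at least suggestive evidence for a timing code; whether everyone would accept that as decisive is exactly the question.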

In 2000, Nicolas Brunel presented a framework for studying sparsely connected #SpikingNeuralNetworks (#SNN) with random connectivity & varied excitation-inhibition balance. The model, characterized by high sparseness & low firing rates, captures diverse neural dynamics such as synchronized regular and asynchronous irregular activity and global oscillations. Here is a brief summary of these concepts & a #PythonTutorial using the #NESTsimulator.

🌍 https://www.fabriziomusacchio.com/blog/2024-07-21-brunel_network/
#CompNeuro #Neuroscience

Brunel network: A comprehensive framework for studying neural network dynamics

In his work from 2000, Nicolas Brunel introduced a comprehensive framework for studying the dynamics of sparsely connected networks of spiking neurons with random connectivity and varying balance between excitation and inhibition. The network is characterized by sparse connectivity and low firing rates, and it reproduces a wide range of neural dynamics, including synchronized regular and asynchronous irregular activity as well as global oscillations. In this post, we summarize the essential concepts of that network and replicate the main results using the NEST simulator.

Fabrizio Musacchio
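The core of the Brunel dynamics can also be sketched without NEST. Below is a scaled-down NumPy version of a Brunel-style network; the population sizes, the compensating synaptic weight, and the single-step synaptic delay are my simplifications, not values from the post, so treat it as a toy rather than a replication:

```python
import numpy as np

rng = np.random.default_rng(2)

# Scaled-down Brunel-style network: sparse random LIF network with inhibition
# g times stronger than excitation, plus external Poisson drive.
NE, NI = 800, 200                       # excitatory / inhibitory populations (small)
N = NE + NI
eps = 0.1                               # connection probability
CE, CI = int(eps * NE), int(eps * NI)   # inputs per neuron
J, g = 1.0, 5.0                         # weight (mV), scaled up for the small net;
                                        # g > 4: inhibition-dominated regime
tau, theta, V_r = 20.0, 20.0, 10.0      # membrane tau (ms), threshold, reset (mV)
dt, t_ref, T = 0.1, 2.0, 1000           # step (ms), refractory (ms), steps (100 ms)
eta = 2.0                               # external drive relative to threshold rate
nu_ext = eta * theta / (J * CE * tau)   # spikes/ms per external synapse

# Sparse random connectivity: each neuron gets CE excitatory and CI inhibitory inputs
W = np.zeros((N, N))
for i in range(N):
    W[i, rng.choice(NE, CE, replace=False)] = J
    W[i, NE + rng.choice(NI, CI, replace=False)] = -g * J

V = np.zeros(N)
ref = np.zeros(N)                       # remaining refractory time per neuron
spikes = np.zeros(N, dtype=bool)        # previous-step spikes (one-step delay)
total_spikes = 0
for t in range(T):
    ext = rng.poisson(nu_ext * CE * dt, N) * J   # external Poisson input
    V += dt / tau * (-V) + W @ spikes + ext
    V[ref > 0] = V_r                    # clamp during refractory period
    ref = np.maximum(ref - dt, 0.0)
    spikes = V >= theta
    V[spikes] = V_r
    ref[spikes] = t_ref
    total_spikes += spikes.sum()

mean_rate = total_spikes / (N * T * dt * 1e-3)   # Hz
print(f"mean firing rate ~ {mean_rate:.1f} Hz")
```

Sweeping `g` and `eta` here is the toy analogue of moving through Brunel's phase diagram between synchronous regular, asynchronous irregular, and oscillatory regimes; for quantitative results, the NEST implementation in the linked post is the right tool.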

SPIKING NEURAL NETWORKS!

If you love them, join us at SNUFA24. Free, online workshop, Nov 5-6 (2-6pm CET). Usually ~700 participants.

Invited speakers: Chiara Bartolozzi, David Kappel, Anna Levina, Christian Machens

Posters + 8 contributed talks selected by participant vote.

Abstract submission is quick and easy (300 word max), and now open until the deadline Sept 27.

Registration is free, but mandatory.

Hope to see you there!

https://snufa.net/2024/

#SpikingNeuralNetworks #neuroscience #computationalneuroscience #neuromorphic #neuromorphiccomputing #Neuromorphicengineering


The #NEST #simulator is a powerful software for simulating large-scale #SpikingNeuralNetworks (#SNN). I’ve composed an introductory #tutorial showing the main commands for getting started. It's applied to examples with single neurons to reduce complexity. Feel free to share:

🌍 https://www.fabriziomusacchio.com/blog/2024-06-16-nest_single_neuron_example/

#CompNeuro #ComputationalNeuroscience #Python #PythonTutorial #NESTSimulator

Step-by-step NEST single neuron simulation

While NEST is designed for large-scale simulations of spiking neural networks, the underlying models are based on approximating the behavior of single neurons and synapses. Before using NEST for network simulations, it is probably helpful to first understand the basic functions of the software tool by modelling and studying the behavior of individual neurons. In this tutorial, you will learn about NEST’s concept of nodes and connections, how to set up a neuron model of your choice, how to change model parameters, which different stimulation paradigms are included in NEST, and how to record and analyze the simulation results.

Fabrizio Musacchio
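The same single-neuron warm-up can be done by hand before reaching for NEST at all. This sketch (parameters are illustrative, not from the tutorial) integrates one LIF neuron under a constant suprathreshold current with forward Euler and checks the first spike time against the analytic solution:

```python
import numpy as np

# One LIF neuron under a constant current step:
# tau_m * dV/dt = (E_L - V) + drive, spike when V >= V_th.
E_L, V_th, tau_m = -70.0, -55.0, 10.0   # rest (mV), threshold (mV), tau (ms)
drive = 20.0                            # effective R*I in mV: suprathreshold
dt, T = 0.01, 50.0                      # step and duration (ms)

V, t, t_spike = E_L, 0.0, None
while t < T:
    V += dt / tau_m * (E_L - V + drive) # forward Euler update
    t += dt
    if V >= V_th:
        t_spike = t
        break

# Analytic first-passage time for a constant step:
# t* = -tau_m * ln(1 - (V_th - E_L) / drive)
t_exact = -tau_m * np.log(1 - (V_th - E_L) / drive)
print(f"Euler: {t_spike:.2f} ms, analytic: {t_exact:.2f} ms")
```

Having this closed-form check in hand is useful when later validating that a simulator's neuron model (and your chosen resolution) behaves as expected.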