Secret Panel HERE ❄ https://tapas.io/episode/1607651

📢🧠🤖 Thrilled to share new work with Taylor Webb & Shanka Mondal:

A Prefrontal Cortex-inspired Architecture for Planning in Large Language Models
https://lnkd.in/g-PhNfMT

LLMs struggle with multi-step reasoning and planning.

We propose a solution inspired by human brain function: planning via recurrent interactions of prefrontal cortex (PFC) subregions.

The modular architecture, built from multiple LLM calls, substantially improves performance on graph traversal & Tower of Hanoi.
1/n 🧵
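For a concrete sense of what "planning via recurrent interactions of specialized modules" can look like with multiple LLM calls, here's a minimal sketch. The module roles (propose / monitor / predict) and prompts are my own illustration, not the paper's exact design, and llm() stands in for any chat-completion call.

```python
# Minimal sketch of a modular multi-LLM-call planner. Module roles and
# prompts are illustrative, not the paper's exact design; `llm` stands in
# for any chat-completion API.

def llm(prompt: str) -> str:
    raise NotImplementedError  # plug in your favorite LLM client here

def propose(state: str, goal: str) -> str:
    return llm(f"State: {state}\nGoal: {goal}\nPropose one valid next action.")

def monitor(state: str, action: str) -> bool:
    # A separate call checks the proposal, mirroring a monitoring subregion.
    return "yes" in llm(f"State: {state}\nAction: {action}\nValid? yes/no.").lower()

def predict(state: str, action: str) -> str:
    # World-model call: simulate the action's effect instead of executing it.
    return llm(f"State: {state}\nAction: {action}\nGive the resulting state.")

def done(state: str, goal: str) -> bool:
    return "yes" in llm(f"State: {state}\nGoal: {goal}\nGoal reached? yes/no.").lower()

def plan(state: str, goal: str, max_steps: int = 20) -> list[str]:
    """Recurrent loop over specialized modules: propose -> check -> simulate."""
    actions = []
    for _ in range(max_steps):
        action = propose(state, goal)
        if not monitor(state, action):
            continue  # a rejected proposal triggers a re-propose, not execution
        state = predict(state, action)
        actions.append(action)
        if done(state, goal):
            break
    return actions
```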

New (updated) #preprint!

Defining #neural #modularity is hard, and there is a lot of history behind it. We used toy ANNs to show that structural and functional definitions are not tightly related, that resource constraints matter, and that we need to start thinking about temporal dynamics.

🧵 with @GabrielBena #neuroscience

https://arxiv.org/abs/2106.02626

Dynamics of specialization in neural modules under resource constraints

It has long been believed that the brain is highly modular both in terms of structure and function, although recent evidence has led some to question the extent of both types of modularity. We used artificial neural networks to test the hypothesis that structural modularity is sufficient to guarantee functional specialization, and find that in general, this doesn't necessarily hold. We then systematically tested which features of the environment and network do lead to the emergence of specialization. We used a simple toy environment, task and network, allowing us precise control, and show that in this setup, several distinct measures of specialization give qualitatively similar results. We further find that in this setup (1) specialization can only emerge in environments where features of that environment are meaningfully separable, (2) specialization preferentially emerges when the network is strongly resource-constrained, and (3) these findings are qualitatively similar across the different variations of network architectures that we tested, but that the quantitative relationships depend on the precise architecture. Finally, we show that functional specialization varies dynamically across time, and demonstrate that these dynamics depend on both the timing and bandwidth of information flow in the network. We conclude that a static notion of specialization, based on structural modularity, is likely too simple a framework for understanding intelligence in situations of real-world complexity, from biology to brain-inspired neuromorphic systems. We propose that thoroughly stress testing candidate definitions of functional modularity in simplified scenarios before extending to more complex data, network models and electrophysiological recordings is likely to be a fruitful approach.

arXiv.org
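To make the structural-vs-functional distinction concrete, here's a minimal toy sketch (my own illustration, not the paper's setup): a network can score near-perfectly on a structural modularity measure while the functional question, whether each module actually specializes, still requires a lesion test on real tasks.

```python
import numpy as np

# Toy illustration: a weight matrix can be almost perfectly *structurally*
# modular while saying nothing yet about *functional* specialization.
# Sizes and sparsity below are made up for illustration.

rng = np.random.default_rng(0)
n = 20                                            # neurons per module
W = np.zeros((2 * n, 2 * n))
W[:n, :n] = rng.normal(scale=0.3, size=(n, n))    # dense within module 1
W[n:, n:] = rng.normal(scale=0.3, size=(n, n))    # dense within module 2
bridge = rng.random((n, n)) < 0.05                # sparse cross-module wiring
W[:n, n:][bridge] = rng.normal(scale=0.3, size=bridge.sum())

def structural_modularity(W: np.ndarray) -> float:
    """Fraction of total absolute weight that stays within a module."""
    within = np.abs(W[:n, :n]).sum() + np.abs(W[n:, n:]).sum()
    return within / np.abs(W).sum()

print(structural_modularity(W))   # ~0.98: structurally modular by construction
# A *functional* measure must go further: run two tasks, lesion each module
# in turn (e.g. zero its rows), and compare the per-task performance drops.
```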

New preprint! A simple way to extend the classical evidence weighting model of multimodal integration to solve a much wider range of naturalistic tasks. Spoiler: it's nonlinearity. Works for SNNs/ANNs. 🧵 with @marcusghosh, Gabriel Béna, Volker Bormuth

https://www.biorxiv.org/content/10.1101/2023.07.24.550311v1

#neuroscience #compneuro #SpikingNeuralNetworks #preprint
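A toy version of the spoiler, with made-up numbers: when the task is to judge whether two modality cues agree, the problem is XOR-like, so no classical weighted sum of the channels separates the classes, while a single multiplicative nonlinearity solves it exactly.

```python
import numpy as np

# Decide whether a visual and an auditory cue point the same way. The classes
# ((+,+),(-,-) vs (+,-),(-,+)) form an XOR, so no threshold on a weighted sum
# w_v*v + w_a*a can separate them; one multiplicative nonlinearity can.

rng = np.random.default_rng(0)
v = rng.choice([-1.0, 1.0], size=1000)        # visual cue direction
a = rng.choice([-1.0, 1.0], size=1000)        # auditory cue direction
agree = v == a                                # ground truth

linear = 0.6 * v + 0.4 * a                    # classical evidence weighting
acc_linear = max(((linear > th) == agree).mean() for th in np.linspace(-2, 2, 81))
acc_nonlin = ((v * a > 0) == agree).mean()    # one nonlinearity: a product

print(acc_linear)   # ~0.75: the best threshold on a weighted sum still fails
print(acc_nonlin)   # 1.0
```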

Super cool overview of the design of #neuromorphic #computing systems:

Bottom-Up and Top-Down Approaches for the Design of Neuromorphic Processing Systems: Tradeoffs and Synergies Between Natural and Artificial Intelligence

By Charlotte Frenkel, David Bol & @giacomoi

https://ieeexplore.ieee.org/document/10144567

#neuroscience #AI #ML #silicon

While Moore’s law has driven exponential computing power expectations, its nearing end calls for new avenues for improving the overall system performance. One of these avenues is the exploration of alternative brain-inspired computing architectures that aim at achieving the flexibility and computational efficiency of biological neural processing systems. Within this context, neuromorphic engineering represents a paradigm shift in computing based on the implementation of spiking neural network architectures in which processing and memory are tightly colocated. In this article, we provide a comprehensive overview of the field, highlighting the different levels of granularity at which this paradigm shift is realized and comparing design approaches that focus on replicating natural intelligence (bottom-up) versus those that aim at solving practical artificial intelligence applications (top-down). First, we present the analog, mixed-signal, and digital circuit design styles, identifying the boundary between processing and memory through time multiplexing, in-memory computation, and novel devices. Then, we highlight the key tradeoffs for each of the bottom-up and top-down design approaches, survey their silicon implementations, and carry out detailed comparative analyses to extract design guidelines. Finally, we identify necessary synergies and missing elements required to achieve a competitive advantage for neuromorphic systems over conventional machine-learning accelerators in edge computing applications and outline the key ingredients for a framework toward neuromorphic intelligence.

Soooo Twitter is doing its thing again and I’m wondering if we’re really moving here for good or not.

@domhenri @MolemanPeter @Andrew_Hardaway @kordinglab

Not that I know of; more than a review, it would have to be a book, really, detailing the inner workings of the C. elegans connectome.

Some recent works I've seen:

"Parallel multimodal circuits control an innate foraging behavior", López-Cruz et al. 2019 from @CoriBargmann 's lab

"Forward and backward locomotion patterns in C. elegans generated by a connectome-based model simulation", Sakamoto et al. 2021
https://www.nature.com/articles/s41598-021-92690-2

"Learning the dynamics of realistic models of C. elegans nervous system with recurrent neural networks", Barbulescu et al. 2023
https://www.nature.com/articles/s41598-022-25421-w

See also pretty much all other papers from Cori Bargmann's lab: https://scholar.google.com/citations?hl=en&user=Wd7XWVYAAAAJ&view_op=list_works&sortby=pubdate

#neuroscience #Celegans #connectomics

Forward and backward locomotion patterns in C. elegans generated by a connectome-based model simulation - Scientific Reports

Caenorhabditis elegans (C. elegans) can produce various motion patterns despite having only 69 motor neurons and 95 muscle cells. Previous studies successfully elucidated the connectome and role of the respective motor neuron classes related to movement. However, these models have not analyzed the distribution of the synaptic and gap connection weights. In this study, we examined whether a motor neuron and muscle network can generate oscillations for both forward and backward movement and analyzed the distribution of the trained synaptic and gap connection weights through a machine learning approach. This paper presents a connectome-based neural network model consisting of motor neurons of classes A, B, D, AS, and muscle, considering both synaptic and gap connections. A supervised learning method called backpropagation through time was adapted to train the connection parameters by feeding teacher data composed of the command neuron input and muscle cell activation. Simulation results confirmed that the motor neuron circuit could generate oscillations with different phase patterns corresponding to forward and backward movement, and could be switched at arbitrary times according to the binary inputs simulating the output of command neurons. Subsequently, we confirmed that the trained synaptic and gap connection weights followed a Boltzmann-type distribution. It should be noted that the proposed model can be trained to reproduce the activity patterns measured for an animal (HRB4 strain). Therefore, the supervised learning approach adopted in this study may allow further analysis of complex activity patterns associated with movements.

Nature
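For intuition, here's a minimal sketch of the training setup the abstract describes, under loose assumptions: a generic RNN instead of the connectome-constrained circuit, and illustrative sizes, targets, and hyperparameters.

```python
import torch
import torch.nn as nn

# Minimal sketch of the setup described above: a small recurrent circuit
# driven by a binary "command neuron" input, trained with backpropagation
# through time (BPTT) to produce phase-lagged oscillatory "muscle" activity.
# Sizes, targets, and hyperparameters are illustrative, not from the paper.

T, n_hidden, n_muscle = 200, 32, 8
rnn = nn.RNN(input_size=1, hidden_size=n_hidden, batch_first=True)
readout = nn.Linear(n_hidden, n_muscle)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-2)

t = torch.linspace(0, 8 * torch.pi, T)
command = torch.ones(1, T, 1)                   # "forward" command held on
phases = torch.linspace(0, torch.pi, n_muscle)  # phase lags along the body
target = torch.sin(t[None, :, None] - phases[None, None, :])

for step in range(500):
    h, _ = rnn(command)                          # unroll the circuit in time
    loss = ((readout(h) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()                              # BPTT happens here
    opt.step()
```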

The big logical gap within systems neuroscience
Neuroscientists make causal statements about brains. “We want to understand how the brain works, how it computes, how to fix it.” The relevant causal problems are high-dimensional, with billions of neurons nonlinearly affecting one another. We can also measure low-dimensional causal effects, e.g. by perturbing some neurons electrically or optically. However, we currently perturb at most a few dimensions out of billions of neurons. When it comes to bigger datasets, we measure correlations with outside stimuli (tuning), and correlations within the brain. We can measure higher-order correlations.

Here is the central gap in computational neuroscience. We demand high-dimensional causal statements, but we can only provide either low-dimensional causal statements or high-dimensional correlational ones. So we need a glue that links what we can actually do with what we desire. There are two glues used by the field. (1) We assume that the brain is simple, consider a small hypothesis space, do hypothesis testing, and then assume that causality works as in our hypothesis. This logic is wrong in high-dimensional hypothesis spaces. (2) Alternatively, we assume that correlation is causation. Neither glue works.

But instead of fixing the central logical gap, we keep obfuscating it with complicated statistics, complicated words, and complicated experiments. But none of these fixes the gap.

Thoughts?
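To see why glue (2) fails, a three-variable toy with made-up numbers: an unobserved common driver makes two causally unconnected neurons look tightly coupled.

```python
import numpy as np

# Two neurons that never influence each other, yet are strongly correlated
# because a third, unobserved neuron drives both. Correlation-as-causation
# would wrongly infer a link between them. All numbers are made up.

rng = np.random.default_rng(1)
driver = rng.normal(size=10_000)              # unobserved common input
a = driver + 0.3 * rng.normal(size=10_000)    # neuron A: driven, plus noise
b = driver + 0.3 * rng.normal(size=10_000)    # neuron B: driven, plus noise

print(np.corrcoef(a, b)[0, 1])                # ~0.92, yet A does not cause B
```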

LLM-lords hide behind their holy trinity shield below, thinking it saves their soul (fig by me and @andrea) 👀

What Might Cognition Be, If Not Computation?

Rather than computers, cognitive systems may be dynamical systems; rather than computation, cognitive processes may be state-space evolution within these very different kinds of system

With a wonderful illustration via "The Governing Problem"

Tim Van Gelder, 1995

https://www.jstor.org/stable/2941061
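Van Gelder's running example is Watt's centrifugal governor, which solves the governing problem by state-space evolution rather than by computing over representations. A crude sketch of simplified governor dynamics (all constants are made up):

```python
import numpy as np

# Crude Euler integration of a simplified centrifugal governor: spinning arms
# rise with engine speed, and rising arms close the throttle. Regulation
# emerges from the coupled dynamics, with no representations or computational
# steps anywhere in the loop. Constants are illustrative.

g, l, friction = 9.8, 1.0, 3.0
theta, dtheta, omega = 0.5, 0.0, 5.0   # arm angle, arm velocity, engine speed
dt = 1e-3
for _ in range(200_000):
    ddtheta = (omega ** 2 * np.sin(theta) * np.cos(theta)  # centrifugal lift
               - (g / l) * np.sin(theta)                   # gravity pulls arms down
               - friction * dtheta)                        # mechanical damping
    domega = 1.0 - 2.0 * np.sin(theta)  # throttle closes as the arms rise
    theta += dt * dtheta
    dtheta += dt * ddtheta
    omega += dt * domega

print(round(theta, 3), round(omega, 3))  # settles where sin(theta) == 0.5
```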