🧠 New paper by Deistler et al.: #JAXLEY: differentiable #simulation for large-scale training of detailed #biophysical #models of #NeuralDynamics.

They present a #differentiable #GPU accelerated #simulator that trains #morphologically detailed biophysical #neuron models with #GradientDescent. JAXLEY fits intracellular #voltage and #calcium data, scales to 1000s of compartments, trains biophys. #RNNs on #WorkingMemory tasks & even solves #MNIST.

🌍 https://doi.org/10.1038/s41592-025-02895-w

#Neuroscience #CompNeuro
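To make the core idea concrete, here is a toy sketch of gradient-based fitting of a biophysical parameter to a voltage trace. This is NOT Jaxley's API (Jaxley uses JAX autodiff on morphologically detailed multi-compartment models); it is a minimal pure-Python stand-in with a single passive compartment, invented toy constants, and finite-difference gradients.

```python
# Toy illustration of differentiable-simulation-style fitting: recover a
# leak conductance from a "recorded" voltage trace by gradient descent.
# All parameter values are hypothetical toy choices.

def simulate(g_leak, n_steps=200, dt=0.1, c_m=1.0, e_leak=-70.0, i_inj=2.0):
    """Forward-Euler integration of a single passive compartment."""
    v = -70.0
    trace = []
    for _ in range(n_steps):
        dv = (-g_leak * (v - e_leak) + i_inj) / c_m
        v += dt * dv
        trace.append(v)
    return trace

def loss(g_leak, target):
    """Mean squared error between simulated and target voltage traces."""
    trace = simulate(g_leak)
    return sum((a - b) ** 2 for a, b in zip(trace, target)) / len(target)

target = simulate(0.3)           # synthetic "data" from the true conductance
g, lr, eps = 0.1, 5e-5, 1e-4     # bad initial guess, step size, FD epsilon
losses = []
for _ in range(300):
    losses.append(loss(g, target))
    # central finite-difference gradient (Jaxley would use jax.grad here)
    grad = (loss(g + eps, target) - loss(g - eps, target)) / (2 * eps)
    g -= lr * grad               # gradient-descent update on the parameter

print(round(g, 3))               # recovered conductance, approaching 0.3
```

The same loop scales to thousands of parameters once the gradient comes from autodiff instead of finite differences, which is the point of a differentiable simulator.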

🧠 New preprint by Codol et al. (2025): Brain-like #NeuralDynamics for #behavioral control develop through #ReinforcementLearning. They show that only #RL, not #SupervisedLearning, yields neural activity geometries & dynamics matching monkey #MotorCortex recordings. RL-trained #RNNs operate at the edge of #chaos, reproduce adaptive reorganization under #visuomotor rotation, and require realistic limb #biomechanics to achieve brain-like control.

🌍 https://doi.org/10.1101/2024.10.04.616712

#CompNeuro #Neuroscience

#ITByte: #MachineLearning models that take sequential data as input or produce it as output are called #SequenceModels.

Sequential data includes text streams, video clips, audio clips, time series, and more. Recurrent Neural Networks (#RNNs) and Long Short-Term Memory (#LSTM) networks are popular architectures for sequence models.

https://knowledgezone.co.in/trends/explorer?topic=Sequence-Model
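To illustrate what makes a model a sequence model, here is a minimal one-unit "RNN" sketch in pure Python. The weights are hand-picked toy values, not trained parameters, and the function name is invented for this example.

```python
# Minimal sketch of why sequence models carry state: a one-unit "RNN"
# updates a hidden value at every time step, so earlier inputs influence
# later outputs.
import math

def rnn_scan(sequence, w_in=0.5, w_rec=0.8, h0=0.0):
    """Return the hidden state after reading the whole sequence."""
    h = h0
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h)  # recurrent update
    return h

# The same inputs in a different order give a different final state:
# the model is order-sensitive, unlike a bag-of-words model.
print(rnn_scan([1.0, 0.0, 0.0]))
print(rnn_scan([0.0, 0.0, 1.0]))
```

An LSTM replaces the single tanh update with gated cell-state updates, but the sequential scan over inputs is the same.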

Showing my love for RNNs and functional programming by implementing Mamba2 with Elixir. Wish me luck! #Elixir #LLMs #functionalprogramming #rnns

Learning better with Dale’s Law: A Spectral Perspective - #NeurIPS2023 contribution by Li et al. (2023). It shows how to train brain-like #RNNs with separate excitatory and inhibitory units while matching the performance of standard RNNs:

🌍 https://openreview.net/forum?id=rDiMgZulwi

#RNN #DalesLaw #CompNeuro #Neuroscience
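For readers new to the constraint: Dale's Law means each presynaptic unit is either excitatory (all outgoing weights ≥ 0) or inhibitory (all ≤ 0). A common trick in E/I RNN work is to parameterize the recurrent matrix as unconstrained weights projected through fixed column signs. The sketch below is a hedged illustration of that parameterization, not the paper's specific method; all names and values are invented for the example.

```python
# Dale's-Law constraint sketch: column j of the recurrent weight matrix
# (outgoing weights of presynaptic unit j) must have a single fixed sign.

def dale_weights(w_free, signs):
    """Project unconstrained weights onto the Dale's-Law constraint:
    column j takes the sign of presynaptic unit j."""
    return [[abs(w) * signs[j] for j, w in enumerate(row)] for row in w_free]

w_free = [[0.2, -0.5], [-0.3, 0.1]]  # unconstrained trainable weights
signs = [1, -1]                      # unit 0 excitatory, unit 1 inhibitory
w = dale_weights(w_free, signs)
# Every outgoing weight of unit 0 is now >= 0, and of unit 1 is <= 0.
print(w)
```

Gradient descent runs on `w_free`; the projection keeps the effective weights on the constraint set at every step.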

1997, with the advent of Long Short-Term Memory recurrent #neuralnetworks, marks the next step in our brief history of (large) #languagemodels from last week's #ise2023 lecture. Introduced by Sepp Hochreiter and Jürgen Schmidhuber, #LSTM #RNNs enabled efficient processing of sequences of data.
Slides: https://drive.google.com/file/d/1atNvMYNkeKDwXP3olHXzloa09S5pzjXb/view?usp=drive_link
#nlp #llm #llms #ai #artificialintelligence #lecture @fizise

Simplifying and Understanding State Space Models with Diagonal Linear RNNs

https://openreview.net/forum?id=YrBFZ2egXv

#attention #rnns #learns
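The "diagonal linear RNN" view of SSMs is easy to state in code: each state channel evolves independently under a scalar linear recurrence. The sketch below uses invented toy values for the decay and input weights; it is an illustration of the recurrence class the paper studies, not the paper's model.

```python
# Diagonal linear RNN sketch: one state channel with
# x_t = a * x_{t-1} + b * u_t, i.e. an exponentially weighted sum
# of past inputs. Values here are toy choices.

def diag_linear_rnn(inputs, a=0.9, b=1.0, x0=0.0):
    """Run one diagonal (scalar) linear recurrence over the inputs."""
    x, states = x0, []
    for u in inputs:
        x = a * x + b * u
        states.append(x)
    return states

# Because the recurrence is linear, the state has a closed form:
# x_t = sum_k a**(t-k) * b * u_k, which is why these models can also
# be evaluated as a convolution rather than a sequential scan.
seq = [1.0, 0.0, 0.0, 2.0]
states = diag_linear_rnn(seq)
closed_form = sum(0.9 ** (3 - k) * u for k, u in enumerate(seq))
print(states[-1], closed_form)
```

A full SSM stacks many such channels (with complex or learned `a`) plus input/output projections and nonlinear mixing between layers.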



'Minimal Width for Universal Property of Deep RNN', by Chang hoon Song, Geonho Hwang, Jun ho Lee, Myungjoo Kang.

http://jmlr.org/papers/v24/22-1191.html

#rnns #rnn #deep


Investigating Action Encodings in Recurrent Neural Networks in Reinforcement Learning

Matthew Kyle Schlegel, Volodymyr Tkachuk, Adam M White, Martha White

https://openreview.net/forum?id=K6g4MbAC1r

#rnns #rnn #reinforcement
