Adel Ardalan

125 Followers
290 Following
63 Posts
Computer scientist by training, aspiring computational neuroscientist | Postdoc at Princeton Neuroscience Institute | UW-Madison CS & Columbia Zuckerman Institute alumni 🧠🤖💿

📣 If you are attending the SfN meeting in DC, don’t miss our nanosymposium co-organized by @[email protected] and myself on geometry of task representations in biological and artificial neural networks 📐+🧠

⏰ November 15, 2023, 1:00 PM - 3:30 PM, Room WCC 146C

Check out our fantastic lineup of speakers here: https://t.co/68XiOdaUJ9

@postlelab @LibedinskyLab @gmishne @ChengXue

In our RNNs, as in several previous fMRI and EEG studies, higher-order context was operationalized as priority. Behavior is differentially sensitive to variation in neural measures of the encoding efficacy of these two types of context, and the two are differently distributed across the brain.

📣📣📣 On behalf of @postlelab

Representing context and priority in working memory https://www.biorxiv.org/content/10.1101/2023.10.24.563608v1
RNNs represent 1st-order context (which individuates an item) and higher-order context (which can change unpredictably) via distinct mechanisms. Quan Wan @adel and Jacqueline Fulvio

@NeuralEnsemble @barbosa @lowrank_adrian @ShahabBakht @roydanroy Jon Cohen’s group are actively working on similar ideas, e.g. ESBN: https://arxiv.org/pdf/2012.14601.pdf

@chrisXrodgers @StefanoFusi #NeuroPaperThread #NeuroNewPaper

11) Also, I am on the job market this year, so please do not hesitate to reach out if you think I could be a good fit for a theoretical/computational faculty position in your department!

#NeuroPaperThread #NeuroNewPaper

1) Our article “The geometry of cortical representations of touch in rodents” with @chrisXrodgers Randy Bruno and @StefanoFusi is finally out! In brief, we found that whisker contacts in mice S1 are represented in approximately orthogonal subspaces https://www.nature.com/articles/s41593-022-01237-9 🧵​👇​

The geometry of cortical representations of touch in rodents - Nature Neuroscience

Mice were trained to discriminate objects using their whiskers. The geometry of the neural representations recorded in somatosensory cortex was disentangled with small non-linear perturbations, allowing for generalization and flexibility.

Nature
@achterbrain @taylorwwebb Taylor, Jon and others on their team are doing excellent work along these lines. Glad it was helpful. :)
@taylorwwebb @achterbrain Thought this might pique your interest.

Here's something to kick things off over here: in a new paper, we found that GPT-3 matches or exceeds human performance on zero-shot analogical reasoning, including on a text-based version of Raven's Progressive Matrices.

https://arxiv.org/abs/2212.09196v1

Emergent Analogical Reasoning in Large Language Models

The recent advent of large language models has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here, we performed a direct comparison between human reasoners and a large language model (the text-davinci-003 variant of GPT-3) on a range of analogical tasks, including a novel text-based matrix reasoning task closely modeled on Raven's Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.

arXiv.org

The "generalizability" problem is easy to grasp: very few research samples and first authors in the behavioral sciences are from the Global South.

image 1: world map scaled by population

image 2: world map scaled by published research