Jeremy Manning

@jeremy@neuromatch.social
650 Followers
186 Following
22 Posts

Contextual Dynamics Lab director, Dartmouth prof, memory & 🧠 network modeler, data scientist, dad, husband, tree hugger 🌲, cat lover, & 🧁+ 🍪 baker

https://www.context-lab.com

Academic interests: Neuroscience, psychology, memory, networks, ed tech
Research flavors: computational, natural language processing, data science, visualization
Other: Dad, husband, hiker, runner, tree hugger, cat lover, baker
Affiliations: Dartmouth College, Contextual Dynamics Laboratory
Lab website: https://www.context-lab.com/
GitHub: https://github.com/jeremymanning
Twitter: https://twitter.com/jeremyRmanning
LinkedIn: https://www.linkedin.com/in/jeremy-manning-0075a477/
Pronouns: he/him/his

🚨New preprint alert!🚨

We (lead author: Lucy Owen) used the Hasson Lab's “Pie Man” dataset (Simony et al., 2016) to explore how “informative” and “compressible” brain activity patterns are during intact/scrambled story listening.

Preprint: https://biorxiv.org/cgi/content/short/2023.03.17.533152v1
Code/data: https://github.com/ContextLab/pca_paper

The core idea is that more “informative” brain patterns should yield higher classification accuracy. More “compressible” brain patterns should yield higher accuracy for a fixed number of features. We’re interested in tradeoffs between the two, under different circumstances.
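For intuition, here's a minimal toy sketch of how one might operationalize both notions with PCA plus a cross-validated classifier. This isn't our actual analysis code (see the repo above for that); the toy data, labels, and classifier choice are purely illustrative.

```python
# Toy sketch (illustrative only): operationalizing "informativeness" and
# "compressibility" with PCA + a cross-validated classifier. X is a
# (timepoints x features) activity matrix; y labels each timepoint
# (e.g., which story segment it came from).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def accuracy_by_dimensionality(X, y, n_components_list=(2, 5, 10, 25, 50)):
    """Cross-validated decoding accuracy as a function of how many
    principal components are retained."""
    return {k: cross_val_score(make_pipeline(PCA(n_components=k),
                                             LogisticRegression(max_iter=1000)),
                               X, y, cv=5).mean()
            for k in n_components_list}

# "Informative" patterns: high accuracy when many components are retained.
# "Compressible" patterns: accuracy stays high even when only a few
# components are kept.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 100))   # stand-in for (timepoints x voxels) data
y = np.repeat(np.arange(10), 30)      # stand-in labels (e.g., story segments)
print(accuracy_by_dimensionality(X, y))
```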

Brain activity from people listening to the unscrambled story was both more informative overall *and* more compressible than activity during scrambled listening or during rest.

As the story progresses, these patterns get stronger! After listening for a while, activity evoked by the intact story or coarse scrambling becomes even *more* informative and compressible, whereas finely scrambled/rest activity becomes *less* informative and compressible.

We also zoomed in on specific networks. Activity from higher-order brain areas was generally more informative than from lower-order areas, but we didn’t see any obvious differences in compressibility across networks.

We did some interesting exploratory things too, using a combination of @neurosynth and ChatGPT to help interpret what the different patterns we found might “mean” from a functional perspective.

Taken together, our work suggests that our brain networks flexibly reconfigure according to ongoing task demands: activity patterns associated w/ higher-order cognition and high engagement are more informative and compressible than patterns evoked by lower-order tasks.

Check out our new preprint on learning from Khan Academy videos (co-authors: @paxton and Andy Heusser)! We use text embeddings to model and map what people know and how knowledge changes over time.

Preprint: https://psyarxiv.com/dh3q2
Code/data: https://github.com/ContextLab/efficient-learning-khan
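If you're curious about the flavor of the approach, here's a toy sketch of the general idea (not our actual pipeline): put text into a shared vector space and watch where a learner's responses sit relative to the course content. TF-IDF stands in here for whatever text-embedding model you prefer, and the example texts are made up.

```python
# Toy sketch of the general idea (not our actual pipeline): embed a learner's
# free-text responses and the lecture content in a shared vector space, then
# track how close the responses get to the lecture over time. TF-IDF is just
# an illustrative stand-in for a fancier text-embedding model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

lecture = "forces cause objects to accelerate in proportion to their mass"
responses_over_time = [
    "stuff moves when you push it",                       # before the lesson
    "pushing something harder makes it speed up more",    # partway through
    "acceleration is proportional to force and inversely proportional to mass",  # after
]

vectorizer = TfidfVectorizer().fit([lecture] + responses_over_time)
lecture_vec = vectorizer.transform([lecture])
response_vecs = vectorizer.transform(responses_over_time)

# Similarity to the lecture content gives a rough "knowledge" estimate;
# tracking it across responses maps how knowledge changes over time.
for t, sim in enumerate(cosine_similarity(response_vecs, lecture_vec).ravel()):
    print(f"response {t}: similarity to lecture = {sim:.2f}")
```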

What we experience in the current moment tells us about *now*, but what does it tell us about the past or future? And does the current moment tell us *more* about the past or about the future?

Historically, the statistical learning literature has tended to study these sorts of questions using highly simplified lab-created sequences (e.g., Markov processes). Statistically, these sequences are temporally symmetric. Behaviorally, people are just as good at predicting unknown past and future states, given observations in the present.
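(A toy illustration of what "temporally symmetric" means here, mine rather than the paper's: for a stationary Markov chain, the backward transition probabilities follow from the forward ones via Bayes' rule, and on average the present state leaves you exactly as uncertain about the previous state as about the next one.)

```python
# My own toy illustration (not from the paper): for a stationary Markov
# chain, knowing the present state leaves you, on average, exactly as
# uncertain about the previous state as about the next one.
import numpy as np

# Forward transition matrix: P[i, j] = P(next = j | now = i)
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])

# Stationary distribution (left eigenvector of P with eigenvalue 1)
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi = pi / pi.sum()

# Backward transitions via Bayes' rule: B[j, i] = P(previous = i | now = j)
B = (P * pi[:, None]).T / pi[:, None]

def avg_uncertainty(T, weights):
    """Average entropy (bits) of each row of T, weighted by `weights`."""
    return float(weights @ (-(T * np.log2(T)).sum(axis=1)))

print("uncertainty about the future:", avg_uncertainty(P, pi))
print("uncertainty about the past:  ", avg_uncertainty(B, pi))  # same value
```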

But in our own lives, we have memories of the past but not the future, imposing what's known as the "psychological arrow of time" on our subjective experiences. This means we know more about our own pasts than our own futures. (We often take this for granted, even though most laws of physics are temporally symmetric!)

We (@xxming, Ziyan Zhu, and I) were curious: in *other* people's lives, where the past and future are equally unknown (and unremembered), are our inferences symmetric (like in typical statistical learning studies) or asymmetric (like for our own lives)?

We ran a study to test this, and we found something kind of neat: it turns out the psychological arrow of time is communicable to other people through conversation! Essentially, what people say is influenced by what they know. And since each person knows more about their own past, this asymmetry is picked up by other people.

We think there are all sorts of interesting implications here about how we communicate our own biases and knowledge asymmetries to other people. @xxming also has some really mind-blowing ideas about how an *asymmetric* law of physics (the second law of thermodynamics) might help explain the psychological arrow of time and some other fundamental properties of memory. (We're planning to write up an opinion paper about these ideas later.)

We hope you'll check out our preprint, send along some thoughts, questions, constructive criticisms, etc.!

#preprint: https://psyarxiv.com/yp2qu/
#code and #data: https://github.com/ContextLab/prediction-retrodiction-paper

#memory #inference #narratives #conversation

"Describe to an expert how a large language model named 'chatGP-B' might work by leveraging a swarm of highly trained intelligent bees to process queries and generate text-based responses."

🐝🐝🐝