Jeremy Manning

@jeremy@neuromatch.social
650 Followers
186 Following
22 Posts

Contextual Dynamics Lab director, Dartmouth prof, memory & 🧠 network modeler, data scientist, dad, husband, tree hugger 🌲, cat lover, & 🧁+ 🍪 baker

https://www.context-lab.com

Academic interests: Neuroscience, psychology, memory, networks, ed tech
Research flavors: computational, natural language processing, data science, visualization
Other: Dad, husband, hiker, runner, tree hugger, cat lover, baker
Affiliations: Dartmouth College, Contextual Dynamics Laboratory
Lab website: https://www.context-lab.com/
GitHub: https://github.com/jeremymanning
Twitter: https://twitter.com/jeremyRmanning
LinkedIn: https://www.linkedin.com/in/jeremy-manning-0075a477/
Pronouns: he/him/his
Computational linguistics folks: what's the state-of-the-art way to tag tenses on a sentence-by-sentence basis? I want to be able to count up the number of past and future tense uses in ~1M sentences. Any tips or pointers would be much appreciated! 🙏
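
One baseline sketch (not necessarily the state of the art, and very much an assumption rather than a vetted answer): spaCy marks past tense morphologically per token, while English future tense is periphrastic, so a "will"/"shall" modal heuristic is one rough stand-in.

```python
# Sketch: per-sentence past/future tense counts with spaCy.
# Assumptions: the small English model and the "will"/"shall" heuristic
# for future tense are illustrative choices, not settled best practice.
import spacy

nlp = spacy.load("en_core_web_sm")

def count_tenses(sentence: str) -> dict:
    doc = nlp(sentence)
    # Past tense is marked morphologically (e.g., "walked" -> Tense=Past)
    past = sum(1 for tok in doc if "Past" in tok.morph.get("Tense"))
    # English future is periphrastic: approximate it via modal "will"/"shall"
    future = sum(1 for tok in doc
                 if tok.tag_ == "MD" and tok.lower_ in {"will", "shall"})
    return {"past": past, "future": future}

print(count_tenses("I walked to the store, and I will buy milk."))
# {'past': 1, 'future': 1}
```

For ~1M sentences, streaming batches through nlp.pipe() and disabling unused pipeline components should keep this tractable.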

Check out our new preprint with Luke Chang's group (lead: @eshjolly)! Our results from a movie-watching social memory task suggest that we naturally represent and remember people by their connections with others 🧑‍🤝‍🧑🧑👪💕

Paper: https://psyarxiv.com/bw9r2
Code/data: https://github.com/ejolly/social_mem

Excited to release our revamped "Intro to Programming for Psych Scientists" *free* MOOC! Link: https://github.com/ContextLab/cs-for-psych

New stuff/updates include data viz, pandas, scikit-learn, model design + model building, NLP, time-frequency analysis, timeseries prediction, and more!

🤓🧑‍💻

Applications are now open for the Dartmouth MIND (Methods in Neuroscience at Dartmouth) computational summer school! This year's theme is "interacting minds," and it comes with a 🤩 faculty lineup. For application instructions + more info, check out http://mindsummerschool.org

🚨New preprint alert!🚨

We (lead author: Lucy Owen) used the Hasson Lab's “Pie Man” dataset (Simony et al., 2016) to explore how “informative” and “compressible” brain activity patterns are during intact/scrambled story listening.

Preprint: https://biorxiv.org/cgi/content/short/2023.03.17.533152v1
Code/data: https://github.com/ContextLab/pca_paper

The core idea is that more “informative” brain patterns should yield higher classification accuracy. More “compressible” brain patterns should yield higher accuracy for a fixed number of features. We’re interested in tradeoffs between the two, under different circumstances.
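
To make the two notions concrete, here's a rough sketch of how they could be operationalized with PCA plus decoding (this is an illustrative stand-in, not the exact pipeline from the paper; the data shapes, labels, and classifier are placeholder assumptions):

```python
# Sketch: "informative" ~ high decoding accuracy overall;
# "compressible" ~ accuracy that holds up at small numbers of features.
# Data, labels, and classifier here are placeholder assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 1000))   # stand-in: 300 timepoints x 1000 voxels
y = rng.integers(0, 10, size=300)  # stand-in: story-segment label per timepoint

for k in (5, 20, 100):
    clf = make_pipeline(PCA(n_components=k),
                        LogisticRegression(max_iter=1000))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{k} components: mean decoding accuracy = {acc:.3f}")
```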

Brain activity from people listening to the unscrambled story was both more informative overall *and* more compressible than activity during scrambled listening or during rest.

As the story progresses, these patterns get stronger! After listening for a while, activity evoked by the intact story or coarse scrambling becomes even *more* informative and compressible, whereas finely scrambled/rest activity becomes *less* informative and compressible.

We also zoomed in on specific networks. Activity from higher-order brain areas was generally more informative than from lower-order areas, but we didn’t see any obvious differences in compressibility across networks.

We did some interesting exploratory things too, using a combination of @neurosynth and ChatGPT to help understand what different patterns we found might “mean” from a functional perspective.

Taken together, our work suggests that our brain networks flexibly reconfigure according to ongoing task demands: activity patterns associated w/ higher-order cognition and high engagement are more informative and compressible than patterns evoked by lower-order tasks.

Check out our new preprint on learning from Khan Academy videos (co-authors: @paxton and Andy Heusser)! We use text embeddings to model and map what people know and how knowledge changes over time.

Preprint: https://psyarxiv.com/dh3q2
Code/data: https://github.com/ContextLab/efficient-learning-khan
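
For intuition, here's a generic sketch of the embedding idea (the model, texts, and similarity measure below are illustrative stand-ins, not the paper's actual materials or pipeline): embed lecture content and learners' free responses in a shared space, then track how close they move over time.

```python
# Sketch: compare a learner's free-response answers with lecture content
# in a shared embedding space. Model name and texts are stand-ins.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

lecture = "The four fundamental forces are gravity, electromagnetism, and the strong and weak nuclear forces."
before = "I think gravity pulls things down."
after = "Gravity and electromagnetism are two of the four fundamental forces."

emb = model.encode([lecture, before, after])

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print("pre-lecture similarity: ", cosine(emb[0], emb[1]))   # lower overlap
print("post-lecture similarity:", cosine(emb[0], emb[2]))   # higher overlap
```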

Several years ago I came up with a 'retirement' project: I would ask my long-time friends and colleagues in the hippocampus world three questions: what got them interested in the hippocampus, which findings other than their own most excited them, and what advice they have for young researchers. Then I would compile their answers into a book, or something like that. My own laziness and COVID derailed the grand project, but I did gather input from a handful of colleagues, and I have now decided I might 'publish' their answers on Mastodon, piece by piece. I have material from Phil Best and Carolyn Harley, both of whom passed away recently. I have an interview with Brenda Milner, and thoughts from Ray Kesner, Jeff Taube, Gary Lynch, and Gyuri Buzsaki. I would like to hear from folks whether doing this is a good idea.
I got annoyed with making ∞ versions of the same resume, so now I use a Python script, a YAML config, and a Mustache-templated TeX file to generate everything. Always up to date!
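
The shape of that pipeline, roughly (the file names and the `chevron` Mustache library below are illustrative choices, not a prescription):

```python
# Sketch of the YAML -> Mustache -> LaTeX -> PDF pipeline.
# File names and the `chevron` library are illustrative stand-ins.
import subprocess
import chevron  # one Python Mustache implementation
import yaml

with open("resume.yaml") as f:            # structured resume data
    data = yaml.safe_load(f)

with open("resume.mustache.tex") as f:    # LaTeX template with {{placeholders}}
    template = f.read()

with open("resume.tex", "w") as f:        # fill the template from the config
    f.write(chevron.render(template, data))

subprocess.run(["pdflatex", "resume.tex"], check=True)
```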

I just changed instances. Hello #aus.social! Glad to be here!

Here's an #introduction.

I'm a #prof in computational #cogsci at #UniMelb. I use behavioural experiments and mathematical models to study how people #learn and #reason. I think a lot about how we make #inferences from sparse #data, how we #share information with each other, and how this affects our #cultural and #informational #systems.

I am a #parent to two amazing kids, love #teaching #rstats to psychology undergraduates, am an #American living in #Australia (citizen of both) in the beautiful #Dandenongs, and am #trans (see post below). I try not to take myself too seriously, especially on social media.

https://perfors.net/blog/im-transgender-thats-okay/


you probably aren't boosting enough! here's a rough heuristic for my new fedis: boost with ~the same frequency you would do a "like" on Twitter. (and then favorite with wild abandon because it is just spreading appreciation mostly)

boosting is more about cross-pollinating posts between instances and accounts than it is about representing who you are as a person (though obvs you wouldn't boost stuff that is uh antithetical to you as a person).

at least two directions of effect:

  • boosting out from your instance: this is particularly important for posts from small accounts, since otherwise their posts don't show up for anyone outside your instance.
  • boosting into your instance: taking stuff you see from ppl you follow on other servers and making it show up on your local feed. This is also particularly important for smaller accounts so they can find ppl (and it's what makes the local feed fun!)

there are some subtleties like asking to boost some sensitive posts, but for things marked public it's usually fine!

#Meta #Boosts