Learning, sleep replay and consolidation of contextual fear memories: A neural network model journals.plos.org/ploscompbiol... #neuroskyence

Learning, sleep replay and consolidation of contextual fear memories: A neural network model

Author summary How do we learn to fear certain environments? Why do some fear memories fade while others persist or even grow stronger over time? Scientists have long used laboratory experiments to study how animals associate danger with a particular context. These studies have helped identify brain regions involved in fear learning, including the amygdala, hippocampus, and cortex, and have inspired many computational models of how fear is acquired in the brain. However, most models focus only on what happens when fear is first learned, overlooking how these memories evolve in the following days and nights. In this work, we present a neural network model that captures how fear memories are strengthened or reshaped during sleep. It builds on earlier models by incorporating memory replay and synaptic homeostasis, two brain processes believed to support emotional memory consolidation. Our model identifies neural processes that help make fear memories persistent, suggests that sleep is necessary to maintain adaptive behaviour after threatening experiences, and proposes that sleep disruptions mediate the harmful impact of stress on emotional regulation. By extending amygdala-based models of fear learning to include post-learning processes, we aim to narrow the gap between these models and disorders linked with persistent fear, such as PTSD.
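The two post-learning processes the summary names, memory replay and synaptic homeostasis, can be caricatured in a few lines. Below is a minimal toy sketch, not the paper's model: the network sizes, learning rates, Hebbian rule, and multiplicative downscaling rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights: 20 "context" inputs projecting onto 10 downstream "fear" units.
W = rng.uniform(0.0, 0.1, size=(10, 20))
fear_context = (rng.random(20) < 0.3).astype(float)  # pattern paired with shock

def replay_step(W, pattern, lr=0.05):
    # Hebbian outer-product update during one replay event of the pattern.
    post = W @ pattern
    return W + lr * np.outer(post, pattern)

def downscale(W, target=1.0):
    # Multiplicative synaptic homeostasis: renormalize each unit's total input.
    return W * (target / np.maximum(W.sum(axis=1, keepdims=True), 1e-12))

# One simulated "night": repeated replay events, then homeostatic downscaling.
for _ in range(5):
    W = replay_step(W, fear_context)
W = downscale(W)

# The replayed context now drives stronger responses than a random control
# context, even though each unit's total synaptic weight was renormalized.
control_context = (rng.random(20) < 0.3).astype(float)
print((W @ fear_context).mean(), (W @ control_context).mean())
```

The point of the sketch is the interaction: replay selectively strengthens the shock-paired pattern, while downscaling keeps total synaptic weight bounded, so the memory is expressed as a *relative* weight advantage rather than runaway potentiation.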

The Mind’s Eyes: Distinct Neural Correlates of Spatial and Object Imagery www.sciencedirect.com/science/arti... #neuroskyence

On consciousness in animals and in artificial intelligence journals.physiology.org/doi/abs/10.1... #neuroskyence #MLSky

A neural signature of adaptive mentalization - Nature Neuroscience

Successful social interaction relies on predicting others’ behavior. Buergi et al. combined computational modeling and brain imaging to show how humans adjust to changing strategies, revealing a neural signature of adaptive belief updating.

Nature
Recurrent cortical networks encode natural sensory statistics via sequence filtering www.cell.com/neuron/fullt... #compneuro #neuroskyence

Orbitofrontal cortex drives predictive filtering of sensory responses www.nature.com/articles/s41... #neuroskyence

Orbitofrontal cortex drives predictive filtering of sensory responses - Nature Neuroscience

Top-down projections from the orbitofrontal cortex carry predictive signals that grow with sound experience and suppress the auditory cortex via inhibitory circuits, revealing a predictive mechanism for sensory habituation.

Nature
I'm still accepting applications! Very exciting opportunity to work on cognitive computational neuroscience of vision in NYC! #PsychJobs #NeuroJobs #neuroskyence #PsychSciSky

RE: https://bsky.app/profile/did:plc:tkvnioglsp5dw44mvxjwcmmz/post/3meca77l3ns2f
Delayed, Reduced and Redundant: Information Processing of Prediction Errors during Human Sleep www.jneurosci.org/content/earl... #neuroskyence

Delayed, Reduced and Redundant: Information Processing of Prediction Errors during Human Sleep

During sleep, the human brain transitions to a ‘sentinel processing mode’, enabling the continued processing of environmental stimuli despite the absence of consciousness. We employed advanced information-theoretic analyses, including mutual information (MI) and co-information (co-I), alongside event-related potential (ERP) and temporal generalization analyses (TGA), to characterize auditory prediction error processing across wakefulness and sleep. We hypothesized that a shared neural code would be present across sleep stages, with deeper sleep being associated with reduced information content and increased information redundancy.

Twenty-nine participants (15 women) underwent an auditory ‘local-global’ oddball paradigm during wakefulness and an 8-hour sleep opportunity monitored via polysomnography. We focused on ‘local’ mismatch responses to a deviating fifth tone after four standards. ERP analyses showed that prediction error processing continued throughout all sleep stages (N1-N3, REM). Mutual information analyses revealed a substantial reduction in encoded prediction error information, particularly during N3 and REM, although ERP amplitudes increased with deeper NREM sleep. We also observed delayed information encoding during sleep, and co-information analyses showed that neural dynamics became increasingly redundant with increasing sleep depth. Temporal generalisation analyses revealed a largely shared neural code between N2 and N3 sleep, though it differed between wakefulness and sleep.

We demonstrate how the neural code of the ‘sentinel processing mode’ changes from wake to light to deep sleep and REM, characterised by delayed processing and more redundant, less rich neural information in the human cortex as consciousness wanes. This altered stimulus processing reveals how neural information evolves with variations in consciousness across the night.

Statement of Significance: Even during sleep, the human brain remains responsive to its surroundings. Using an auditory stimulation paradigm, the study reveals how the neural code underlying this ‘sentinel processing mode’ changes from wakefulness to sleep and with increasing sleep depth. Using computational methods to precisely characterise information processing in the brain, we show that as sleep deepens, the brain encodes less information at increasing redundancy. These findings provide new insights that may help understand why we lose consciousness when falling asleep.

Journal of Neuroscience
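As a toy illustration of the kind of mutual information estimate such analyses build on, here is a plug-in MI estimator on synthetic binary data. This is a generic textbook sketch, not the authors' pipeline: the stimulus/response coding and noise level are made up for the example.

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of MI (in bits) between two discrete sequences."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))   # joint probability
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

# Synthetic example: a binary "standard vs. deviant" stimulus label and a
# binarized neural response that tracks it imperfectly (20% of trials flipped),
# mimicking reduced information content in a noisier state.
rng = np.random.default_rng(1)
stim = rng.integers(0, 2, size=2000)
flipped = rng.random(2000) < 0.2
resp = np.where(flipped, 1 - stim, stim)

print(round(mutual_information(stim, resp), 3))  # well below the 1-bit ceiling
```

Raising the flip probability lowers the estimate toward zero, which is the intuition behind "less information encoded" in deeper sleep; the paper's co-information analyses extend this to three variables to quantify redundancy.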
A generalized Bayesian framework for maximizing information gain and model selection dx.plos.org/10.1371/jour... #stats #neuroskyence

A generalized Bayesian framework for maximizing information gain and model selection

Author summary In this work, we present a generalized Bayesian framework for designing informative experiments and selecting suitable models in biological systems. In simple terms, our method identifies which experiments or measurements are most useful in improving parameter estimates and model predictions. The key idea is based on a new information measure called β-information gain, which uses the Bhattacharyya coefficient to quantify how much knowledge is gained from an experiment. We show that maximizing this gain is equivalent to reducing uncertainty and improving model confidence. Through case studies on the Hes1 transcription model and HIV-1 2-LTR dynamics, we demonstrate how this approach efficiently chooses the best experiments and sampling schedules. Our method also provides a novel and interpretable tool for model selection. Overall, this study provides a practical and computationally simple way to perform optimal experiment design in data-driven modeling in systems biology.
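The Bhattacharyya coefficient at the heart of the proposed measure is straightforward to compute on a discrete parameter grid. A minimal sketch follows; the flat prior, the Bernoulli likelihood, and the use of -ln(BC) as an informativeness score are illustrative assumptions of ours, not the paper's β-information gain itself.

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    # BC(p, q) = sum_i sqrt(p_i * q_i); equals 1 iff the distributions match,
    # and shrinks toward 0 as they diverge.
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(np.sqrt(p * q)))

# Discrete grid over a single parameter theta in [0, 1].
theta = np.linspace(0.0, 1.0, 201)
prior = np.ones_like(theta) / len(theta)          # flat prior

# Likelihood of observing 7 successes in 10 Bernoulli trials.
likelihood = theta**7 * (1.0 - theta)**3
posterior = prior * likelihood
posterior /= posterior.sum()                      # normalize on the grid

bc = bhattacharyya_coefficient(prior, posterior)
# The Bhattacharyya distance -ln(BC) grows as an experiment becomes more
# informative, i.e., as the posterior pulls away from the prior.
print(round(bc, 3), round(-np.log(bc), 3))
```

Comparing -ln(BC) across candidate experiments (each producing a different posterior) gives the flavor of ranking designs by information gain, which is the role the paper's β-information gain plays.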

🚨Job alert! I'm recruiting a postdoc! If you want to study the time course of task-driven visual perception, please reach out! #neuroskyence #VisionScience #CogSci barnard.wd1.myworkdayjobs.com/en-US/Facult...

Postdoctoral Research Fellow, Cognitive/Computational Neuroscience

If you are a current Barnard College employee, please use the internal career site to apply for this position.

Job: Postdoctoral Research Fellow, Cognitive/Computational Neuroscience

The Barnard Visual Cognition Lab in the Department of Psychology at Barnard College is seeking applicants for one postdoctoral research fellow for the 2026–2027 academic year. This is a one-year, full-time position with a possibility of renewal contingent on funding and performance. The fellowship is designed for an emerging scholar who wants deep, hands-on experience working at the intersections of human cognitive science and artificial intelligence to understand the time-course of naturalistic scene understanding. We are especially excited about applicants who enjoy: (1) working with large, real-world datasets, (2) combining visual processing and semantic modeling, and (3) translating theory questions about perception and meaning into concrete, testable analyses.

Proposed Start Date: 8/1/2026 (flexible within Summer/Fall 2026).

Job Description: The Visual Cognition Lab studies how humans perceive, interpret, and navigate real-world scenes, linking visual information, semantic inference, and task demands to behavior and brain activity. Ongoing projects include:
- Large-scale naturalistic image and video datasets (including curated “visual experience” style datasets; indoor/outdoor scenes, places, objects, and actions).
- Multimodal scene descriptions and embeddings: human- and LLM-generated descriptions across multiple task prompts (e.g., affordances, navigation, aesthetics, danger, multisensory inferences), and embedding-based targets (e.g., MPNet/Transformer sentence encoders).
- Model–brain alignment using encoding/decoding with EEG time courses and/or fMRI (e.g., ridge regression, variance partitioning, RSA, representational geometry, temporal generalization).
- Computational measures of visual information (e.g., image statistics/compressibility proxies, deep network features, object/scene representations).

Position Summary: The postdoctoral fellow will lead and co-lead projects that combine computational modeling, machine learning, and EEG to answer questions about scene understanding and neural representation. The fellow will work closely with the PI, collaborate with students, and contribute to manuscripts, conference submissions, and grant-related research aims. This is a 35-hour/week position with flexible scheduling; on-campus presence is encouraged for mentorship and collaboration, with hybrid arrangements possible depending on project needs. Barnard provides an intellectually vibrant environment with close ties to Columbia University and the broader NYC cognitive science community.

Responsibilities Include:
Research & Analysis
- Develop and maintain Python-based pipelines for large-scale data processing (images/video, text descriptions, embeddings, metadata).
- Train and evaluate models for representation learning and prediction (e.g., PyTorch, Transformers, CNN backbones, contrastive/embedding objectives).
- Perform rigorous statistical modeling of behavior and/or neural data.
- Conduct model-to-brain analyses for EEG (e.g., MNE-Python workflows; feature extraction; time-resolved encoding; representational similarity; temporal dynamics).
Open, Reproducible Science
- Write clean, documented code; use version control (Git); build reproducible experiments.
- Prepare datasets and analysis outputs for publication and sharing (data dictionaries, provenance, basic QA/QC).
Mentorship & Lab Citizenship
- Provide light-to-moderate mentorship to undergraduate/RA contributors (code review, research hygiene, analysis planning).
- Participate in lab meetings, research discussions, and departmental intellectual life.
Scholarly Output
- Lead/co-lead manuscripts and conference submissions (e.g., VSS/CCN), including figure generation and method write-ups.

Skills, Qualifications & Requirements:
Required Qualifications
- PhD by start date in Psychology, Neuroscience, Cognitive Science, Computer Science, Statistics, or a related field.
- Strong scientific computing skills in Python (NumPy/Pandas, reproducible pipelines).
- Demonstrated ability to run and interpret statistical analyses with appropriate validation (cross-validation, uncertainty, robustness checks).
- Evidence of research productivity (publications/preprints, conference papers, or equivalent).
- Commitment to inclusive mentorship and working respectfully in a diverse academic community.
Preferred (not all required)
- Experience with machine learning / deep learning (PyTorch; model training; GPU workflows).
- Experience with Transformers / text embeddings / multimodal modeling (e.g., Hugging Face ecosystem).
- Experience with EEG (MNE-Python) and encoding/decoding frameworks.
- Comfort working with large datasets.
- Strong data visualization and figure-generation skills for publication.

Application Requirements: Only complete applications submitted via Workday will be considered. Applicants are required to upload the following documents:
- Curriculum Vitae (maximum file size: 5 MB)
- A single PDF file (maximum file size: 30 MB) containing: a cover letter (1–2 pages) describing research interests, relevant technical experience (ML/statistics/neuro methods), and what you’d want to build/learn in this fellowship; and 1–2 representative artifacts (optional but encouraged): a preprint/paper, GitHub repo, or a short code sample.

Finalists will be asked to identify three references (at least one from a primary research supervisor/PI) who will be contacted at a later stage. Priority will be given to applications received before March 15, 2026. Interviews may begin in early February 2026. Applications will be considered until the position is filled. Please contact Dr. Michelle R. Greene at [email protected] with questions regarding the Postdoctoral Fellowship.

Salary: $68,000 - $72,000 annually

Barnard College is an Equal Opportunity Employer. Barnard does not discriminate due to race, color, creed, religion, sex, sexual orientation, gender and/or gender identity or expression, marital or parental status, national origin, ethnicity, citizenship status, veteran or military status, age, disability, or any other legally protected basis. Qualified candidates of all backgrounds are encouraged to apply for vacant positions at all levels. The salary of the finalist selected for this role will be set based on a variety of factors, including but not limited to departmental budgets, qualifications, experience, education, licenses, specialty, and training. The above hiring range represents the College's good faith and reasonable estimate of the range of possible compensation at the time of posting.

Company: Barnard College. Time Type: Full time.

Barnard is an intellectually stimulating, diverse college community of approximately 800 full- and part-time faculty and staff dedicated to providing an extraordinary educational environment for our 2,600 students. Whether you are a current or potential Barnard employee, the Office of Human Resources is here to provide you with clear and accurate information and the best possible service. We hope that you find the site useful and will feel free to email any of the HR staff directly with questions or suggestions. We look forward to working with you.