❗️NEW PAPER❗️ How do we learn the statistical structure of the world? We tackled this big question by borrowing the blocking paradigm from reinforcement learning.
Led by Ilayda Nazli, w/ Christoph Huber-Huber and Floris de Lange
@DondersInst @cognition @cogsci @neuro @neuroscience

Full story: https://doi.org/10.1371/journal.pone.0306797

Highlights👇

Forward and backward blocking in statistical learning

Prediction errors have a prominent role in many forms of learning. For example, in reinforcement learning, agents learn by updating the association between states and outcomes as a function of the prediction error elicited by the event. One paradigm often used to study error-driven learning is blocking. In forward blocking, participants are first presented with stimulus A, followed by outcome X (A→X). In the second phase, A and B are presented together, followed by X (AB→X). Here, the prior A→X association blocks the formation of B→X, because X is already fully predicted by A. In backward blocking, the order of the phases is reversed: the association between B and X formed during the first learning phase (AB→X) is weakened when participants learn exclusively A→X in the second phase. The present study asked whether forward and backward blocking occur during visual statistical learning, i.e., the incidental learning of the statistical structure of the environment. In a series of studies using both forward and backward blocking designs, we observed statistical learning of temporal associations among pairs of images. While we found no forward blocking, we did observe backward blocking, suggesting a retrospective revaluation process in statistical learning and supporting a functional similarity between statistical learning and reinforcement learning.
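
To see why error-driven learning predicts forward blocking, here is a minimal Rescorla-Wagner-style simulation. It is an illustrative sketch, not code from the paper; the learning rate, trial counts, and cue labels are arbitrary assumptions.

```python
import numpy as np

def rescorla_wagner(trials, n_cues=2, alpha=0.3, lam=1.0):
    """Delta-rule learner: update weights of present cues by the prediction error."""
    w = np.zeros(n_cues)
    for present, outcome in trials:
        x = np.zeros(n_cues)
        x[list(present)] = 1.0              # which cues are shown on this trial
        delta = lam * outcome - x @ w       # prediction error
        w += alpha * delta * x              # only present cues are updated
    return w

A, B = 0, 1
phase1 = [({A}, 1)] * 40       # Phase 1: A -> X alone
phase2 = [({A, B}, 1)] * 40    # Phase 2: AB -> X together (forward blocking design)
w = rescorla_wagner(phase1 + phase2)
print(f"w_A = {w[A]:.2f}, w_B = {w[B]:.2f}")  # w_B stays near 0: learning about B is blocked
```

After phase 1, A alone already predicts X, so the prediction error during the compound phase is near zero and B acquires almost no associative strength.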

But there’s more, as shown by backward blocking: new info can update what we learned about earlier cues, suggesting a retrospective revaluation process in statistical learning, much like in reinforcement learning. Computationally, this kind of updating is nicely captured by, e.g., a Kalman filter over cue weights (a toy sketch appears after the Author Summary below). Beautiful walkthrough here: https://doi.org/10.1371/journal.pcbi.1004567
A Unifying Probabilistic View of Associative Learning

Author Summary
How do we learn about associations between events? The seminal Rescorla-Wagner model provided a simple yet powerful foundation for understanding associative learning. However, much subsequent research has uncovered fundamental limitations of the Rescorla-Wagner model. One response to these limitations has been to rethink associative learning from a normative statistical perspective: How would an ideal agent learn about associations? First, an agent should track its uncertainty using Bayesian principles. Second, an agent should learn about long-term (not just immediate) reward, using reinforcement learning principles. This article brings together these principles into a single framework and shows how they synergistically account for a number of complex learning phenomena.
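
A rough intuition for how a Kalman-filter learner produces backward blocking: because it tracks a full posterior over cue weights, compound (AB→X) training induces a negative covariance between the weights of A and B, so later A→X trials raise A's weight and, through that covariance, pull B's weight back down. The sketch below is a toy illustration under assumed parameters (prior variance, observation noise, trial counts), not the implementation from either paper.

```python
import numpy as np

def kalman_rw(trials, n_cues=2, prior_var=1.0, obs_noise=0.5):
    """Kalman-filter associative learner: maintain a posterior over cue weights."""
    w = np.zeros(n_cues)               # posterior mean of the weights
    S = prior_var * np.eye(n_cues)     # posterior covariance of the weights
    for present, outcome in trials:
        x = np.zeros(n_cues)
        x[list(present)] = 1.0
        k = S @ x / (x @ S @ x + obs_noise)   # Kalman gain
        w = w + k * (outcome - x @ w)         # mean update driven by prediction error
        S = S - np.outer(k, x @ S)            # uncertainty shrinks along the observed cues
    return w

A, B = 0, 1
phase1 = [({A, B}, 1)] * 40    # Phase 1: AB -> X (compound training)
phase2 = [({A}, 1)] * 40       # Phase 2: A -> X alone
print("w_B after AB->X      :", round(kalman_rw(phase1)[B], 2))
print("w_B after adding A->X:", round(kalman_rw(phase1 + phase2)[B], 2))
# w_B drops in phase 2 even though B never reappears: backward blocking via the
# negative weight covariance that compound training induces between A and B.
```

A plain Rescorla-Wagner learner cannot show this effect, because B is never presented in the second phase and so its weight is never updated; the posterior covariance is what carries the retrospective revaluation.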