How do you calculate surprisal for existing real-world texts? Current LLMs often recognize them after a few words, and then surprisal flatlines at zero. #llm #surprisal #psycholinguistics
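
One way in practice: score the text with a causal LM and read off −log p for each token. A minimal sketch, assuming the HuggingFace transformers library, with gpt2 as a stand-in checkpoint; for famous texts a large model has memorized, a smaller or older model (or one whose training cutoff predates the text) is less likely to flatline near zero:

```python
# Per-token surprisal of a text under a causal LM (sketch).
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "It was the best of times, it was the worst of times."
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits  # shape: (1, seq_len, vocab)

# log p(token_t | tokens_<t): the logits at position t-1 score token t,
# so shift logits and targets against each other by one position.
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
targets = ids[0, 1:]
token_log_probs = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
surprisal_bits = -token_log_probs / math.log(2)  # convert nats -> bits

for tok, s in zip(tokenizer.convert_ids_to_tokens(targets.tolist()), surprisal_bits):
    print(f"{tok:>12}  {s.item():6.2f} bits")
```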

Parrots! 🦜

Parrots love playing tablet games
Research from Northeastern University delves deep into data on how parrots use touchscreen devices, with the help of a bespoke gaming app
https://news.northeastern.edu/2024/03/20/parrots-playing-tablet-games/
https://news.ycombinator.com/item?id=39768604

Parrot Shows Off Tablet Talent
https://www.youtube.com/watch?v=cZSNhJcKFf4

Parrots learn to make video calls to chat with other parrots, then develop friendships, Northeastern University researchers say
https://news.northeastern.edu/2023/04/21/parrots-talking-video-calls/

#birds #intelligence #surprisal #parrots #cognition

Conscious AI Is the Second-Scariest Kind

A cutting-edge theory of mind suggests a new type of doomsday scenario.

The Atlantic

Testing the Predictions of Surprisal Theory in Eleven Languages
https://arxiv.org/abs/2307.03667

A fundamental result in psycholinguistics: less predictable words take longer to process.
One theoretical explanation for this finding is Surprisal Theory.
https://en.wikipedia.org/wiki/Prediction_in_language_comprehension#Surprisal_theory

Aside: surprisal ("surprise") is a central quantity in Friston's Free Energy Principle
https://mastodon.social/@persagen/110582825938232359

https://link.springer.com/content/pdf/10.1007/s10539-022-09864-z.pdf
Surprisal of x = log(1/p(x))
...
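
Spelled out as a toy helper (base 2 gives bits, natural log gives nats):

```python
import math

def surprisal(p: float, base: float = 2.0) -> float:
    """Surprisal of an outcome with probability p: log(1/p) = -log p."""
    return -math.log(p, base)

print(surprisal(0.125))  # a 1-in-8 word carries 3 bits of surprisal
print(surprisal(0.99))   # a near-certain word carries ~0.014 bits
```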

#SurprisalTheory #KarlFriston #FreeEnergyPrinciple #TheoriesOfConsciousness #surprisal

Testing the Predictions of Surprisal Theory in Eleven Languages

A fundamental result in psycholinguistics is that less predictable words take a longer time to process. One theoretical explanation for this finding is Surprisal Theory (Hale, 2001; Levy, 2008), which quantifies a word's predictability as its surprisal, i.e. its negative log-probability given a context. While evidence supporting the predictions of Surprisal Theory has been replicated widely, most of it has focused on a very narrow slice of data: native English speakers reading English texts. Indeed, no comprehensive multilingual analysis exists. We address this gap in the current literature by investigating the relationship between surprisal and reading times in eleven different languages, distributed across five language families. Deriving estimates from language models trained on monolingual and multilingual corpora, we test three predictions associated with Surprisal Theory: (i) whether surprisal is predictive of reading times; (ii) whether expected surprisal, i.e. contextual entropy, is predictive of reading times; and (iii) whether the linking function between surprisal and reading times is linear. We find that all three predictions are borne out crosslinguistically. By focusing on a more diverse set of languages, we argue that these results offer the most robust link to date between information theory and incremental language processing across languages.
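
Prediction (ii)'s quantity, contextual entropy, is just the expected surprisal over the model's next-token distribution. A sketch, again assuming the transformers library with gpt2 as a stand-in checkpoint:

```python
# Contextual entropy H = -sum_x p(x|context) log p(x|context),
# i.e. the expected surprisal of whatever word comes next.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The cat sat on the", return_tensors="pt").input_ids
with torch.no_grad():
    next_logits = model(ids).logits[0, -1]  # logits for the next token

log_p = torch.log_softmax(next_logits, dim=-1)
entropy_nats = -(log_p.exp() * log_p).sum()
print(f"contextual entropy: {entropy_nats.item() / math.log(2):.2f} bits")
```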


I recently posted a new preprint which explores the idea of #sampling #algorithms as a theory of human sentence #processing.
(w/ Morgan Sonderegger, Steve Piantadosi, & Tim O'Donnell)

Our observation is that while humans take longer to process more surprising items, most algorithms for sentence processing don't naturally have this property. Sampling algorithms do. We look at their empirical predictions: a superlinear relationship w/ #surprisal, and increasing variance.

https://psyarxiv.com/qjnpv/
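
A toy illustration of the intuition (not the preprint's actual algorithm): if "processing" an item means sampling until it comes up, the number of draws to success is geometric with mean 1/p, so mean processing time grows superlinearly in surprisal −log p, and its variance grows too:

```python
# Simulate draws-to-success for items of decreasing probability and
# report how mean and variance grow with surprisal.
import math
import random

random.seed(0)

for p in [0.5, 0.25, 0.1, 0.05, 0.02]:
    s = -math.log(p)  # surprisal in nats
    trials = []
    for _ in range(2000):
        n = 1
        while random.random() > p:  # sample until the item is drawn
            n += 1
        trials.append(n)
    mean = sum(trials) / len(trials)
    var = sum((t - mean) ** 2 for t in trials) / len(trials)
    print(f"surprisal {s:4.2f} nats  mean draws {mean:7.1f}  variance {var:10.1f}")
```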