https://open.spotify.com/playlist/02nFLie97RG6qc0ryEFpCY?si=7fb9efc402474a13
| Web | https://petterhol.me |
It's been a while since the last blog post, but here is a new one. Maybe the most ambitious ever 😄
We feel that we understand things when certain patterns of explanations are in place. If reality doesn't follow those patterns, our understanding suffers. On to the third such eureka fallacy I've blogged about: explanation by optimization.
https://petterhol.me/2025/12/10/the-eureka-fallacy-of-optimization/
Time for another mixtape to get you into that wintery mood. 🎶 A cassette-friendly 90 min, of course.
https://open.spotify.com/playlist/15sfyIxRV5S67ZJFvxwZKb?si=23264f8c58804014
The main theme is how entwined we are with the technology that we use to study ourselves—how ready we are to accept a replica of ourselves and our environment as a token of scientific insight.
Another theme is the revolutionary change LLM chatbots brought about: the shift from the big-data era of AI as super-human predictors to AI as human simulacra. That is, from a mainstream-science viewpoint, a move to methodologically more familiar ground.
New paper in NHB 📄🚨
We ran extensive experiments showing that loosening the rules of some canonical economic games makes people more cooperative.
Jia et al. experimentally show that when individuals can tailor their actions to each neighbour—a freedom termed social networking agency—they display higher levels of cooperation, trust and fairness in economic games.
A bit early, but who could wait? A C90 mixtape for the best of seasons. 🎶
https://open.spotify.com/playlist/1rSUHMyticZVygVsIEGsdM?si=71bb8c9b303741b2
Human decision-making belongs to the foundation of our society and civilization, but we are on the verge of a future where much of it will be delegated to artificial intelligence. The arrival of Large Language Models (LLMs) has transformed the nature and scope of AI-supported decision-making; however, the process by which they learn to make decisions, compared with humans, remains poorly understood. In this study, we examined the decision-making behavior of five leading LLMs across three core dimensions of real-world decision-making: uncertainty, risk, and set-shifting. Using three well-established experimental psychology tasks designed to probe these dimensions, we benchmarked LLMs against 360 newly recruited human participants. Across all tasks, LLMs often outperformed humans, approaching near-optimal performance. Moreover, the processes underlying their decisions diverged fundamentally from those of humans. On the one hand, our findings demonstrate the ability of LLMs to manage uncertainty, calibrate risk, and adapt to changes. On the other hand, this disparity highlights the risks of relying on them as substitutes for human judgment, calling for further inquiry.
The Nordic trick: with no two-digit temperatures (°C) in the forecast, let's define summer as a state of mind. Which, of course, needs a mixtape: 🎶
https://open.spotify.com/playlist/6CgTIoJhQTkHaFiqFZt1vL?si=9d3dfbdd29fd42ae
New blog post! 📯⭐
About how our love for symmetry can stop us from seeing the truth.