Titus von der Malsburg 📖👀💭

@tmalsburg@scholar.social
932 Followers
888 Following
757 Posts

Linguist, cognitive scientist, tt prof at the University of Stuttgart. I study language and how we understand it, one word at a time.

#eyetracking, large-scale crowd-sourced experiments, #Bayesian stats, computational cognitive modeling

I teach skills that are useful inside and outside the ivory tower.

Long-term #GNULinux user; devout member of the church of #Emacs; developer of scientific software; dreams in #Rstats; supporter of #FLOSS #LIBRE software

🚴

Website: https://tmalsburg.github.io
ORCID: https://orcid.org/0000-0001-5925-5145
Publications as RSS feed: https://tmalsburg.github.io/publications_malsburg.rss
Google Scholar: https://scholar.google.com/citations?user=_vYYOE4AAAAJ&hl=de&oi=ao
More people than usual are waiting at the bus stop this morning. Most of them are looking at their phones. A large sign says that the bus is being rerouted because of construction work and won't be stopping here. But nobody has noticed. I point this out to the people around me, but they just give me irritated looks. Only an elderly lady thanks me kindly for the heads-up and sets off for the replacement stop. Everyone else goes back to looking at their phones.
Not sure I understand your question. You can simply prompt a model with the first sentences of, e.g., the US Constitution and see how per-word surprisal quickly goes to zero. This is even true in relatively old models like GPT-2. That's not really surprising, but it does raise some methodological issues when using surprisal as an explanatory variable for human language processing.
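For concreteness, here is a minimal sketch of the kind of computation I mean, assuming the Hugging Face transformers package and torch are installed; GPT-2 and the US Constitution opening match the example above, but all specifics are illustrative:

```python
# Per-token surprisal under GPT-2 -- a minimal sketch, assuming the
# Hugging Face `transformers` and `torch` packages are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = ("We the People of the United States, in Order to form a more "
        "perfect Union, establish Justice, insure domestic Tranquility, ...")
ids = tokenizer(text, return_tensors="pt").input_ids  # shape: (1, n_tokens)

with torch.no_grad():
    logits = model(ids).logits  # shape: (1, n_tokens, vocab_size)

# Surprisal of token t is -log2 P(token_t | preceding tokens). The first
# token has no left context, so we score tokens 1..n against the model's
# predictions at positions 0..n-1.
logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
surprisal = -logprobs[torch.arange(ids.size(1) - 1), ids[0, 1:]]
surprisal = surprisal / torch.log(torch.tensor(2.0))  # nats -> bits

for token, s in zip(tokenizer.convert_ids_to_tokens(ids[0, 1:].tolist()),
                    surprisal):
    print(f"{token!r}\t{s.item():.2f} bits")
```

If the model has effectively memorized the passage, the printed per-token surprisal values drop toward zero after the first few tokens.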
How do you calculate surprisal for existing real-world texts? Current LLMs often recognize them after a few words, and then surprisal flatlines at zero. #llm #surprisal #psycholinguistics
Teaching evaluation for my intro stats course this summer: Close to perfect scores in all categories 😊
After 20+ years of laptops as my main computer, I'm back to a workstation. Really enjoying those 20 CPU cores :)
I'd like to sync my home directory on two computers (laptop for travel, desktop at the office). Is unison still the best solution for that? Anything that I need to pay particular attention to? Any pitfalls that I need to watch out for? Last time I used unison was 10+ years ago. #ubuntu #linux
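For reference, this is roughly the setup I have in mind, as a unison profile sketch; the profile name, host name, paths, and preferences are placeholders, not a recommendation:

```
# ~/.unison/home.prf -- hypothetical profile; host and paths are placeholders
root = /home/titus
root = ssh://office-desktop//home/titus

# Sync selected directories rather than the entire home directory:
path = Documents
path = Projects

# Exclude caches and temporary files:
ignore = Path .cache
ignore = Name *.tmp

# Ask before propagating changes; keep backups of overwritten files:
auto = false
backup = Name *
```

Invoked as `unison home` from either machine.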
My bank is replacing their so far excellent customer hotline with a chat bot. The result is exactly as infuriatingly bad as you might think. Complete disaster.

Fully-funded PhD position in experimental and/or computational psycholinguistics:

https://tmalsburg.github.io/job_ad_2025_phd.html

Application deadline is August 15.


The internal combustion engine is basically a metal box with fire in it. It's almost ridiculously easy to see the many downsides of this technology. Nonetheless, it has become one of the biggest success stories in tech. The reason is an endless stream of small optimizations and clever workarounds. The story of LLM-based AI will be similar: LLMs are a technology with many obvious flaws, but over time we will find workarounds and optimizations that make them useful and reliable.

Grok may be programmed to take Elon Musk's views into account:

https://www.heise.de/news/Kontroverse-Themen-KI-Modell-Grok-konsultiert-offenbar-Elon-Musks-Aeusserungen-10483586.html

AI: Grok apparently aligns its answers with what Elon Musk has said

The release of the new AI model from Elon Musk's xAI did not go smoothly. Now there are questions about how closely it has been tailored to him.

heise online