| Website | https://henryfarrell.net/wp/ |
| ORCID | https://orcid.org/0000-0003-0611-3949 |
@henryfarrell lays out a compelling metaphor in which Google AI is the 2020s' equivalent of Olestra.
https://www.programmablemutter.com/p/google-ai-fails-the-taste-test
Good point from @henryfarrell: AI summaries make bad search results *feel* much more like Google's fault, even if the underlying way they're obtained is not vastly different from before.
https://www.programmablemutter.com/p/google-ai-fails-the-taste-test
A few weeks ago, I wrote about a paper on online toxicity by @henryfarrell & Cosma Shalizi. Henry generously responded and pointed out where I had misunderstood the article.
In this post, I respond with clarifications on the idea of toxicity and the use of models.
I also reflect on the value of having a thoughtful, considered disagreement in public online — something that seems to have largely disappeared from my circles.
https://natematias.medium.com/disagreements-fast-and-slow-d0bc49ac9c3f
Great piece from @henryfarrell
"if you understand that AIs (or more precisely LLMs) rely on human generated knowledge, you begin to notice the actual struggles for power that are partly obscured by the rhetoric."
https://www.programmablemutter.com/p/the-political-economy-of-ai
This, by @henryfarrell, is very illuminating. #battleofthesexes