"Harari suggests computers might also misinterpret ethics in similar ways. Even a perfect ethical rule is useless if the AI can simply edit the definition of who the rule applies to, just as the Nazis did. Hence there’s a high risk of AI misinterpreting the goals we give it.
But then Harari goes on to explicitly “de-technologize” LLMs, referring to them as “alien intelligences” for the second half of the book. He writes, “As for the term ‘AI,’… it is perhaps better to think of it as ‘alien intelligence.’ As AI evolves, it becomes less artificial (in the sense of depending on human designs) and more alien” (229). In other words, Harari himself is rhetorically excluding AI from the “human” category, which is the exact move he warns against. This seems like a contradiction: Harari warns about the dangers of dehumanization while simultaneously de-technologizing AI by labeling it alien. By his own logic, this rhetorical move might set us up to dismiss or misunderstand these tools rather than engage with them as extensions of human intelligence.
Calling something an alien intelligence invokes Hollywood’s world-ending plots and seems like a scare tactic. In reality, LLMs are prediction machines built on troves of human input. They’re literally predicting the most likely next word based on patterns in human content, so in some ways, these prediction machines are more “aggregate human” than any single human. They’re hardly alien intelligences harboring some nefarious design to delude us toward ends of their own.
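To see why “prediction machine” is a fair description, here is a toy sketch of next-word prediction using simple bigram counts. This is a deliberate simplification (real LLMs learn neural network weights over subword tokens, not raw word counts), and the tiny corpus below is invented for illustration, but the core idea is the same: the model’s “knowledge” is nothing more than patterns aggregated from human-written text.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the troves of human text an LLM trains on.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which: a bigram model, the simplest
# possible form of next-word prediction.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" -- purely an echo of the human input
```

Nothing in this predictor has intentions or alien goals; its outputs are a statistical mirror of whatever humans wrote. Scaled up enormously, that is still the basic mechanism behind an LLM’s next-token prediction.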
In fact, many artists, writers, and other content producers complain that AI models train on their content and essentially replicate that training data in the models’ outputs. If AI’s output were truly alien, how could it constitute human creative theft?"
https://idratherbewriting.com/blog/review-harari-nexus-scms-and-alien-intelligence
Review of Yuval Noah Harari’s “Nexus” — and why we don’t need self-correcting mechanisms for “alien intelligence”
I just finished Yuval Noah Harari’s Nexus: A Brief History of Information Networks from the Stone Age to AI. The book provides a high-level analysis of information systems throughout history, with warnings about the dangers AI poses to today’s systems. It’s a remarkable book with many historical insights and interpretations that made history click for me. But the central idea of the book focuses on self-correcting mechanisms (SCMs) and how these SCMs are the linchpin of thriving democracies, so that’s what I’ll focus on in my review. The book also argues that AI is a form of alien intelligence that might pursue the goals we give it in ways we never intended.
