Your reminder that I've written a book on the business risks of generative AI: "The Intelligence Illusion".

https://illusion.baldurbjarnason.com/

Stuff I cover (not exhaustive):

- AGI is not happening any time soon
- AGI and anthropomorphism will cripple your ability to think clearly about AI
- The AI industry has a long history of snake oil and fraud
- These models copy more than you think
- Hallucinations are still a thing and aren't going away
- AI "reasoning" is quite broken
- Security is a shit show

The Intelligence Illusion (Second Edition): Why generative models are bad for business

Available in PDF and EPUB

@baldur I can't think of AI "reasoning" as anything more than the ability of humans to see patterns that aren't actually there.
The LLMentalist Effect: how chat-based Large Language Models rep… (Out of the Software Crisis)

The new era of tech seems to be built on superstitious behaviour

@gregeganSF @baldur And it's not "accidental". Since Eliza, chatbots have been trying to "pass the Turing Test" by tricking humans into falling for the con. Weizenbaum found people's tendency to anthropomorphize Eliza disturbing, but that didn't stop other researchers from enthusiastically adopting deception by design.

The result: chatbots that are designed to fool humans.