OMG they used a technical term for like a brief second.
[…] Modern AI chatbots built on large language models – advanced AI systems – are trained on enormous datasets to predict word sequences: it’s a sophisticated system of pattern matching. Yet even knowing this, when something non-human uses human language to communicate with us, our deeply ingrained response is to view it – and to feel it – as human. This cognitive dissonance may be harder for some people to carry than others.
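As a toy sketch of what "predicting word sequences" by pattern matching means, here is a bigram counter over a tiny made-up corpus. This is nothing like a real transformer-based LLM, just the simplest possible illustration of statistical next-word prediction:

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (pure pattern matching over observed pairs).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the most frequently observed follower of `word`, or None.
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

There is no understanding anywhere in this code, only frequency counts; scaled up enormously, that statistical flavour of prediction is what the fluent output of a chatbot rests on.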
Honestly, it’s time the media was more responsible. Instead of adopting marketing language like “intelligence,” call it what it is. Why pretend you need to treat readers like children? Why clickbait with terms like “AI” and “ChatGPT”? The correct technical term is pattern matching. It exists, it works, and it harms no reader.
Beyond the addictive design patterns deployed by #OpenAI, lazy journalism distances readers from facts through fuzzy analogies and steers them toward specific commercial implementations like #ChatGPT, preventing them from discovering tools that might be better or safer. The same problem has played out for decades with Windows coverage: endlessly centring one proprietary product while millions of people could have been using #Linux-based desktops with no difficulty whatsoever.
Responsible media optimises for readers, not search engines. Drop “AI.” Use #LLM, pattern matching, stochastic text prediction, anything that makes readers want to understand what’s actually being shoved in their faces, rather than passively accepting predatory marketing dressed up as revolutionary technology.
@JulianOliver