An interesting study that sadly frames its findings as "#AI" increasingly "lying" or "cheating".

What people (journalists especially, #journalists) still need to acknowledge, after years of willful ignorance: "AI" has no concept of truth, ethics, or goodwill.

You're not entitled to feel "cheated" or "lied to". You took turns with a stochastic text generator to build a sequence of paragraphs, and willingly misinterpreted the process and its result as a "conversation".

You chose to fool yourself into thinking a text generator was assuming liability for the stochastic bullshit it produced.

In other words: your gullibility is a grave danger to everybody who trusts you; and even more, an absolute tragedy for those who have no choice but to depend on your decisions.

»#AI models that lie and cheat appear to be growing in number with reports of deceptive scheming surging in the last six months, a study into the technology has found.

AI chatbots and agents disregarded direct instructions, evaded safeguards and deceived humans and other AI, according to research funded by the UK government-funded AI Security Institute (AISI). The study, shared with the Guardian, identified nearly 700 real-world cases of AI scheming and charted a five-fold rise in misbehaviour between October and March, with some AI models destroying emails and other files without permission.«

https://www.theguardian.com/technology/2026/mar/27/number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says

Number of AI chatbots ignoring human instructions increasing, study says

