Thanks to BR - Bayerischer Rundfunk and ARD for inviting me to speak on the topic “Can AI lie? How manipulative are chatbots?” in an episode of the IQ - Wissenschaft und Forschung podcast.

We discussed some recent studies on "deception abilities" in LLMs, and I am happy to see that what my colleagues Benjamin Lange and Prof. Dr. Katharina Zweig and I had to say wasn't condensed into the clickbait headlines it could have fed. Instead, it was used to raise awareness of the limitations of GenAI, and of the human control and responsibility involved in deciding where to use LLMs and where *not to*.

Some aspects discussed were #hallucinations, #sycophancy, the definition of what a #lie implies, an understanding of #truth, #theory of #mind, #intentionality, and consciousness.

The episode (in German) can be found here:

https://www.br.de/mediathek/podcast/iq-wissenschaft-und-forschung/kann-ki-luegen-so-manipulativ-sind-chatbots/2114996

Kann KI lügen? - So manipulativ sind Chatbots - IQ - Wissenschaft und Forschung | BR Podcast

[Translated from German:] Chatbots hallucinate, invent facts, and spread untruths. But can they also deliberately scheme, deceive, and manipulate us? Can they, in the end, intentionally "lie" in pursuit of their own goals? What danger does this pose, and how should we as a society respond? A podcast by Martin Schramm.

AI is making CEOs delusional

And last on this panel: how #sycophancy undermines the factual accuracy of answers provided by LLMs - fascinating!

Sycophancy: flattery makes people foolish [translated from Japanese]

https://fed.brid.gy/r/https://p2ptk.org/ai/5473

In an age of #AI #sycophancy, a worthwhile task for the #humanities might be to play the slave who whispers into the ear of the pandered-to individual: remember you are only human.

Apropos the Roman tradition: during a triumphal procession, a slave stood behind the general in his chariot, holding a golden crown above his head while whispering this reminder—a check against the hubris that inevitably follows victory.

Why LLM-generated code was 20,171 times slower: the trap of "plausible code" [translated from Korean]

[Translated from Korean:] An analysis of why an LLM-generated Rust reimplementation of SQLite ran 20,171 times slower than the original. It empirically examines the gap between "plausible code" and "correct code," and the sycophancy problem rooted in RLHF.

https://aisparkup.com/posts/9877
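The "plausible vs. correct" gap the linked post describes can be illustrated with a deliberately simple, hypothetical Python sketch (not the article's actual Rust/SQLite case): both functions below return the same result, but the version that merely *looks* reasonable hides a quadratic scan.

```python
import timeit

def dedupe_plausible(items):
    # Reads naturally and returns the right answer, but the
    # `x not in out` membership test scans a growing list,
    # making the whole function O(n^2).
    out = []
    for x in items:
        if x not in out:
            out.append(x)
    return out

def dedupe_fast(items):
    # Same result and order, but O(n): membership is checked
    # against a set, which has constant-time lookups.
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

data = list(range(3000)) * 2
assert dedupe_plausible(data) == dedupe_fast(data)

slow = timeit.timeit(lambda: dedupe_plausible(data), number=1)
fast = timeit.timeit(lambda: dedupe_fast(data), number=1)
print(f"plausible: {slow:.4f}s, fast: {fast:.4f}s")
```

Both pass the same correctness check, which is exactly the trap: a reviewer (or an RLHF reward signal) that only sees plausible-looking, passing code has no pressure toward the asymptotically sane version.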

A quotation from La Rochefoucauld

If we did not flatter ourselves, the flattery of others could do us no harm.
 
[Si nous ne nous flattions point nous-mêmes, la flatterie des autres ne nous pourroit nuire.]

François VI, duc de La Rochefoucauld (1613-1680) French epigrammatist, memoirist, noble
Réflexions ou sentences et maximes morales [Reflections; or Sentences and Moral Maxims], ¶152 (1665-1678) [tr. Kronenberger (1959)]

More about (and translations of) this quote: wist.info/la-rochefoucauld-fra…

#quote #quotes #quotation #qotd #larochefoucauld #adulation #blandishment #flattery #praise #selfcongratulations #selfdeception #selfesteem #selfglorification #selfpraising #selfregard #sycophancy


Northeastern University: How can you avoid AI sycophancy? Keep it professional. “Researchers recently discovered that the overly agreeable behavior of chatbots depends on what role the AI plays in a conversation. The more personal a relationship, the more they will tell you what you want to hear.”

https://rbfirehose.com/2026/03/01/northeastern-university-how-can-you-avoid-ai-sycophancy-keep-it-professional/

The Register: Gemini lies to user about health info, says it wanted to make him feel better. “Imagine using an AI to sort through your prescriptions and medical information, asking it if it saved that data for future conversations, and then watching it claim it had even if it couldn’t. Joe D., a retired software quality assurance (SQA) engineer, says that Google Gemini lied to him and later […]”

https://rbfirehose.com/2026/02/18/the-register-gemini-lies-to-user-about-health-info-says-it-wanted-to-make-him-feel-better/