New AI ethics scandal brewing... turns out a team at the University of Zurich ran dozens of undisclosed AI bot accounts debating with people on /r/ChangeMyView from November 2024 to March 2025 https://simonwillison.net/2025/Apr/26/unauthorized-experiment-on-cmv/
Unauthorized Experiment on CMV Involving AI-generated Comments

[r/changemyview](https://www.reddit.com/r/changemyview/) is a popular (top 1%) well moderated subreddit with an extremely well developed [set of rules](https://www.reddit.com/r/changemyview/wiki/rules/) designed to encourage productive, meaningful debate between participants. The moderators there just found …

Simon Willison’s Weblog
@simon Obviously, I think researchers should respect forum guidelines and I’m no zealot for AI. BUT I’m not sure I understand the reaction some are having that humans may have been persuaded by these chatbots and that would be terrible. Isn’t having an open mind the whole point of that forum? If someone presents a compelling argument that changes your mind, does it matter if they were human or AI? Books aren’t people, either, but they can expose new ideas and change minds, too.
@DavidAnson if someone gets their mind changed by a heartfelt personal story that was made up by an AI that invents family members that don't even exist, I think it's abhorrent

@simon I think I understand the visceral reaction, but is this really any different than watching a movie? What about a “based on a true story” movie? Is it wrong to be sad about Jack and Rose on the Titanic because those two exact people didn’t exist?

I don’t think it’s right or ethical to manipulate people (whether human or AI), but in a forum where people went specifically to be exposed to new ideas and were under no obligation to change their own? Now they’re mad they agreed with an idea??

@simon If a chatbot helps someone decide that racism is wrong, is that “abhorrent”? Not in my mind.

And if the opposite chatbot leads someone else to embrace racism, that’s on them, not the chatbot.

We all need to be resilient to evil ideas - from any source.

@DavidAnson "I don’t think it’s right or ethical to manipulate people"

That's exactly how this feels to me: automated manipulation of people via technology, under the guise of "research"

I mean, here's a bot lying about being "a public defender for 3 years" - the dishonesty here is spectacular https://www.reddit.com/r/changemyview/comments/1j6x38t/comment/mgscvib/?context=3

@simon Right, I agree the experiment has ethical issues. But I don’t really understand the objection from people in the forum who were exposed to new ideas and seem upset they came from an AI vs. a human.
@simon @DavidAnson Not to contradict you that lying is unacceptable, but the bot also makes a bad argument from personal experience. It wouldn't have mattered even if it were true.
@DavidAnson Whether AI or human wrote the fiction is separate from denying human subjects informed consent. Even so, I recall many felt upset on learning Jack and Rose were fictional characters inserted into a true story. At least at a cinema, your payment, voluntarily entering into a transaction with the cinema, implies consent. One strategy for ethical experiments is to at least obtain informed consent, while hiding the true nature of the experiment from subjects until afterward.
@DavidAnson @simon When you're watching a movie, *you know it's fiction*. These were bots falsely labeled as people -- that's the problem in a nutshell.
@rst @simon Was there *really* a boy who cried wolf, or was that fable fabricated? It’s a useful learning tool either way.