Researchers at the University of Zurich spammed the r/changemyview subreddit with AI-generated comments in an effort to prove that LLMs can be used to persuade people to change their views. They did so without permission from Reddit, the moderators of the subreddit, or the members who were emotionally manipulated through their interactions with the comments.

A few points to follow.

https://www.reddit.com/r/changemyview/comments/1k8b2hj/comment/mp4vgcm/

#Reddit #UniversityOfZurich #Research #OnlineCommunities #Forums #CMGR

1. Online communities are never fair game for emotional exploitation. One of the absolute worst sins you can commit in an online community is to lie about something that is deeply harmful or meaningful to an individual.

Example: One of the researchers' AI-generated comments pretended to come from a sexual assault victim.

2. Online communities are very open to research. But that begins with permission. I have approved and rejected many research requests in communities I have operated over the years.

3. Fake accounts are always fake accounts to me. Reddit itself was founded on bad principles here: Alexis Ohanian and Steve Huffman created fake accounts, pretended to be real people, and let others form relationships with those accounts so that they could profit.

I don't care whether it's AI or the site owner with 20 accounts; it's all dirty to me. When people make excuses for doing so, it is simply an effort to launder bad-faith behavior. Don't fool around with people.

4. This is a good overarching example of the biggest problem people have with LLMs: a lack of permission. It is pervasive in this space. Permission and opt-in are disregarded in the interest of expediency and profit.

Reddit, the company, has engaged in efforts like this by signing deals with AI-focused companies to train on Reddit data - with no participation from the community members who contributed that data.

5. Bad AI-related things are absolutely happening in many online communities. We should do what we can to limit them, but don't let it eat you up. Ultimately, the fault rests with the bad actors, not with the community operator doing their best. When you see it, do what you can. I think the mods of r/changemyview deserve credit here.

6. Manipulation in online communities has been a thing since the start. Government actors trying to sway public opinion, autocrats targeting the opposition, Scott Adams pretending to be a fan of Scott Adams - it has all been going on for a long time.

The only things AI changes are the believability, the quantity, and the speed at which this can happen, and it has impacted all three dramatically.

We're living through what I regard as a fairly dark time for online communities, but we have to do our best and focus on our goals and our members. We'll be alright. In many ways, we're the antidote.

Thanks to @chrispian for flagging this to me initially.

Reddit is now making "formal legal demands" against the researchers. Regardless of how clean Reddit's own hands are, that's probably a good thing.

https://www.404media.co/reddit-issuing-formal-legal-demands-against-researchers-who-conducted-secret-ai-experiment-on-users/

Another thing that I find odd about this research is that the outcome is fairly obvious. If humans can be persuasive, then AI that sounds like humans can be persuasive. I realize the point of research like this is to quantify, to have data to point to, but the reward here (confirming a predictable outcome) is low relative to the cost (harm to people).
One other thing I should have mentioned, from the article: "The University of Zurich told 404 Media that the experiment results will not be published and said the university is investigating how the research was conducted."