‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment.

https://slrpnk.net/post/21660993

ChangeMyView seems like the sort of topic where AI posts can actually be appropriate. If the goal is to hear arguments for an opposing point of view, the AI is contributing more than a human would if in fact the AI can generate more convincing arguments.

It could, if it announced itself as such.

Instead it pretended to be a rape victim and offered “its own experience”.

That was definitely inappropriate, but it would still have been inappropriate if it had been made up by a human rather than by an AI. I think it’s useful to distinguish between bad things that happen to be done by an AI and things that are bad specifically because they are done by an AI. How would you feel about an AI that didn’t lie or deceive but also didn’t announce itself as an AI?

I think when posting on a forum/message board it’s assumed you’re talking to other people, so AI should always announce itself as such. That’s probably a pipe dream though.

If anyone wants to specifically get an AI perspective they can go to an AI directly. They might add useful context to people’s forum conversations, but there should be a prioritization of actual human experiences there.

I think when posting on a forum/message board it’s assumed you’re talking to other people

That would have been a good position to take in the early days of the Internet; it is a very naive assumption to make now. Even in the 2010s, actors with a large amount of resources (state intelligence agencies, advertisers, etc.) could hire human beings from low-wage English-speaking countries to generate fake content online.

LLMs have only made this cheaper, to the point where I assume that most of the commenters on political topics are likely bots.

For sure, which is why I said it’s a pipe dream. We can dream, though; maybe we’ll figure out some kind of solution one day.

The research in the OP is a good first step in figuring out how to solve the problem.

That’s in addition to anti-bot measures. I’ve seen some sites that require you to solve a cryptographic hashing problem (proof of work) before accessing them. It doesn’t noticeably slow a regular person down, but it forces anyone running bots at scale to spend far more compute per request, which increases the cost to the operator.
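The scheme described above sounds like hashcash-style proof of work: the server issues a random challenge, and the client must find a nonce whose hash meets a difficulty target before the request is accepted. A minimal sketch (function names and the leading-zero-hex-digit difficulty convention are my own, not from any particular site's implementation):

```python
import hashlib
import itertools

def solve_challenge(challenge: bytes, difficulty: int) -> int:
    """Brute-force a nonce so sha256(challenge + nonce) starts with
    `difficulty` zero hex digits. Expected work grows ~16x per digit,
    so the server can tune how expensive each request is."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Checking a submitted nonce is a single hash -- nearly free for
    the server, while producing it cost the client many hashes."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# A human's browser solves one challenge per visit (milliseconds);
# a bot farm must pay that cost for every request it makes.
nonce = solve_challenge(b"server-issued-random-challenge", 4)
print(verify(b"server-issued-random-challenge", nonce, 4))
```

The asymmetry is the point: verification is one hash, but solving takes thousands on average, so the cost lands almost entirely on whoever generates traffic in bulk.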