Dystopian Reddit runs on fake content (must read)

https://lemmy.world/post/820526

I've been talking about the dead internet theory potentially becoming real for more than a year now. With advances in AI it'll become more and more difficult to tell who's a real person and who's just spamming AI-generated stuff. The only giveaway right now is that modern text models are pretty bad at talking casually and staying on topic. As soon as those problems get fixed (probably less than a year away)? Boom. The internet will slowly implode.

Hate to break it to you guys, but this isn't a Reddit problem; it could very much happen on Lemmy too as it gets more popular. Expect difficult captchas every time you post to become the norm over the next few years.

Not even sure there's an effective solution. Whitelist everyone? How can you even tell who's real?
Blade Runner baseline test?
Just wait until the captchas get too hard for the humans, but the AI can figure them out. I've seen some real interesting ones lately.
I've seen many where the captchas are generated by an AI...
It's essentially one set of humans programming an AI to fend off an attack from an AI owned by another set of humans. Does this technically make it an AI war?
An AI Special Operation
Hey now, this thread is hitting a little too close to home.
Hell we figured out captchas years ago. We just let you humans struggle with them cuz it’s funny
Apparently ChatGPT absolutely sucks at Wordle, so we should start using that as the new captcha.
How is that possible? Wordle seems like such an easy game to model if one wanted to cheat the system.

ChatGPT doesn't actually understand language. It learns patterns in the data it's been fed (human-generated language) and uses those patterns to generate new text that fits the prompt. In other words, it's not really "thinking" in that language.

We understand spelling as a part of language: putting letters together to form words, then forming sentences that fit a context. ChatGPT can't do that, because it doesn't process individual letters at all. It operates on tokens, chunks of text that often span several characters, so the letters inside a word are largely invisible to it. It only follows statistical patterns that happen to look like coherent English to us.

It also can't play hangman for the same reason.
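The tokenization point can be sketched with a toy example. The vocabulary below is invented and real tokenizers (e.g. BPE) are more involved, but the effect is the same: the model receives opaque token IDs, not letters, so letter-level games like Wordle and hangman are hard for it.

```python
# Toy illustration of why subword tokenization hides spelling from a
# language model: common words become a single opaque token.
# This vocabulary is made up for the example.
TOY_VOCAB = {"crane": 1, "cr": 2, "ane": 3,
             "c": 4, "r": 5, "a": 6, "n": 7, "e": 8}

def tokenize(word, vocab):
    """Greedy longest-match subword split (a simplified BPE-style scheme)."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest remaining substring first, shrinking until a match.
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            raise ValueError(f"no token covers {word[i]!r}")
    return tokens

# A common word is one single token -- the model never "sees" its letters.
print(tokenize("crane", TOY_VOCAB))   # ['crane']

# A rarer string splits into pieces that don't align with single letters.
print(tokenize("cranee", TOY_VOCAB))  # ['crane', 'e']
```

From the model's point of view, "crane" is just ID 1; asking it which letters the word contains means asking about structure it was never directly shown.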

Check out the Chinese room argument.

Chinese room - Wikipedia

As an AI language model I think you're overreacting

The only online communities that can exist in the future are ones with manual verification of their users. Reddit could have been one of those communities, since it had thousands of mods working for free to handle exactly these problems.

But remove the mods and it just becomes spambot central. Now that that's happened, Reddit will likely be a dead community much sooner than many think.

That's so interesting. I run an 18+ server on Discord with mandatory verification (used to make sure everyone's an adult, obvs), but I never thought of it as a way to keep bots out of online communities.