Reddit will tighten verification to keep out human-like AI bots
Violence for the sake of violence is not the same as violence to stop violence. Everyone knows killing an abusive parent is morally grey whereas killing an innocent child is absolutely wrong.
I recently got a comment removed on the Fed because a mod misunderstood my comment. They thought that someone who maliciously harms the innocent is themselves innocent, therefore invalidating my ideal. My bet is they didn’t know what malice means, or they genuinely think that hurting people with no provocation doesn’t make you a bad person.
Hah, yeah. One could say it’s a form of digital nepotism.
“Only our bots get to work here!”
It can be both. Reddit has a history of fabricating conversations. The way they sell advertising implies a certain level of engagement from their user base which can lead to bots pushing products in the form of reviews or by mention.
I think it’s worth noting that Reddit, at one time, did have third party bot protection; however, it only protected their advertising. I can only imagine what the rest of their traffic looks like, but I would not be surprised if they were using bots of their own.
Like you said, they can make some money selling your information, but they can also control the narrative however they choose.
No they won’t lol
If they didn’t do it already, it’s not gonna happen now. This is lip service to the shareholders
Is it? Reddit has been full of bots for years.
And if the r/CMV thing didn’t make it apparent, the recent wave of them weren’t “caught in the act” and only became a “problem” when the researchers announced their shitty experiment.
If realistic chat bots can fool the masses, why on earth would reddit get rid of them? It helps their metrics. “Look at how active our site is! Buy stock!”
I would like to remind you of two things. The first is that reddit used to have a mod tool called “BotDefense”. Its shutdown in July of 2023 directly led to a major uptick in spam bots.
The second is that part of the ad revenue is “impressions”. Impressions are just an account (bot or human) “viewing” the ad and they do not require a click-through. The platform hosting the ad still gets paid for those.
If you feed AI output back into AI, it makes the output worse, not better.
Yeh but feeding reddit user output into AI is part of the reason why AI is so confidently incorrect so often.
Bot activity brings “engagement”, which generates more page views from actual humans. Literally the only downside of this announcement is the optics of having bots in the first place.
The bots were already indistinguishable from humans on reddit, do you really think that this recent scrutiny is going to lead to fewer bots? Or might it actually lead to better bots?
when the researchers announced their shitty experiment.
Shitty experiment? On the contrary, it was an amazing experiment.
NSFW still works in old.reddit as a non-user.
Or so a friend tells me
What, and destroy 90% of their traffic?!?
“That’s a bold move, Cotton. Let’s see if it pays off for ’em.”
AI bots.
Not the rest of them.
Closing the gate after the wolves have got in, eaten all the sheep, had a nice rest, and then left. And then about 30 years passed.
Sure, maybe small trees are now growing in the now-ungrazed pasture, but I guess late is better than never.