ChatGPT’s user base shrank after OpenAI censored harmful responses
Data shows ChatGPT use decreased by nearly 10 percent from May to June.
TL;DR: "...tools like ChatGPT now have guardrails in place that limit chatbots from responding to prompts with problematic content like misinformation, harmful instructions, biased viewpoints, or hateful content."
@Enema_Cowboy @arstechnica yeah I was being /s in the post, I understand the words "problematic" and "harmful" as to their agenda, not to the individual user
though, using chatgpt at all is dangerous because it's drinking up chats into training data, and who knows what else behind the scenes
@arstechnica The guardrails are just annoying. A large portion of the prompts you provide now gets boilerplate disclaimers or refusals, even with goofy prompts like "How do you defeat [silly monster] with [silly weapons]?"
I'm hoping that the edgy behavior from users and the overreaction from the company will lessen as people become more accustomed to this more sophisticated autocomplete engine.
@arstechnica
ChatGPT use shrank after schools let out, users discovered ChatGPT lies, employers banned its use, etc., as the WaPo source article says.
Those *may* be factors in the usage drop and not just correlations.
The language of your Masto post implies a cause-and-effect relationship based solely on the harmful-response blocks (though the article headline is not much better).
“ChatGPT’s user base shrank after OpenAI censored harmful responses”
—AND—
Everyone born in New England in 1776 died after Julius Caesar crossed the Rubicon.
“ChatGPT’s user base shrank after OpenAI censored harmful responses”
—AND—
Many puppies died after you wrote this headline.