OpenAI says its new model GPT-2 is too dangerous to release (2019)

https://slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html

When Is Technology Too Dangerous to Release to the Public?

If recent history is any indication, trying to suppress or control the proliferation of A.I. tools may be a losing battle.

Slate
I think they were unintentionally right. The growing amount of low-quality content everywhere could become a real problem.

They were more than right; they were correct intentionally and precisely. This is what OpenAI actually stated[0]:

> These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns.

> The public at large will need to become more sceptical of text they find online, just as the "deep fakes" phenomenon calls for more scepticism about images.

It ended up just like that.

[0]: https://metro.co.uk/2019/02/15/elon-musks-openai-builds-arti...

Elon Musk-founded OpenAI builds AI so powerful it must be kept locked up for the good of humanity

It's feared the machine brain could do a huge amount of damage if it escaped its confines and ran riot across the internet.

Metro
Yeah, I find it a bit odd how at the time everyone was pointing and laughing at OpenAI for being obviously wrong about this. Now in 2026, AI slop is very obviously a serious problem: it inundates all platforms and obscures the truth. And people are still saying OpenAI was wrong in 2019?
It's this crowd having it both ways. The default desire is to dunk on AI, however inconsistent the arguments.
I think people today are more focused on the fact that OpenAI went on to release a model it had called "too dangerous to release", not on whether the warning was right or wrong, as part of the general trend of criticizing OpenAI for not following any of its stated principles.

Both crowds are right because two messages were spread. The researchers spread reasonable fears and concerns. The marketing charlatans like Altman oversold the scare as "Terminator in T-4 days" to imply greater capacity in those systems than was reasonably there.

The problem is that the most publicly disseminated messaging around the topic was the fear-mongering "it's god in a box" variety. Can't argue with the billions in funding secured via pyramid scheme for the current GPU bonfire, but people are right to ridicule, while also right to point out that the warnings were reasonable. Both are true; it depends on which face of "OpenAI" we're talking about: the researchers or the marketing chuds.

Ultimately, AGI isn't something anyone with serious skill or experience in the field expects of a transformer architecture, even one scaled to a planet-sized system. It is an architecture that simply lacks the required inductive bias. Anyone who claims otherwise is a liar or a charlatan.