OpenAI says its new model GPT-2 is too dangerous to release (2019)

https://slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html

When Is Technology Too Dangerous to Release to the Public?

If recent history is any indication, trying to suppress or control the proliferation of A.I. tools may be a losing battle.

Slate

Someone needs to make a compilation of all these classic OpenAI moments, including hits like: GPT-2 is too dangerous to release, the 64x64 image model DALL-E is too scary, "push the veil of ignorance back", AGI achieved internally, Q*/strawberry can solve math and is making OpenAI researchers panic, etc. etc.

I use Codex btw, and I really love it. But some of these companies have been overhyping the capabilities of these models for so many years now that it's both funny to look back on and tiresome to keep hearing.

Meanwhile I am at wit's end: none of Codex GPT-5.4 on Extra High, Claude Opus 4.6-1M on Max, Opus 4.6 on Max, or Gemini 3.1 Pro on High has been able to solve a very straightforward, basic UI bug I'm facing. To the point where, after wasting a day on this, I am now just going to go through the (single) file of code and fix it myself.

Update: some 20 minutes later, I have fixed the bug, despite not knowing this particular programming language or framework.

> I am now just going to go through the (single) file of code and fix it myself.

That's front page news, in this era.

Thank you for the belly laugh.