OpenAI says its new model GPT-2 is too dangerous to release (2019)
https://slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html
Someone needs to make a compilation of all these classic OpenAI moments. Including hits like GPT-2 too dangerous, the 64x64 image model DALL-E too scary, "push the veil of ignorance back", AGI achieved internally, Q*/strawberry is able to solve math and is making OpenAI researchers panic, etc. etc.
I use Codex btw, and I really love it. But some of these companies have been overhyping the capabilities of these models for years now, to the point that it's both funny to look back on and tiresome to still keep hearing.
Meanwhile, I am at my wit's end after NONE of Codex GPT-5.4 on Extra High, Claude Opus 4.6-1M on Max, Opus 4.6 on Max, or Gemini 3.1 Pro on High was able to solve a very straightforward, basic UI bug I'm facing. To the point where, after wasting a day on this, I'm now just going to go through the (single) file of code and fix it myself.
Update: some 20 minutes later, I have fixed the bug, despite not knowing this particular programming language or framework.
> a very straightforward and basic UI bug
Show us the code, or an obfuscated snippet. A common problem with coding-agent-related posts is that the described experiences come with no accompanying context, so readers have no way of knowing whether the fault lies with the model, the task, the company, or even the developer.
Nobody learns anything without context, including the poster.