OpenAI says its new model GPT-2 is too dangerous to release (2019)
https://slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html
They were more than right; they were right in a precise, deliberate way. Here is what OpenAI actually stated[0]:
> Synthetic imagery, audio, and video imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns.
> The public at large will need to become more sceptical of text they find online, just as the "deep fakes" phenomenon calls for more scepticism about images.
And that is exactly how it turned out.
[0]: https://metro.co.uk/2019/02/15/elon-musks-openai-builds-arti...
Both crowds are right, because two different messages were spread. The researchers voiced reasonable fears and concerns. The marketing charlatans like Altman oversold the scare as "Terminator in T-minus-4 days" to imply greater capability in those systems than was reasonably there.
The problem is that the most widely disseminated messaging on the topic was the fear-mongering "it's god in a box" variety. You can't argue with the billions in funding secured, heisted via pyramid scheme, for the current GPU bonfire, but people are right to ridicule it, while also right to point out that the warnings were reasonable. Both are true; it depends on which face of "OpenAI" we're talking about, the researchers or the marketing chuds.
Ultimately, AGI isn't something anyone with serious skill or experience in the field expects from a transformer architecture, even one scaled to a planet-sized system. It is an architecture that simply lacks the required inductive bias. Anyone who claims otherwise is a liar or a charlatan.
The fact that they knew they were shitting in the public well and did it anyway pisses me off. What colossally selfish assholes.
Hang them all.