OpenAI says its new model GPT-2 is too dangerous to release (2019)

https://slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html

When Is Technology Too Dangerous to Release to the Public?

If recent history is any indication, trying to suppress or control the proliferation of A.I. tools may be a losing battle.

Slate
I think they were unintentionally right. The growing amount of low-quality content everywhere could become a real problem.

Now imagine all that low-quality AI slop being posted online: a new generation of AI will "learn" from it, output its own version of AI slop, which will eventually end up online again for yet another generation of AI to "learn" from.

Something, something, idiocracy comes to mind.

> Something, something, idiocracy comes to mind.

So, confirmation? They are catching up quickly!

The reality is, anyone with pre-slop data still has their pre-slop data. And there are endless ways to get more value out of good data.

Bootstrapping better performance by using existing models to down-select data for higher density and median quality, or leveraging recognizably lower-quality data as negative examples to reinforce doing better. Models critiquing each other, so the baseline AI behavior improves and, in the process, they also create better training data. And a thousand more ways.
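The down-selection idea can be sketched in a few lines. A minimal, hypothetical example: score each document with a quality model and keep only the top fraction, raising the median quality of the resulting training set. Here `quality_score` is a crude stand-in heuristic (an assumption for illustration); in practice it would be a learned classifier or an existing model's judgment.

```python
def quality_score(text: str) -> float:
    """Stand-in for a learned quality model: treats longer average word
    length and fewer repeated words as a proxy for 'denser' content."""
    words = text.split()
    if not words:
        return 0.0
    avg_word_len = sum(len(w) for w in words) / len(words)
    uniqueness = len(set(words)) / len(words)
    return avg_word_len * uniqueness

def down_select(corpus: list[str], keep_fraction: float = 0.5) -> list[str]:
    """Keep the top-scoring fraction of documents."""
    ranked = sorted(corpus, key=quality_score, reverse=True)
    cutoff = max(1, int(len(ranked) * keep_fraction))
    return ranked[:cutoff]

corpus = [
    "the the the the the the",
    "Bootstrapping models on curated data compounds quality gains",
    "lol lol lol lol",
    "Careful filtering preserves signal while discarding repetitive noise",
]
kept = down_select(corpus, keep_fraction=0.5)
```

The same ranking signal could be inverted to harvest the low-scoring documents as negative examples, per the comment above.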

Managed intelligently, intelligence wants to compound.

The difference between human and AI idiocracy is that we don't delete our idiots. I am not suggesting we do that. But maybe we shouldn't elect them. Either way, that is one more very steep disadvantage for us.