My alignment is bigger than yours.
https://openai.com/blog/introducing-superalignment
Introducing Superalignment

We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. To solve this problem within four years, we’re starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we’ve secured to date to this effort. We’re looking for excellent ML researchers and engineers to join us.

@mmitchell_ai slapping the word super onto an existing buzz/hype term really is the laziest possible effort to intellectually define an area of work

@natematias @mmitchell_ai also, even if you grant all the various questionable premises, the idea that this issue is one you can just “solve” is… spectacular.

It’s like Hammurabi inscribing the code and then telling his advisers “great, bad action is solved, let’s move on to other problems”. (3750 years of legal systems ensue)

@luis_in_brief @natematias @mmitchell_ai gross. And that they frame such things explicitly as if it were a “problem” (i.e. solvable) is insidious and at the same time, so typical. It’s a symptom of a way of thinking that is incapable of paying back the ethical debts owed to this planet and its societies.

A classic case of #problemism. I wrote about that here https://doi.org/10.7551/mitpress/14668.003.0009

Problemism: The Insolvency of Computational Thinking

MIT Press