@dataKnightmare https://openai.com/blog/our-approach-to-alignment-research ["Unaligned AGI could pose substantial risks to humanity" is enough, I think, #sipario]
Our approach to alignment research

We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alignment problems.