This piece by @SashaMTL is very good, and it connects some things I've been thinking about around possible regulatory principles and what "AI alignment" could mean (as distinct from how the term is often used at present).
What if we only deploy AI systems that are demonstrably fit for purpose?
For one thing, it would require us to know what on earth the purpose actually is. But "alignment" looks very different when the standard is evaluation against specified criteria for a particular task, rather than some open-ended notion of aligned behavior.
https://www.wired.com/story/the-call-to-halt-dangerous-ai-research-ignores-a-simple-truth/