"AI Safety" is so bizarre.

It's as if cigarette companies had themselves sounded the alarm about the dangers of smoking.

And then made it their core mission to make smoking safe, insisting on steering public discourse toward the dangers of smoking.

And to drive the point home they started releasing transparent progress reports showing how nothing they try seems to help, wondering out loud whether perhaps these dangers are actually *inherent* to smoking.

And then everyone just shrugs.

To be clear, whether the "dangers" of AI are "real" or not is irrelevant to this observation.

Also, I am using "AI Safety" here not only in the traditional "will it kill us all" sense, but also in the very practical "security problems that are present *today*" sense, *and* in the equally practical "social problems that are already present *today*" sense.

All three of these are treated very similarly: acknowledged without equivocation, and then... ignored?

This behavior is certainly understandable from the perspective of the AI companies. There's broad agreement that all this performative "public worrying" serves a variety of ulterior motives (everything from indirectly impressing everyone with how "powerful" these things are, to scaring politicians into regulating in their favor). What's more confusing is that everyone on the outside either doesn't see through this or, if they do believe it, doesn't act on it.