"AI Safety" is so bizarre.

It's as if cigarette companies had themselves sounded the alarm about the dangers of smoking.

And then decided to make it their core mission to make smoking safe, insisting on steering public discourse toward the dangers of smoking.

And to drive the point home they started releasing transparent progress reports showing how nothing they try seems to help, wondering out loud whether perhaps these dangers are actually *inherent* to smoking.

And then everyone just shrugs.

To be clear, whether the "dangers" of AI are "real" or not is irrelevant to this observation.

Also, I am using "AI Safety" here not only in the traditional "will it kill us all" sense, but also in the very practical "security problems that are present *today*" sense, *and* the also very practical "social problems that are already present *today*" sense.

All three of these are treated very similarly: acknowledged without equivocation and also... ignored?

This behavior is certainly understandable from the perspective of AI companies. There's some agreement that all this performative "public worrying" is being used for a variety of ulterior motives (everything from indirectly trying to impress everyone with how "powerful" these things are, to trying to scare politicians into regulating in their favor). What's more confusing is how everyone on the outside either fails to see through this or, if they do believe it, fails to act on it.
@tolmasky I think of the "AI Safety" talk as primarily a marketing push by these companies to make their products seem far more powerful than they really are. In cigarette terms it'd be putting nicotine ratings on the pack with a gendered subtext of "are you *MAN* enough" along with the health warnings.

@tolmasky
Watch this funny discussion ;)

https://youtu.be/oI-AoBcfo8I

Is AI an Existential Threat? LIVE with Grady Booch and Connor Leahy.

@tolmasky It's marketing. People (especially young people) still pick up smoking, and keep smoking, even while being fully aware of the dangers. Tobacco companies know that and take full advantage. It's the same with AI.
@tolmasky @toxi they’ve learned from cigarettes that even if they don’t, someone will talk about the dangers, and decided to co-opt it to control it, "safe"-washing it.
@tolmasky @toxi but they do end up firing safety researchers who refuse to go along…

@tolmasky

Trying their damndest to make the narrative "we might make a superintelligence that will destroy humanity" so we all forget about "we gave the military a spicy autocomplete that they can try to somehow use to justify war crimes"

@tolmasky
They may not have warned the public, but they did poll doctors for cigarette recommendations.

What concerns me the most is whether all this AI Safety talk is performative, so that we feel like they have it covered and we shouldn't worry.