RE: https://mastodon.social/@verge/116408411824430108

"Despite Apple’s approval and xAI’s claims it has tightened safeguards, Grok still appears to be able to generate sexualized deepfakes with relative ease. Cybersecurity sources told me they have been able to create explicit images of celebrities and political figures using the tool, and I have been able to produce similar images of myself and other consenting adults. NBC also reported similar findings yesterday."

So Apple's let a CSAM generator slide on the App Store for four months

@stroughtonsmith Any doubt it'll continue to do nothing about it? If they act now, the repercussions might even be worse
@stroughtonsmith Musk will face no consequences until the cultural pendulum swings again. But it will, and he will.