A nightmare scenario for generative AI: images of child sex abuse. Disturbing, fast-multiplying, hard to trace, and a major hindrance to real-world investigations of heinous crimes:

https://www.washingtonpost.com/technology/2023/06/19/artificial-intelligence-child-sex-abuse-images/

AI-generated child sex images spawn new nightmare for the web

Investigators say AI-generated child sexual abuse images are simple to create, difficult to track and take time away from finding victims of real-world abuse.

The Washington Post

@drewharwell These harms were foreseeable. People were fired for pointing out this technology was dangerous.

OpenAI pretends their hands are clean and that they couldn't have anticipated this use. But they could have anticipated it; they just decided that releasing their toy into the wild was more important than taking safety seriously.

Also foreseeable: they will proceed with the same level of recklessness going forward unless they face real accountability.

@drewharwell
Calling it now: someone will post Biden-related CSAM on Twitter a month before the election; it will trend and go unremoved for at least 12 hours, and Elon will declare it real and retweet every fascist nutjob calling for Biden to be jailed.

Bonus call: there will be massive telltale signs it's fake, but the NYT front page will still read: "Leaked Biden CSAM. Was Q Right? We Asked Former General Michael Flynn."

@Beeks @drewharwell
I imagined a different, much more tame October Surprise: https://www.superversive.co/blog/machine-augmented-humanity
Machine Augmented Humanity — Superversive
