“CSAM generated by AI is still CSAM,” DOJ says after rare arrest

https://lemmy.world/post/15665099


Then we should be able to charge AI (or rather, its developers) with the same disgusting crime, and shut AI down.
Camera-makers, too. And people who make pencils. Lock the whole lot up, the sickos.

Camera makers and pencil makers (and the users of those devices) aren’t building massive server farms that vacuum up every drop of information they can get hold of.

If AI has the means to generate inappropriate material, then that means the developers have allowed it to train from inappropriate material.

Now if that’s the case, where did the devs get the training data? 🤔

If AI has the means to generate inappropriate material, then that means the developers have allowed it to train from inappropriate material.

That's not how generative AI works. It's capable of creating images that include novel elements that weren't in the training set.

Go ahead and ask one to generate an image from a bonkers description that doesn't appear anywhere in its training data, and there's a good chance it'll make one for you. The classic example is the "avocado chair": an early image generator produced many plausible images of one despite having been trained only on images of avocados and of chairs. It understood the two general concepts and figured out how to meld them into a single depiction.
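To make that concrete, here's a minimal sketch of prompting a text-to-image model for exactly this kind of novel combination. It assumes the Hugging Face diffusers library and a public Stable Diffusion checkpoint; the original avocado-chair demo was OpenAI's DALL-E, but any text-to-image model exhibits the same compositional behavior.

```python
# Minimal sketch: ask a text-to-image model for a concept combination
# that almost certainly never appears as-is in its training set.
# Assumes the Hugging Face `diffusers` library and a public Stable
# Diffusion checkpoint; any text-to-image checkpoint would do.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The model has seen avocados and it has seen chairs; the prompt asks it
# to merge the two concepts into an image it was never trained on.
image = pipe("an armchair in the shape of an avocado").images[0]
image.save("avocado_chair.png")
```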

Yes, I’ve tried similar silly things. I asked an AI to render an image of Mr. Bean hugging Pennywise the clown, and it delivered: something randomly silly-looking, but not far off base.

But when it comes to inappropriate material, the AI shouldn’t be able to generate any such thing in the first place unless the developers have allowed it to train on inappropriate sources…

The trainers didn't train the image generator on images of Mr. Bean hugging Pennywise, and yet it's able to generate images of Mr. Bean hugging Pennywise. Yet you insist that it can't generate inappropriate images without having been specifically trained on inappropriate images? Why is that suddenly different?
The trainers taught it what Mr. Bean looks like and what Pennywise looks like; it took those concepts and combined them to create your image. To make CSAM it was, unfortunately, trained on CSAM: …stanford.edu/…/investigation-finds-ai-image-gene…
Investigation Finds AI Image Generation Models Trained on Child Abuse

3,226 suspected images out of 5.8 billion. About 0.00006%. And probably mislabeled to boot, or it would have been caught earlier. I doubt it had any significant impact on the model's capabilities.
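For scale, a quick sketch checking that ratio; the 3,226 and 5.8 billion figures are the ones quoted above (5.8 billion being roughly the size of the LAION-5B dataset the Stanford investigation examined):

```python
# Quick arithmetic check on the ratio quoted above:
# 3,226 suspected images out of ~5.8 billion training images.
suspected = 3_226
total = 5_800_000_000

fraction = suspected / total
print(f"{fraction:.10f}")        # 0.0000005562
print(f"{fraction * 100:.5f}%")  # 0.00006% (rounded)
```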