As many suspected:

“Midjourney Founder Admits to Using a ‘Hundred Million’ Images Without Consent”

https://petapixel.com/2022/12/21/midjourny-founder-admits-to-using-a-hundred-million-images-without-consent/

@Riedl CLIP did too though.
@moultano they all did as far as I know, but some are more secretive than others so confirmation is good to report.
@Riedl @moultano It is funny to see OpenAI just not mention it and reap the benefits. I've had people sincerely praise OpenAI for how it treats artists relative to Stability, which is imo just a hilarious PR gap.
@moultano I'm getting probably close to using 1 million images with my brain, personally. Is that better or worse?
@Adverb I don't know. I'm mostly staying out of the ethics of all this because I don't want to be subpoenaed in some future litigation against my employer, and because I find it all genuinely confusing.
@Adverb Every analogy people make requires AI art to be "like" something else, and we're just arguing about which thing it's "like." But I don't think it's like anything else.

@moultano it's fair to say that analogy is not exact here (Dryhurst talks about this too), though I think the principle is the same on many fronts.

The scale could never be ofc.

@Adverb I would be happier if the models trained with differential privacy. I think that would be closer to the norms of inspiration that artists expect from each other.

@moultano
To the end of preventing memorization?

I feel like it's already gone beyond what human artists offer in some ways, by having a similarity-queryable dataset, if people are worried.

And that post by OpenAI on deduplication and (ironically) the paper on Stable Diffusion's regurgitation, which notes the ImageNet LDM shows no significant memorization, make me very not-worried about memorization.
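The deduplication mitigation mentioned here boils down to flagging near-duplicate training images via the similarity of their learned embeddings, since duplicated images are the main driver of memorization. A toy sketch, using random vectors as stand-ins for CLIP-style image embeddings (the dimensions and threshold are illustrative assumptions, not values from OpenAI's pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for image embeddings (e.g. from CLIP): 8 vectors of dim 16,
# with item 5 deliberately made a near-copy of item 2.
emb = rng.normal(size=(8, 16))
emb[5] = emb[2] + 0.01 * rng.normal(size=16)

# Normalize so dot products are cosine similarities.
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
sim = emb @ emb.T

# Flag pairs above a similarity threshold (excluding self-similarity).
threshold = 0.95
dup_pairs = [
    (i, j)
    for i in range(len(emb))
    for j in range(i + 1, len(emb))
    if sim[i, j] > threshold
]
print(dup_pairs)  # the planted near-duplicate pair: [(2, 5)]
```

In a real pipeline the all-pairs comparison would be replaced by approximate nearest-neighbor search or clustering, but the flag-and-drop logic is the same.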

@Adverb @moultano I should already know those, but if you wanted to toss in a couple of links I would bookmark them.

@TedUnderwood @moultano definitely: https://openai.com/blog/dall-e-2-pre-training-mitigations/
This one is a big deal!!!

https://arxiv.org/abs/2212.03860
And this one is frustrating but ironically reassuring, given that the dataset seems to be the primary problem/mitigator.


@TedUnderwood @moultano They LITERALLY CANNOT DETECT REPLICATION WITH IMAGENET!!! (Pardon my screaming.)

But nobody bothers reading the paper 🙃

@Adverb @moultano @TedUnderwood Yes, the framing in the abstract and intro doesn’t really match the results.

@lowd @moultano @TedUnderwood Yeah! Not to mention the model card for stable diffusion explicitly states this issue!

But due to this paper, even many people think this was some hidden secret. I was having to argue over this with an ML industry person just yesterday.

@TedUnderwood @moultano DALL-E2 usage also reassures me a ton here.

Otoh I'd be down to see people do differential privacy here: it just feels like "enough" will never happen on the data-sourcing side :/

@Adverb @moultano Yes, actually I do remember reading that paper and thinking "hmm — if I'm reading this rightly it's only a problem for small models." But no one talked about it that way so I thought I was crazy.
@TedUnderwood @moultano at this point it's a tidal wave of "For real?" wherever I go :(
@Adverb @moultano And then in the last section they introduce a concept of "style copying" which seems to muddy the waters rather a lot.
@TedUnderwood @moultano Yeah, I have viewed LAION too many times to buy that one as a fair standard for any entity.
@Riedl well if it was only a hundred million…
@Riedl *pretends to be shocked*
@Riedl isn’t that the same as reading all the books in the world in order to become able to write? Would we call that copyright infringement, too? Or is the AI actually using parts of the art it trained on in its creations?

@danvanmoll
That is a plausible argument that sort of aligns with how the models work (though it anthropomorphizes them as well). I think that most generated art will not rise to the level of copyright infringement. At the same time, artists are justified in their anger when their art is used without permission.

IMO we need to separate the implications of input/training and implications of output/generation.

@Riedl Suspected? This has always been out in the open. All the big image datasets contain copyrighted images that permission was never given for, even those that try not to.

All the big AI companies think this is fair use. Even the big owners (Disney etc.) seem to not want to question that.

@Riedl I think it might help if more people understood exactly how generative AI works?

This seems like a pretty good primer...

https://i.redd.it/2f00l6vsso6a1.jpg