AIs aren't sentient. They can't "steal."

Programmers and institutions select the data with which to train the model. They take art and writing from artists and authors without credit or payment. The software then remixes and mimics what it is given.

Displacing agency by attributing intent to the AI is exactly how people and institutions erase human action in the creation of technology. It also leads to further perceptions of technology as acultural, unbiased, and, in essence, magical.

@Manigarm This is an interesting point, and certainly correct.

It's also exactly how humans learn to become artists and writers - by studying, mimicking, and eventually adding to the existing body of work. We don't generally consider that theft, unless the copying is exact or deceptive.

Yet AI feels somehow different, much more like plagiarism. Perhaps it's that the ONLY input an ML system has is others' art, with no real-world human experience of its own to contribute.

@Manigarm I think part of it is that we expect art and literature to have a creator, an actual person whose work expresses a human point of view, one that encompasses something beyond the literal work itself. By lacking an author who stands behind it, is AI-generated art somehow inherently fraudulent? Maybe.
@Manigarm Is the person who runs an AI-based art generator and selects which ones are "good" any less an artist than Duchamp with his readymades?
@mattblaze
I think the labor and value-accrual/ownership analyses end up being much more useful than debating whether or not the outputs are art.
@Manigarm @mattblaze
@dymaxion @Manigarm They're related. Consider the two main ways (visual) artists are employed: as illustrators-for-hire (e.g., by publications) and as fine artists collected by rich investors. For the former, AI systems (currently) produce nice-looking illustrations, but they require extensive selection before yielding one that's an exact fit; it's probably cheaper and faster to just hire an artist. But for the latter, "Is it art?" is a central issue for collector value.

@mattblaze
Honestly, a lot more artists do small-scale sales in middle-class contexts than do sales to the rich, and in that context, their work is bought for a more utilitarian understanding of decorativeness + meaning, much closer to the illustration case. But what I mean around labor and value-accrual is on the other end. None of these systems work without the training data, and the outputs are derivative works that contain compressed versions of that training data, yet the value accrues entirely to an intermediary. Artists have the right to set their own rates for derivative-works licensing. I'm not usually an IP maximalist, but there's a meaningful distinction between access to culture on an individual basis and the creation of a system intended to evolve to a point where the work of the people its creators are stealing from is entirely replaced.

Something entirely based on theft from other people cannot be art. An original, tuned prompt absolutely can have artistic merit, but the artistic merit and "artness" of the result rests almost entirely on a) the work of the ML engineering team, and b), much more heavily, the source material. Now, there's of course a long tradition of artists working with general-purpose software and having that software considered a tool or at most a medium (e.g., with reactive projection-mapping installations and TouchDesigner or Max), so we can set the first aside. However, the second is and always will be an intrinsic component, massively more important than anything involved in prompt engineering. The prompt engineering has contributed almost none of the art and should receive almost none of the value.

It would be perfectly possible, if any of these companies cared, to create a fair licensing structure and build modeling systems that could provide a proportional attribution distribution across the source material from which each prompt's output is derived, paid out in accordance with derivative-work prices determined by the rights holders. Until they do this, it's nothing but theft.

@mattblaze
And yes, it's unclear whether IP law, created and maintained as a tool to make money flow toward capital and as often as not weaponized against individual artists these days, will support this understanding. Certainly, the ML image generators are operating in the fine Valley tradition of "ignore the law until we're big enough to buy the laws we need." It's unreasonable to pretend that this is acceptable conduct, or to evaluate the product as though it were not created in a context of theft by the ML data-collection teams.
@dymaxion I agree that The Valley’s long history of rights-trampling bad behavior should absolutely make us especially skeptical here, and I think that was part of the original poster’s (quite valid) point. But even putting the equity issues aside, that still leaves fundamental questions of how we should approach machine-generated art. Assume for a moment all the input is public domain. Can the output be considered original? How can artists use this? Etc.
@dymaxion I’m not saying the equity questions aren’t important or worth exploring. Only that these systems also expose and amplify other vital, and fundamentally deep, questions of how we analyze (and what we value about) human creativity.