AIs aren't sentient. They can't "steal."

Programmers and institutions select the data with which to train the model. They take art and writing from artists and authors without credit or payment. The software then remixes and mimics what it is given.

Displacing agency by attributing intent to the AI is exactly how people and institutions erase human action in the creation of technology. It also leads to further perceptions of technology as acultural, unbiased, and, in essence, magical.

@Manigarm This is an interesting point, and certainly correct.

It's also exactly how humans learn to become artists and writers - by studying, mimicking, and eventually adding to the existing body of work. We don't generally consider that theft, unless the copying is exact or deceptive.

Yet AI feels somehow different, much more like plagiarism. Perhaps it's that the ONLY input an ML system has is others' art, with no real-world human experience of its own to contribute.

@Manigarm I think part of it is that we expect art and literature to have a creator, an actual person whose work expresses a human point of view, one that encompasses something beyond the literal work itself. By lacking an author who stands behind it, is AI-generated art somehow inherently fraudulent? Maybe.
@Manigarm Is the person who runs an AI-based art generator and selects which ones are "good" any less an artist than Duchamp with his readymades?

@mattblaze @Manigarm I think photography is another useful reference--the photographer doesn't create the imagery they capture from nothing, but they choose what to photograph, tuning parameters of the camera (which they probably didn't build either), making tweaks to an image after the fact, etc.

We don't have as much trouble assigning a creator in those instances--are the AI designers like the camera makers? are the images input into the model like the made objects that appear in a photo?

@zalcarik @Manigarm Yes, I think photography is a good example. When I make a photograph (and I use the word "make" quite deliberately instead of "take"), I'm trying to produce art. We can argue about whether it's good art or bad, but there's no longer any serious question, here in the 21st century, that photographs made with intention can be art.

Why is selecting the input and curating the output of an AI system any different?

@zalcarik @Manigarm @mattblaze because the input sources are different - with a camera, you have to find or make something to aim the camera at; with AI generation, you’re amalgamating the set of training images (which were already found and/or made by other people). If you want to treat them similarly, AI generation should follow the same rules about including others’ work in the training set, and at best we don’t have documentation of that being the case
@ShadSterling @mattblaze Suppose I want a photograph of, say, a mountain framed by tree branches. I could look at parks on google, see other people have taken such photographs at a specific place, go to the park, and take my own photograph. I had to find something, sure, but I used other people to do that--nothing I found hadn't, in its general nature, been found before. My photograph will still be different, influenced by my own actions but also the random vagaries of nature.
@ShadSterling That strikes me as fairly analogous to asking the AI for a "picture of a mountain framed by tree branches"--what it produces will be influenced by what came before, but the random nature of what it shows to me will be unique, and I retain an ability to curate and fiddle with the results it presents to me. (Certainly as it pertains to my own participation and authorship in the process)
@zalcarik but the AI can’t take a picture of the real world, all it can do is create derivative works based on the pictures in its dataset. It’s more analogous to you overlaying existing images and adjusting the result by tuning how strongly each appears in the result - and claiming that image is originally yours without crediting the creators of the existing images. And it wouldn’t include any actual change in the view from the park, as a new picture on site would

@ShadSterling I think that's a common misconception; these AI models almost never work in a way analogous to that. To carry on the analogy about finding a photo spot (where "I" am acting like the AI now), it would be as if I looked at the google image results to learn what the place looked like, then stepped over to a canvas and painted a picture of what I remembered.

A novel work is being created, but yes also one that is intrinsically derivative of the work of others.

@ShadSterling But that's pretty on par with what human artists need to do. Consider not the scenic photo spot, but instead something like a dragon--there's no real-world example an AI or human could draw from, they need to make reference to existing art.

I think that's all still mostly tangential to the issue of authorship and creative/artistic input. The AI and the camera are black boxes that take inputs and allow for creative choices in turning those inputs into outputs.

@zalcarik it’s more elaborate than my analogy, but that doesn’t make it not analogous; the best introview I’ve seen is https://youtu.be/1CIpzeNxIhU . There’s no experience, no context, no story, no mental model of growth or movement or weather, just numerical calculation with a large number of tuning parameters set by the prompt.
How AI Image Generators Work (Stable Diffusion / Dall-E) - Computerphile

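To make the "just numerical calculation with tuning parameters" point concrete, here is a heavily simplified toy sketch of the diffusion-style loop the video describes: start from pure noise and repeatedly nudge it toward whatever pattern training associated with the prompt. Everything here is illustrative, not a real model--`denoise_step` and the `learned_pattern` table stand in for a learned neural network and its weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for what training distilled from millions of images:
# here, just a fixed "ideal" pixel pattern associated with a prompt.
learned_pattern = {"mountain framed by branches": np.linspace(0.0, 1.0, 16)}

def denoise_step(image, prompt, strength=0.1):
    """One update: move the noisy image a little toward the pattern
    the (toy) model associates with the prompt."""
    target = learned_pattern[prompt]
    return image + strength * (target - image)

def generate(prompt, steps=50):
    image = rng.standard_normal(16)  # begin with pure noise
    for _ in range(steps):
        image = denoise_step(image, prompt)
    return image

out = generate("mountain framed by branches")
# After enough steps the output closely tracks the stored pattern:
# everything recognizable in it came from what "training" encoded.
print(out)
```

The randomness of the starting noise is why two runs with the same prompt give different outputs, but nothing in the loop draws on anything except the stored patterns and the prompt.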
@zalcarik If you were to paint a branch, you would draw on a lifetime of seeing them in trees and on the ground, picking them up, maybe building things with them, maybe whittling them, and so on. You have mental models of how they grow, how they bend, the difference between wet and dry, maybe differences between different plants, and so on. Far more than could be encoded in images alone, or be included in this kind of AI.
@zalcarik if you were to paint a dragon, you might not have the same experience, but you could have in mind a mental model of how a dragon physically moves, of its skeleton and muscles and mind, of the physics of flight, and beyond that of the context of the picture, the story in which the dragon lives, and who else lives there. I’ve never heard of any AI coming anywhere near that kind of creative process
@zalcarik All these AIs can work with is the training data. Anything recognizable in their output is a result of deriving it from the training data and nothing else. And you can see that in the parts of the images that don’t make sense. We could be creating these as tools for artists, to expand art, figuring out how to share credit (and payments) between the creators of the training images and the prompt writers, treating them like the collaborations they are, but that’s not what we’re doing
@zalcarik (“introview” came from indecision between “introduction” and “overview”, but I kindof like it)