Oh no globalcomix, not you too

GLOBAL COMIX AI POLICY:
https://globalcomix.com/ai-policy
What do you people think?
I'm still having a think about what to do - but I know where I'm leaning atm. It's a shame though, because GlobalComix is a revenue source for me, so I'd cut myself off from even more of my previous income and I'll be worse off.
@JenJen Are they just allowing "AI-assisted works"? They're not training AI on yours, right?
I would stay if it's generating money for you.
@lustycomic Yeah, the business mind is like: I should stay because it does earn me (much needed) money, but the other part of me is like:
I don't want to support any kind of platform that has a CEO who loves AI, and literally allows AI to be used in the "creation process"
And... I'm being cynical here, but, I just don't trust that they'll resist feeding all the comics into the theft machine. They have the data, why wouldn't they use it?

@JenJen @lustycomic if they're so inclined they will (or have) *anyway*. You may as well be able to eat occasionally.
At this stage there's going to be nowhere left that isn't infected.
@JenJen That reads like a pretty responsible and pragmatic policy to me.
Forbidding purely AI content but accepting that AI tools are now ubiquitous and some people will use them seems to me to be sensible given where we currently are.
It fits well with my personal opinions: I abhor outright GenAI created content, but I don't object so strongly to a bit of content aware tool usage under direction by an artist. They're unlikely to uninvent those tools, so better to lay out how they can be used.
@gilester45 What about the source material the models are trained on? I have a huge problem with them being tools for remixing other people's work, without consent, attribution or payment.
The Globalcomix statement is pointedly silent on this aspect, too.
If they were talking about purely "mechanical" assistance, like an animator using an automated system for in-betweening, I'd find their statement entirely acceptable. That even seems to be how they're casting the use of these things.
@JenJen
@KatS @feff @JenJen It's a good question, isn't it?
Tools that steal our work to recreate it in our style are evil, the industrialised theft committed by AI companies beggars belief in this regard.
However a tool that has "learned" how it should work by watching our creative output... that doesn't sound so very different to how a human might learn from creators they admire.
Is that enough to say it's ok? Or, perhaps "less unacceptable" is all we can hope for.
> However a tool that has "learned" how it should work by watching our creative output
If and when such a tool exists, we can also discuss its rights as a sentient being.
However, what we have now are

> Tools that steal our work to recreate it in our style
They remix existing artifacts. They do not learn, and they do not create. They launder the plagiarism of other people's creations. This is why Sam Altman and everybody else driving these buses are working so hard to remove copyright protections from people who actually create.
To emphasise: people (can) learn from others, and adopt and remix each others' ideas, perspectives and techniques. These diffusion models remix existing works, and that is very much not the same kind of thing.
So no, that's not enough to say it's OK. It's not a question of how you frame it, but a question of what these things do and how they're wielded.
One exception: if you have a private, local model that is trained exclusively on your own work, have at it.
@gilester45
There's a nuance that is often lost in the loudest voices.
Firstly, "AI" nowadays is an umbrella term that can refer to many different technologies. Some are actually useful and handy tools (in art terms, I think of algorithms that can identify subjects and speed up background removal, or suggest some shading).
However, the "it learns like a human" argument for GenAI ignores that these models don't learn; they approximate and predict. They have no memory, and they don't understand context the way a human does (hence why AI hands and feet are notorious): humans develop a world model, while diffusion models and LLMs do not.
Finally, GenAI in its current iteration is bad because it's being used to replace human creators. That's how it's marketed and that's how most people are using it. GenAI is not creative and can't give us anything new; it can only give you a product normalised by everything it "knows", and the only reason its output may look new is the prior experience (or lack thereof) of the person viewing it.
> What do you people think?
That they say that they "[celebrate] human storytelling, art, and originality", and then go on to set out the myriad ways in which they welcome non-human content, all written in vague language.
@JenJen That seems, uhhmmm, confused. You can't post 'AI generated content', but you *can* use 'AI tools supporting creator workflows'.
So presumably if I have an AI tool which produces an image for me (such as this image I got some AI bot to do for me when #UKGov said they were banning posting pictures of people in small boats), that would be banned, but if I scribbled my signature on it, that would be fine?