Oh no globalcomix, not you too

GLOBAL COMIX AI POLICY:
https://globalcomix.com/ai-policy
What do you people think?
I'm still having a think about what to do - but I know where I'm leaning atm. It's a shame though, because GlobalComix is a revenue source for me, so I'd be cutting myself off from even more of my previous income and I'd be worse off.
@JenJen That reads like a pretty responsible and pragmatic policy to me.
Forbidding purely AI content but accepting that AI tools are now ubiquitous and some people will use them seems to me to be sensible given where we currently are.
It fits well with my personal opinions: I abhor outright GenAI-created content, but I don't object so strongly to a bit of content-aware tool usage under the direction of an artist. Those tools are unlikely to be uninvented, so better to lay out how they can be used.
@gilester45 What about the source material the models are trained on? I have a huge problem with them being tools for remixing other people's work, without consent, attribution or payment.
The Globalcomix statement is pointedly silent on this aspect, too.
If they were talking about purely "mechanical" assistance, like an animator using an automated system for in-betweening, I'd find their statement entirely acceptable. That even seems to be how they're casting the use of these things.
@KatS @feff @JenJen It's a good question isn't it.
Tools that steal our work to recreate it in our style are evil; the industrialised theft committed by AI companies in this regard beggars belief.
However, a tool that has "learned" how it should work by watching our creative output... that doesn't sound so very different from how a human might learn from creators they admire.
Is that enough to say it's ok? Or, perhaps "less unacceptable" is all we can hope for.
@gilester45
There's a nuance that is often lost amid the loudest voices.
Firstly, "AI" nowadays is such an umbrella term that it can refer to many different types of technology. Some are genuinely useful and handy tools (in art terms, I think of algorithms that can identify subjects and speed up background removal, or suggest some shading).
However, the "it learns like a human" argument for GenAI ignores that these models don't learn; they approximate and predict. They have no memory, nor do they understand context the way a human would (hence why AI hands and feet are notorious), because humans develop a world model and diffusion models and LLMs do not.
Finally, GenAI is bad in its current iteration because it's being used to replace human creators. That's how it's marketed and that's how most people are using it. GenAI is not creative; it can't give us anything new, only a product normalised by everything it "knows", and the only reason its output may look new is the prior experience (or lack thereof) of the person viewing it.