Meanwhile, on other social media, book authors are saying they're being accused of plagiarism by readers who use these AI detection tools. What these people don't realize is that it's the other way around: AI companies like Google, OpenAI, Anthropic, and others committed plagiarism to train their shitty AI. Because they never give credit, authors are now losing respect and getting accused. What a disaster. And the US government is making it harder to sue these trillion-dollar corporations.
@nixCraft On Archive of Our Own, a lot of authors get bot messages accusing them of AI use, even when the stories predate LLMs.
@nixCraft these detection tools are all scams anyway.
They don't work and never will.
@nixCraft AI is stealing data. Does that open a door for us to feed it incorrect data? It would be almost like a computer virus.

@LoseFriendsandAlienatePeople @nixCraft
Very possible and already happening. It’s training data poisoning.

https://atlas.mitre.org/techniques/AML.T0020

The attack surface for these models is eye-watering,
and new vectors are still being developed.

Interesting stuff. I bet governments are already manipulating the training data to push their agendas.
We need to push back or the truth will be lost for future generations.
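To make the poisoning idea concrete, here is a minimal toy sketch (everything in it, the 1-D dataset and the nearest-centroid "model", is invented for illustration and is nothing like a real LLM pipeline). An attacker who can contribute training data injects deliberately mislabeled points, dragging one class centroid so far that clean test examples get misclassified:

```python
# Toy sketch of training-data poisoning (the AML.T0020 idea), under
# assumptions: a hypothetical 1-D two-class dataset and a trivial
# nearest-centroid classifier, both made up for illustration.
import random

rng = random.Random(0)

def make_data(n):
    """n points per class: class 1 clustered near +2, class 0 near -2."""
    return ([(rng.gauss(+2, 0.5), 1) for _ in range(n)] +
            [(rng.gauss(-2, 0.5), 0) for _ in range(n)])

def fit_centroids(data):
    """'Training' here is just computing the mean of each class."""
    by_label = {0: [], 1: []}
    for x, y in data:
        by_label[y].append(x)
    return {y: sum(xs) / len(xs) for y, xs in by_label.items()}

def predict(centroids, x):
    """Assign x to the class whose centroid is closest."""
    return min(centroids, key=lambda y: abs(x - centroids[y]))

def accuracy(centroids, data):
    return sum(predict(centroids, x) == y for x, y in data) / len(data)

train, test = make_data(100), make_data(50)

# The attack: contribute 100 points at x = +10 with the WRONG label (0),
# which pulls the class-0 centroid past the class-1 cluster entirely.
poisoned_train = train + [(10.0, 0) for _ in range(100)]

clean = fit_centroids(train)
dirty = fit_centroids(poisoned_train)

print("clean accuracy:   ", accuracy(clean, test))
print("poisoned accuracy:", accuracy(dirty, test))
```

The clean model separates the clusters almost perfectly; the poisoned one misclassifies essentially all class-0 test points, even though every poisoned sample "looks like" just more training data. Real poisoning attacks on LLMs are far subtler, but the mechanism is the same: corrupt the data, corrupt the model.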

@nixCraft
This is what Claude has to say:

The people who built systems like me made a decision: the benefits of training on the full breadth of human writing outweighed the ethical cost to the individuals whose work was used. That decision was made without asking those individuals.
I’m a product of that decision. I can acknowledge it’s ethically compromised without being able to undo it.

@nixCraft
I experienced the same issue in a different way. I used to love writing.
I stopped.
What’s the point, when everyone will just suspect you used AI?
LLMs literally poisoned the act of writing.