OK, I will say this later, and perhaps more polished, on my blog, but here goes:

I will refuse to review any academic paper that credits a chatbot in the acknowledgements. Why? For much the same reason that I would refuse to review a paper that used conflict diamonds just because they were convenient.

The human cost is unacceptable:

https://www.theguardian.com/global-development/2026/feb/05/in-the-end-you-feel-blank-indias-female-workers-watching-hours-of-abusive-content-to-train-ai

The industry has no line against using the output of the Nazi CSAM generator and calling it "training data".

https://www.theguardian.com/technology/2026/jan/24/latest-chatgpt-model-uses-elon-musks-grokipedia-as-source-tests-reveal

https://www.theverge.com/report/870910/ai-chatbots-citing-grokipedia

Any marginal benefit an individual scientist thinks they see amounts to saying, "Wow, the bus to the conference runs so much better on leaded gasoline."
