So, in addition to being biased against neurodivergent writers, "GPT detectors are biased against non-native English writers" https://arxiv.org/abs/2304.02819v2.

Cool. Cool cool cool cool. Tight tight tight. Cool. 😑​

GPT detectors are biased against non-native English writers

The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse. The published version of this study can be accessed at: www.cell.com/patterns/fulltext/S2666-3899(23)00130-7

How many times, in how many ways, do I have to say "trying to come up with dispositive definitions or tests for humanness which include everything you want to include and exclude everything you want to exclude is not only so difficult as to be functionally impossible, but also fundamentally supremacist"?

Belief in the dispositive efficacy of Turing-style tests is a category error. "Proving consciousness" or "humanness" is not what Turing intended his test for, and the fact that they've been consistently misunderstood that way does real harm to real people alive today.

Have a good one

You can essentially watermark "A.I."-generated text via steganographic rules about how the system should choose words and form sentences. It would make for some odd constraints and sentence structures, but it could be done, and in such a way that undoing it would require either a) learning enough about the topic in question to adequately rewrite the text or b) making the undoing itself obvious.
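A minimal sketch of what that kind of word-choice watermark could look like, in Python. Everything here is a hypothetical for illustration: the key, the function names, and the hash-based "green list" partition are assumptions, not any deployed system's actual scheme. The idea is that a keyed rule biases which synonym the generator picks, and a detector holding the same key measures how often that rule was followed.

```python
import hashlib

KEY = b"demo-secret"  # hypothetical key shared by generator and detector


def is_green(prev_word: str, candidate: str) -> bool:
    """A keyed hash decides whether `candidate` is on the 'green list'
    given the previous word. Roughly half of all words come out green."""
    digest = hashlib.sha256(
        KEY + prev_word.lower().encode() + candidate.lower().encode()
    ).digest()
    return digest[0] % 2 == 0


def choose_word(prev_word: str, candidates: list[str]) -> str:
    """At generation time, prefer a green synonym when one exists
    (this is the 'odd constraint' on word choice)."""
    for c in candidates:
        if is_green(prev_word, c):
            return c
    return candidates[0]  # no green option: fall back to the first choice


def green_fraction(text: str) -> float:
    """At detection time, count how often consecutive word pairs satisfy
    the keyed rule. Unwatermarked text should score near 0.5;
    watermarked text scores much higher."""
    words = text.split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)
```

The tradeoff is exactly the one described above: pushing green_fraction back toward 0.5 means substantially rewording the text, and clumsy reworking (synonym-swapping without understanding) tends to read as obviously mangled.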

But even then, that doesn't get at the underlying problems of motivations and values which drive people to either a) cheat/plagiarize, b) set up such antagonistic pedagogical frames that they need a dispositive "gotcha" ready at all times, or c) fail to recognize (1) the variously socially constructed disciplinary and normative linguistic requirements which might cause someone to write in a particular way and (2) the harm done by the rigid enforcement of those same norms without particular care to the needs and circumstances of the individual in front of them.

And THAT is the point, here.

Once again: you can't technofix your way out of sociocultural problems.

I'm getting offline for a while.

@Wolven Preach. The point of the Turing Test, as I see it, was to raise questions about the utility of separating “human” from “non-human”, and to challenge our ideas of empathy. Do Androids Dream of Electric Sheep? was the best take on the test I’ve seen.

Using it prescriptively to ensure people are sufficiently human is some next-level bullshit.

@Wolven can you recommend further reading on this topic?
@Wolven The thing with Turing tests, which nowadays are mostly "prove you are not a bot", is that they can go both ways. Their bots want to devour our content to monetize it.
Ʈõ đó τᏂᾆƭ, ʈɧἑɏ ӎűƨʈ bέ ᾅblἑ ʈõ ŕĕãď ɪʈ.