ChatGPT detection and algorithmic bias:

This afternoon James Zou directed me to a recent pilot study from his group in which they looked at the performance of seven different GPT detectors that are sometimes used to flag cheating in educational settings.

They found that these detectors commonly misclassify text from non-native English speakers as being written by an AI. A primary driver appears to be the lower perplexity (the exponential of the model's average per-token loss) of such text.
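To make the perplexity connection concrete: a language model assigns each token a log-probability, and perplexity is the exponential of the average negative log-likelihood per token. Text built from common, predictable phrasing (as constrained linguistic expression often is) gets higher per-token probabilities and therefore lower perplexity, which is exactly the signal many detectors treat as "AI-like". A minimal sketch, with made-up log-probabilities standing in for a real model's output:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the average negative log-likelihood per token."""
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

# Hypothetical per-token log-probs (not from a real model):
# predictable, formulaic phrasing scores less negative log-probs...
predictable = [-1.2, -0.8, -1.0, -0.9]
# ...while varied, idiosyncratic phrasing surprises the model more.
surprising = [-3.5, -4.1, -2.9, -3.8]

print(perplexity(predictable))  # lower perplexity -> flagged as "AI-like"
print(perplexity(surprising))   # higher perplexity -> read as "human"
```

The numbers here are illustrative only; the point is the direction of the effect, not the magnitudes.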

https://arxiv.org/abs/2304.02819

GPT detectors are biased against non-native English writers

The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse. The published version of this study can be accessed at: www.cell.com/patterns/fulltext/S2666-3899(23)00130-7


Ironically, these false positives are readily avoided by asking ChatGPT to rewrite the non-native English speaker's text to increase linguistic complexity.

In other words, the way for these speakers to avoid being accused of cheating is to actually cheat.

The take-home for higher ed is obvious and stark. Many (all?) current ChatGPT detectors have not been adequately assessed for issues of algorithmic bias and therefore should not be used to accuse students of misconduct in their written work.

@ct_bergstrom This is from the maths ecosystem, but the same conclusion: detectors are cr*p, will become more cr*p, LLMs will become much more embedded (e.g. in Word, etc.), and they disadvantage non-native English speakers. One suggested way out is to _embed_ LLMs into teaching rather than try to ban them. (Much like calculators.)
https://cesaregardito.substack.com/

@ct_bergstrom even without the biases, aren't many of these algorithms barely better than a coin toss?
@ct_bergstrom Thanks for sharing! I suspected this was true, great to see an article about it I can forward to my deans and other profs!
@ct_bergstrom The ENAI group just published the pre-print of their study of supposed detectors of AI-generated text with similar results: they don't work as advertised:
https://arxiv.org/abs/2306.15666
Testing of Detection Tools for AI-Generated Text

Recent advances in generative pre-trained transformer large language models have emphasised the potential risks of unfair use of artificial intelligence (AI) generated content in an academic environment and intensified efforts in searching for solutions to detect such content. The paper examines the general functionality of detection tools for artificial intelligence generated text and evaluates them based on accuracy and error type analysis. Specifically, the study seeks to answer research questions about whether existing detection tools can reliably differentiate between human-written text and ChatGPT-generated text, and whether machine translation and content obfuscation techniques affect the detection of AI-generated text. The research covers 12 publicly available tools and two commercial systems (Turnitin and PlagiarismCheck) that are widely used in the academic setting. The researchers conclude that the available detection tools are neither accurate nor reliable and have a main bias towards classifying the output as human-written rather than detecting AI-generated text. Furthermore, content obfuscation techniques significantly worsen the performance of tools. The study makes several significant contributions. First, it summarises up-to-date similar scientific and non-scientific efforts in the field. Second, it presents the result of one of the most comprehensive tests conducted so far, based on a rigorous research methodology, an original document set, and a broad coverage of tools. Third, it discusses the implications and drawbacks of using detection tools for AI-generated text in academic settings.

@WiseWoman This is very useful -- thank you!
@ct_bergstrom I ran an experiment writing a story with several "AI" programs, and found that their ability to extrapolate from given details far surpasses their ability to transition to new concepts or topics. Example: I started a character walking through the woods, and the program continued the walk for 20 lines without ever varying. We would have walked forever if I hadn't presented it with new setting and action. Looking at transitions between topics might be a better basis for detection, and might sidestep that bias.
@ct_bergstrom GPT is trained to reproduce frequent sentences, and non-native speakers tend to do the same, so no wonder. Also, non-native speakers use automatic translation a lot and eventually just get used to its "dialect". I suspect the training dataset is heavily contaminated by auto-translated texts. It was really frustrating to find that GPT simply fails to improve some sentences that look too Google-translateish even to me, and my English is rather poor. If you ask it to rephrase, it gives more and more tortured variants that are even worse than the original sentence.
@ct_bergstrom detecting AI using AI is the stupidest idea ever
@ct_bergstrom This would limit, but not necessarily exclude the tool's use.
@ct_bergstrom does that relate back to the low paid Nigerian moderators?