| Website | https://users.dcc.uchile.cl/~jsimmond/ |
| Location | Chile |
| Pronouns | she/her |
I see people thought I was referring to how Elon has run Twitter, but I'm referring to Tesla. Tesla factories are a hotbed of racist abuse, and the company has low integrity around basic things like driving-range claims.
Buying a Tesla today implies one is OK with the company’s misbehavior.
https://www.cnn.com/2023/09/28/business/tesla-eeoc-lawuit/index.html
Nobody should be using GPT detectors for anything important.
From a recent study that found GPT detectors misclassify writing by non-native English speakers as AI-generated 48-76% of the time (!!!), compared to 0-12% for native speakers.
https://www.aiweirdness.com/dont-use-ai-detectors-for-anything-important/
I've noted before that because AI detectors produce false positives, it's unethical to use them to detect cheating. Now there's a new study showing it's even worse: not only do AI detectors falsely flag human-written text as AI-written, the way in which they do it is biased.
ChatGPT detection and algorithmic bias:
This afternoon James Zou directed me to a recent pilot study from his group in which they looked at the performance of seven different GPT-detectors that are sometimes used to flag cheating in educational settings.
They found that these detectors commonly misclassify text from non-native English speakers as being written by an AI. A primary driver appears to be the lower perplexity (the exponential of the model's average per-token cross-entropy loss) of such text.
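For intuition, here's a minimal sketch of what a perplexity-based check might look like, using GPT-2 via the Hugging Face transformers library. The model choice and threshold logic are my assumptions for illustration; the commercial detectors in the study don't publish their internals.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Illustrative scoring model; actual detectors may use
# different (undisclosed) models and calibration.
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the model: exp(mean per-token loss)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy loss over the predicted next tokens.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

# A naive detector flags text whose perplexity falls below some
# threshold (the threshold here is a made-up placeholder). Simpler,
# more predictable phrasing scores lower, which is why constrained
# non-native writing can look "AI-like" to this kind of check.
def looks_ai_generated(text: str, threshold: float = 50.0) -> bool:
    return perplexity(text) < threshold
```

The failure mode follows directly from the design: anything that lowers perplexity, including limited vocabulary or conventional sentence structure, pushes human writing toward the "AI" side of the threshold.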
The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse. The published version of this study can be accessed at: www.cell.com/patterns/fulltext/S2666-3899(23)00130-7