Last term, I offered a final assignment option: use #ChatGPT to write the final essay, then critique the results. It was great, and I'll be doing it again. Everyone in #academia should try something like this with their class if they can. Short 🧵 on what we found.
First, the accuracy problem was even worse than expected, especially concerning sources. Unsurprisingly, when students asked it to supply scholarly references, it often (7 of 9 times) included fakes. What WAS surprising was how plausible many of these fakes were. 2/
Real citations were mixed in with fake ones, and the fakes typically had plausible titles attributed to real, relevant journals, making them impossible to detect without some digging. Sometimes it even attached the names of real scholars to fake articles. 3/
In one case, it listed a prominent historian of medicine who works on drugs as the author of "Big Pharma and the Rise of Gangster Capitalism." Seemed legit, I thought, and I almost skipped over it, though the title struck me as surprisingly gutsy for them. Hmm... Alas, no such article actually exists. 4/
A more heartening surprise: most of the students who chose this option thoughtfully criticized not just the obvious accuracy problems but also the relatively superficial, simplistic arguments that ChatGPT made. 5/
Ofc, many of the flaws they pointed to in what #AI produced are typical problems in student writing too. But the exercise seemed to push them to think more deeply about what makes a good argument and good writing, and why those are valuable to humans. One hopes. /fin
@jfballenger "AI won't replace people, but people who know how to use AI will" is something I'm hearing more of these days. Good for you for helping students learn how to use AI while still recognizing its limitations. I think ChatGPT can be an amazing writer and time saver, but as a researcher it's horrible and a time waster.