Last term, I gave students a final assignment option: use #ChatGPT to write their final essay, then critique the results. It was great, and I'll be doing it again. Everyone in #academia should do something like this with their class if they can. Short 🧵 on what we found.
First, the accuracy problem was even worse than expected, especially concerning sources. Unsurprisingly, when students asked it to supply scholarly references, it often (7 of 9 times) included fakes. What WAS surprising was how plausible many of these fakes were. 2/
Real citations were included with fake ones, and fake ones typically included plausible titles published in real, relevant journals - making it impossible to detect they were fakes without some digging. Sometimes it even attached the names of real scholars to fake articles. 3/
In one case, it listed a prominent historian of medicine who works on drugs as the author of "Big Pharma and the Rise of Gangster Capitalism." Seemed legit, I thought (if surprisingly gutsy for them), and almost skipped over it. Alas, no such article actually exists. 4/
A more heartening surprise was that most of the students who did this thoughtfully criticized not just the obvious concern about accuracy, but the relatively superficial and simplistic arguments that ChatGPT made. 5/
Ofc, many of the things they were pointing to as flaws in what #AI produced are typical problems in student writing. But it seemed like this exercise encouraged them to think more deeply about good argument and good writing, and why these are valuable to humans. One hopes. /fin
@jfballenger "AI won't replace people, but people who know how to use AI will" is something I'm hearing more of these days. Good for you to help students learn how to use AI yet still recognize its limitations. I think ChatGPT can be an amazing writer and time saver, but as a researcher it's horrible and a time waster.
@jfballenger thanks for sharing. A very good idea to use it in this context (rather than merely seeing it as a threat to academic integrity) and very encouraging that your students seem to benefit from the exercise. I will be trying this in one of my classes.
@CaryaMaharja Yep - exactly! Hope you have a good experience with it.
@jfballenger did anyone cheat and instead have ChatGPT critique an essay they actually wrote?
@mwyman Not that I know of, but that too might be an interesting exercise.

@jfballenger I'm puzzled that the people who invented ChatGPT were unable to write a program that required the AI to use only factual information instead of making up nonsense.

And that, once the company's reputation was hit hard by its AI regularly producing total BS, they still haven't fixed it, and it's still making waves.

@tolortslubor @jfballenger Well, the technology inherently has nothing to do with factual information. The core of it is not a chatbot or a library of knowledge, and it has no well-defined or measurable way to "know" anything. It's a word-prediction algorithm, like what's at the top of mobile keyboards. It just happens to be fed a lot of data, trained with supercomputers, and uses some clever tricks to be really good at this.

They probably could try to make something that writes factual information, but ultimately that's not what a "large language model" does. The goal is to produce coherent language, not accurate information. And even the coherent language isn't perfect (though it's really good).

The fact that it has genuinely accurate "knowledge" is kind of just a coincidence. Really, everything it says is made up, but factual statements really are more common in the wild than nonsense statements.
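To make the "word prediction, like what's at the top of mobile keyboards" point concrete, here is a deliberately tiny sketch of next-word prediction. It is a toy bigram model over a made-up corpus, purely illustrative; real LLMs use neural networks over vast data, but the task is the same: given the words so far, guess a likely next word.

```python
# Toy next-word predictor (bigram counts). The corpus and the
# predict() helper are illustrative assumptions, not ChatGPT's design.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Count which word follows which word in the corpus.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" - the most frequent follower of "the"
print(predict("sat"))  # "on"  - "sat" is always followed by "on" here
```

Note that the model has no notion of truth: it just emits whatever tends to follow. Scale that up enormously and you get fluent text that is factual only insofar as the training data tends to be.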

@jfballenger @marick: Nice!

I will be interested to see where this goes. Right now it feels like the early days of Wikipedia, when people argued over using it for actual information.

Given the premise behind Wikipedia and how these LLMs get their information…??