We have spotted quite a few students using generative AI in their essays this summer and initiated standard academic misconduct proceedings, though in most cases the work was so bad they would have failed anyway.

Today I learned of one whose use was sufficiently extensive that they will fail their degree.

I am wondering if this is *the first time a student has failed a whole degree for using AI*? Would love to hear about other cases. If you want to tell me in confidence, my Session ID is in my Bio

@tomstoneham
For the student who failed their degree, was this an associates or some sort of 1-2yr certification? Doesn’t seem like such tools have been broadly available long enough to fabricate a 4yr degree. Or was the use so egregious they were booted from the program? Or maybe they were booted from a masters program, that would fit the timeline.
@josh
In the UK students have to pass each year of their degree before progressing to the next or graduating. There is a maximum number of fail marks they can carry in any year. This student hit that number.
@tomstoneham
Ouch. So quite pervasive with their cheating, eh? It almost seems like it would be worth offering them some redemption if they would submit to an interview regarding why they felt they could get away with it. Years ago I had a student who in desperation kept escalating the amount of plagiarism from web sources in their papers until it was unavoidably noticeable. Seems like there might be a shared mindset.

@josh
They were interviewed and gave a written response to the evidence presented. (We do take care in these matters!)

Denied it in general but couldn't say anything about specific points of evidence or explain the material in the essay.

@tomstoneham
Oh, I'm sorry if it seemed I was implying the process was careless. That was not my intent at all. I just meant to gather information on misuse of an emergent tool for preventative purposes. It seems it would be hard to argue they were ignorant of the impropriety, but the rationale that carried them through the violation might be illuminating.

@josh @tomstoneham Not exactly on topic, but this might interest you: https://social-epistemology.com/2023/03/29/24-philosophy-professors-react-to-chatgpts-arrival-part-i-ahmed-bouzid/

A two-part interview of two dozen (mostly American?) philosophy professors about ChatGPT's arrival. Some serious caveats and considerations, but also a hint of perhaps reckless techno-optimism. Outsourcing invaluable parts of deep learning "because we can" worries me in particular.
