We have spotted quite a few students using generative AI in their essays this summer and applied standard academic misconduct proceedings, though in most cases the work was so bad they would've failed anyway.

Today I learned of one whose use was sufficiently extensive that they will fail their degree.

I am wondering if this is *the first time a student has failed a whole degree for using AI*? Would love to hear about other cases. If you want to tell me in confidence, my Session ID is in my Bio

@tomstoneham how did you detect the generative AI usage?

@andrei_chiffa
By reading it. Most student cheating is obvious to a specialist who knows what they have (and have not) been taught and has read millions of words of student attempts to write about it.

But there is a more fundamental point I need to write up in detail:

University teaching is basically a form of intensively supervised reinforcement learning on a carefully curated, small data set aiming to produce a specific capability.

It is obvious the AI didn't go to class *here*

@tomstoneham @andrei_chiffa
Huh, that's interesting ...
When I studied (political science), of course we had courses with preselected texts etc ... but in our work we were expected to go further than that. So in the end, what we talked and read about in class was only a tiny fraction of the work I would present.

It's interesting that none of you talks about the increasing workload students have to fulfil. Especially with inflation, most students I know work two jobs or more.
How does this affect the decision to use tools like ChatGPT?

@generic @andrei_chiffa
Your two points are related. We don't penalise going *beyond* the set texts but we do require that the essay demonstrates understanding of what was actually taught.

We don't *require* going beyond the set texts because that rewards those who don't have jobs or caring responsibilities.