We have spotted quite a few students using generative AI in their essays this summer and applied standard academic misconduct proceedings, though in most cases the work was so bad they would've failed anyway.

Today I learned of one whose use was sufficiently extensive that they will fail their degree.

I am wondering if this is *the first time a student has failed a whole degree for using AI*? I would love to hear about other cases. If you want to tell me in confidence, my Session ID is in my Bio.

@tomstoneham how did you detect the generative AI usage?

@andrei_chiffa
By reading it. Most student cheating is obvious to a specialist who knows what they have (and have not) been taught and has read millions of words of student attempts to write about it.

But there is a more fundamental point I need to write up in detail:

University teaching is basically a form of intensively supervised reinforcement learning on a small, carefully curated dataset, aimed at producing a specific capability.

It is obvious the AI didn't go to class *here*

@tomstoneham

So it's not so much generative AI detection as detecting that the student did not attend, or even familiarise themselves with, the class, correct?

@andrei_chiffa
It is easy to overlook how well trained on a large dataset academics are. We each read at least a million words of student work on our specialist area every year. Some of us have been doing that for decades.

Our pattern recognition for 'not produced by a student' is pretty good 😆