We have spotted quite a few students using generative AI in their essays this summer and applied standard academic misconduct proceedings, though in most cases the work was so bad they would've failed anyway.

Today I learned of one whose use was sufficiently extensive that they will fail their degree.

I am wondering if this is *the first time a student has failed a whole degree for using AI*? I'd love to hear about other cases. If you want to tell me in confidence, my Session ID is in my Bio.

@tomstoneham How are you detecting the LLM use? I've seen reports that the detection tools are not entirely reliable (maybe a 30% false positive rate?). Of course, bad writing is still bad writing, as you said.

@mikeg
By reading what they produced!

When you read >100,000 words of student-produced work in your specialist area every year, you can spot something fishy.

We also interview students before making a final decision, and when they cannot even answer the most basic questions about what they have written ...

It is no different from detecting any other form of plagiarism.

@mikeg @tomstoneham The detectors are terrible! But much like other plagiarism cases, there is often a disconnect between paragraphs, an abrupt change of tone, or missed references.

@jnyrose @mikeg
Experienced academics are much more reliable than tech solutions.

We are highly trained pattern detectors for 'not written by a student' 😆

@tomstoneham @mikeg Right?! And in the end, you can always ask them to explain their thinking. I imagine that would trip up those relying on AI to do their thinking for them.