We have spotted quite a few students using generative AI in their essays this summer and initiated standard academic misconduct proceedings, though in most cases the work was so bad it would have failed anyway.

Today I learned of one whose use was sufficiently extensive that they will fail their degree.

I am wondering if this is *the first time a student has failed a whole degree for using AI*? Would love to hear about other cases. If you want to tell me in confidence, my Session ID is in my Bio

@tomstoneham How can you be so sure of your detection? Machine learning can also incorrectly characterize submissions. Where is the openness and understanding for new technologies? If rote knowledge is really so important, why not move to in-class essays and oral exams? It sure seems like the academic freak-out over new technologies is more of an indictment of inflexible educational policy than of students violating an ancient honor code.

@awaterma
You have made a lot of assumptions there!!

We don't give credit for rote knowledge. Our marking criteria only mention understanding of the material taught, argumentation, structure, writing, and referencing.

The tells are (1) not drawing on the material taught but on other sources, (2) making up sources, (3) coherence sustained over thousands of words, and (4) a writing style at a level higher than the student produces in other work.

We always hold an oral exam to check before imposing a fail mark.