We have spotted quite a few students using generative AI in their essays this summer and initiated standard academic misconduct proceedings, though in most cases the work was so bad they would have failed anyway.

Today I learned of one whose use was sufficiently extensive that they will fail their degree.

I am wondering if this is *the first time a student has failed a whole degree for using AI*? Would love to hear about other cases. If you want to tell me in confidence, my Session ID is in my bio.

@tomstoneham How can you be so sure of your detection? Machine learning can also incorrectly characterize submissions. Where is the openness and understanding for new technologies? If rote knowledge is really so important, why not move to in-class essays and oral exams? It sure seems like the academic freak-out over new technologies is more of an indictment of inflexible educational policy than of students violating an ancient honor code.
@tomstoneham I honestly wonder why we don't see more real adaptations to change, instead of complaints and often inaccurate enforcement. Generative AI is here to stay; we really need better responses from academia.

@awaterma
You have made a lot of assumptions there!!

We don't give credit for rote knowledge. Our marking criteria only mention understanding of the material taught, argumentation, structure, writing and referencing.

The tells are (1) not drawing on the material taught but on other sources, (2) making up sources, (3) coherence sustained over thousands of words, and (4) a writing style at a level higher than the student produces in other work.

We always have an oral to check before imposing a fail mark.

@awaterma
And for what it is worth, I am far from 'freaking out' - we have discussed this at length and are happy with the idea of using AI-generated text as a basis, so long as it is then edited in such a way that it meets our academic standards.

This student failed because they cut and pasted it rather than using it as a source in an appropriate way.

@tomstoneham @awaterma

From the context of this thread, it ultimately sounds like the students are taking the output as-is, with no attempt to engage with the source material. A generated piece of writing should be penalized regardless of whether it comes from a writing service or an AI. The point of education is to take in knowledge so you can benefit from it, not to let machines or hired writers do everything.

@DukeCarge @tomstoneham maybe so, but how do you really know it's generated? And people will need to work with text generation in the future; it's going to be important. If rote learning is really so necessary, test kids in ways they can't "cheat."

@awaterma @DukeCarge

I am puzzled ... who said rote learning was of any value at all in any sane educational system?

As to how do we know? Well, we know within the standards of proof required by our processes for detecting academic misconduct.

You might ask: how do we 'really know' anyone convicted of a crime was guilty? Balance of probabilities? Beyond reasonable doubt? These are the relevant concepts to be applying, depending upon the legal or procedural context.

@tomstoneham @awaterma
I don't use rote learning in my job at all, now that you mention it. Academia taught me how to read critical texts on the theory behind what I apply as an engineer, and how to make sense of the fundamentals of the systems I use.

I couldn't tell you off the top of my head what effect adding new pieces to a system would have without looking up the fundamentals again.

@DukeCarge @tomstoneham @awaterma yes! Plus, there's no real value to rote learning anymore anyway. I tell my junior mentees not to bother intentionally memorizing any fact. If you use it often enough to be worth memorizing, your brain will do it anyway without trying. What matters is the how and why. Learn those and you can put together any idea.

That's why the discussion of the accuracy of "AI detectors" is silly. "Questioned student doesn't understand at the level of the essay" is very reliable.

@ATurnOfTheNut @DukeCarge @tomstoneham sure, but the trouble is with universities and high schools using inaccurate models to "find" essays created by AI. That's the inconvenient fact elided here. Even OpenAI's latest classifier for detecting "AI-generated content" only achieves a "success rate" of 26%, with 9% false positives. https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text
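To put those numbers in perspective, here's a rough Bayes sketch of what a "flagged" result actually tells you. The 26% detection rate and 9% false-positive rate come from the OpenAI post above; the prevalence figures (what fraction of essays are actually AI-written) are purely my assumption for illustration:

```python
# Rough Bayes calculation: if the detector flags an essay, how likely
# is it that the essay is actually AI-written?
# Detector stats from the OpenAI post linked above:
SENSITIVITY = 0.26  # P(flagged | AI-written), the 26% "success rate"
FALSE_POS = 0.09    # P(flagged | human-written), the 9% false positives

# Assumed prevalence of AI-written essays -- illustrative guesses only.
for prevalence in (0.05, 0.20, 0.50):
    p_flagged = SENSITIVITY * prevalence + FALSE_POS * (1 - prevalence)
    ppv = SENSITIVITY * prevalence / p_flagged  # P(AI-written | flagged)
    print(f"prevalence {prevalence:.0%}: P(AI-written | flagged) = {ppv:.0%}")

# prevalence 5%:  P(AI-written | flagged) ~ 13%
# prevalence 20%: P(AI-written | flagged) ~ 42%
# prevalence 50%: P(AI-written | flagged) ~ 74%
```

Even if half the class were using AI, a flag alone would be wrong about one time in four; at more realistic prevalence it's wrong most of the time.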

@awaterma @ATurnOfTheNut @tomstoneham
So you're agreeing that the manual methods by which students are caught not knowing the contents of their essays are effective.
No one in this thread has cited the use of these models but you.
@awaterma @ATurnOfTheNut @tomstoneham societal problems of academic dishonesty caused by new technology cannot be solved by tech.
Cheating always has existed and always will. It is a cultural issue that must be solved by educators, not by throwing more obscure and abstract technology stacks at the problem.
@DukeCarge @ATurnOfTheNut @tomstoneham definitely! Educators have a long list of existing tools to better check for this! I especially hope there's a greater focus on in-class essays and oral exams, if rote knowledge is what needs to be measured. It's the people who aren't interested in that type of hard work that scare me. https://www.washingtonpost.com/technology/2023/05/18/texas-professor-threatened-fail-class-chatgpt-cheating/
@awaterma @ATurnOfTheNut @tomstoneham why do you keep going back to rote learning? Everyone has already dismissed it.
You're speaking on a completely different subject at this point.
@DukeCarge @ATurnOfTheNut @tomstoneham Feel free to start your own thread; I’m just replying after enjoying the holiday.
@awaterma @ATurnOfTheNut @tomstoneham no, my dude. That's the subject you want to talk about. Your points have been dismissed by the three or four people in this thread. No one wants to discuss these detection models with you here. I'm going back to work. Cheers.