Fellow educators:

1) Don't do cop shit.
2) Don't pay for a service that does cop shit.
3) Don't trust claims of system performance without documentation.
4) DO educate your students about what ChatGPT is and why they wouldn't want to use it.

Screen caps from some spam I received just now:

@emilymbender “Reliable” is doing a lot of dissembling there.
@emilymbender Both the images are the same? Was there meant to be another part of the email?
@emilymbender @asociologist and I discussed a great potential project: an analysis of all the ridiculous guidance that universities are providing to faculty wrt ChatGPT
@alex @emilymbender I wonder which universities are using ChatGPT to write their ChatGPT guidance. Commit to the bit, y'know?
Generative Artificial Intelligence | Center for Teaching Innovation

@alex @emilymbender They do much better in re: not using BS GPT detectors:

"Detection tools claim to identify work as AI generated, but cannot provide evidence for that claim, and tests have revealed significant margins of error. This raises a substantial risk that students may be wrongly accused of improper use of generative AI." https://teaching.cornell.edu/generative-artificial-intelligence/ai-academic-integrity

@asociologist @alex @emilymbender I have experience in this area - I was kicked from a content writing website for using AI, with no evidence provided, at a time when I had only just heard of ChatGPT and certainly had never used it.
@asociologist @alex @emilymbender
Ach, I have seen supposedly expert people show a slide of (BS) potential uses for LLMs in some industry, and then gleefully reveal that it was written by an LLM. Apparently unaware that it was BS, just presenting it as though it was true. 🤦

@emilymbender 100% agreed. For better or worse, LLMs are the future.

Teachers must ensure students can own the work they submit. Presentations, pen-and-paper exams, and classroom participation need to reflect the same quality. In many ways, this is just like copying from peers or family.

@adamhotep @emilymbender

I'm probably gonna get reply-guyed for this, but no. Wrong. Instructors are not responsible for the work submitted. Unless they physically lose it. I did bad or plagiarized? That's on me. I'm the one submitting it. If I didn't understand the assignment, there's plenty of time to address that. I've had the syllabus since day one. The institutions of academia have problems. But do not categorically throw instructors under a bus driven by a computer.

@helplessduck @emilymbender You misunderstood me. I said the student "can own" not as a reference to originality but rather to defensibility, as in "can take ownership of" or "can defend". If the student is just regurgitating material from an LLM or from family, there's only so much a teacher can do to catch it unless the student cannot demonstrate an understanding of it.

@adamhotep @emilymbender

The sixth letter of the alphabet is suitable for anyone (student or otherwise) who can't stand up and defend their position. Full stop.
Authorial authority resides with the author. It's unreasonable to ask a hypothetical adjunct instructor with 1100 students to police that. This responsibility does not lie at the feet of any instructor. Students need to bring their A game, or get their GPA destroyed and go home.
Full stop.

@helplessduck and who supplies that F?

@adamhotep

Academia gives no more than I put into it. I supply that letter.

@helplessduck
You earn. The teacher supplies (assigns).
@emilymbender A monitoring system that throws an excess of false flags is worse than no system at all

@emilymbender to add on (you probably already know this; simply making others aware):

5) AI detection is a pseudoscience. LLMs imitate human writing, which makes their output by definition impossible to detect.

@tomodachi94 Dude, I know. See pinned toot.

@emilymbender I wasn't explaining it to you, I was adding on to your toot...

Sigh

@emilymbender I've edited my reply to make its intent more clear.

@emilymbender

"transformative results"
🚩🚩🚩

@emilymbender 100% a scheme to scrape students' work for training data. "Advanced algorithms" lmao

I'm sure they used a LLM to generate this email.

@emilymbender
"Are you tired of spending countless hours trying to discern if your students' work is genuinely their own?"
@josh @emilymbender “are you tired of doing your job?”
@emilymbender Haha. "Reliable AI detection" . . . did an AI write this?
@emilymbender Considering what I (and, I see, everyone else replying to this toot 😄) know about the "reliability" of AI detectors, I think making the claims in item #1 opens them up to a false advertising suit. I'd love to see someone bring that suit against them.
@emilymbender it's so sneaky that they ~imply~ the system never has false positives, without actually stating it. The entire third-party leech market on LLMs and detecting LLMs is just rancid all the way down.

@emilymbender agree 💯

AI writing is dumb as shit. If you have to run your student papers through an AI checker, you were never reading them anyway.

@tanysfoster @emilymbender I misread this for a moment as "All writing is dumb as shit," and as a writer, I felt seen.
@emilymbender Classroom idea: after students turn in their homework, have an oral exam to check their knowledge of the topic against what they wrote. Catch them red-handed not knowing anything.
@emilymbender There's something wild to me about the idea of educators using "AI" to prevent their students from using "AI" to do assignments. The logic of it has some kind of irony and double standard to it that I'm not sure I can articulate well. Like, surely if it's unethical for the student to use ChatGPT to do the essay, then it's only fair that it also be unethical for the educator to use a machine learning tool to attempt to detect the use of ChatGPT.
@PumpkinSkink2 @emilymbender I don't think this is quite fair, though I certainly understand the feeling. It's a little like saying "Students aren't allowed to take books into the exam, so why are educators allowed to refer to them while marking?" The problem is with the tools making false promises and failing dangerously, rather than with their use making for some kind of double standard in itself.
@emilymbender sounds like another version of ACAB
@emilymbender this spam mail reads like it was written by ChatGPT lmao.
@emilymbender Why is it that I feel like this email was generated using an LLM...?
@emilymbender i quite honestly hope these types of companies get sued for libel and slander by the people they falsely accuse, and their CEOs held personally responsible. This is just awful.

@emilymbender It seems kind of worrying to advertise such a tool as saving time for the lecturer.

If they're overworked, then they might not be inclined to think about what the false positive rate of the tool means.

@emilymbender all i read in this email is 💨

also stackoverflow is better than chatgpt. being able to talk to humans if you have to solve a problem is better in my opinion.

@emilymbender Ask them if they will cover all legal liability for false positives. 🤣
@emilymbender Classic setup there.
1: create a problem
2: create a "solution" to the problem that ironically uses the same methods as the problem
3: charge for use of the "solution"
Hopefully people will collectively see through this garbage before they can reach the "profit" step!
@emilymbender we welcome our new robot protection racket overlords.
@emilymbender Didn't OpenAI already shut down their AI detector because it just did not work?
This sounds like a scam in a scam's coat.
@emilymbender almost all ChatGPT checkers have been found to wrongfully tag non-AI works as AI (the Bible, the Declaration of Independence, etc.)

not only that, you can ask ChatGPT to write differently, and that reliably throws off ChatGPT detectors

@emilymbender

“Foster authentic learning” hahahahah

@emilymbender "Reliable" detection

HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

The GPT Checker algorithm:

"Is this text unexpected?"
"No."
"OK, then I probably made it up."

THIS IS LITERALLY THE ALGORITHM.
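The joke above is not far from how these tools are believed to work: score the text's perplexity under a language model and flag "unsurprising" (low-perplexity) text as machine-written. A minimal sketch of that thresholding logic, with a fabricated probability table standing in for a real language model (all names, probabilities, and the threshold here are illustrative assumptions, not any vendor's actual code):

```python
import math

# Fabricated per-token probabilities for illustration only; a real
# detector would query an actual language model for these.
FAKE_TOKEN_PROBS = {
    "the": 0.9, "cat": 0.5, "sat": 0.6, "on": 0.8, "mat": 0.4,
    "zygote": 0.01, "effervesces": 0.005,
}

def perplexity(tokens):
    """Geometric-mean inverse probability: low means 'unsurprising' text."""
    log_sum = sum(math.log(FAKE_TOKEN_PROBS.get(t, 0.001)) for t in tokens)
    return math.exp(-log_sum / len(tokens))

def looks_ai_generated(tokens, threshold=5.0):
    # The thread's joke made explicit: unsurprising (low-perplexity) text
    # gets flagged as machine-written. Plain, competent human prose is
    # also unsurprising -- hence the false accusations.
    return perplexity(tokens) < threshold
```

Nothing in this scheme inspects provenance; it only measures how predictable the text is, which is also why asking the model to "write differently" defeats detection.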
Lisa DeBruine 🏳️‍🌈 (@[email protected])

@emilynordmann gave a fascinating session today on the use of chatGPT in education. It's a tool that exists and students are going to use it, so how do we teach them to use it well and responsibly? Materials, links, and recording (posted soon): https://sway.office.com/p3A96vxXue9qqeFF

@emilymbender @baldur

Alt text: Chris Tucker (actor) with a backwards baseball cap on his head - looks incredulous, despondently lowers his head into the palm of his hand, covering his face. Looks back up and shakes his head.

@emilymbender

You are confused. I hear echoes of when I was told as a kid that calculators were cheating, when they should have been teaching me how to build my own and how to use them well.

You should teach them how language models work and how to put them into their workflow.

The important part is the quality of the output; if you don't know what you are doing, your output will still suck, even with a language model.

Language models are just Photoshop for language. That has implications, but if you avoid interfacing with them, you will be outclassed.

@BlueBee LOL Do you have any idea who you're talking to?

@emilymbender

"Do you know who I am?"

"Do better!"

Arguments people make when they have no legs to stand on.

@[email protected] @[email protected] Your mastodon bio says "I'll strive for low volume, high quality posts." Yet you chose to lead your response here with "You are confused"? Even in the most generous read, this is rude.

Do better.

@abucci @emilymbender

Correct that schools shouldn't waste money trying to figure out if students are cheating with ChatGPT. Incorrect when you say they shouldn't use it, period. Language evolves; this is actively useful.

The number of people I have talked to about ChatGPT who are certain they know what it is but have not actually used it is... nuts. A month of using it and one will find uses, and if they don't, they have no imagination. (Language models in general, but currently ChatGPT is the best that I know of.)

With caveats about capture of language models and the dangers of that. Given that so many people love Apple, and don't smack around Facebook or Google... I have little faith it won't be captured again.