I wrote a post about something I'm trying in my teaching - embedding visible AI collaboration guidelines in assessments instead of hiding "Trojan horse" prompts to catch students out.
My idea is that the moment a student reaches for AI is a teaching moment, not a policing moment. So the guidelines are designed to redirect rather than trap.
No idea if it'll work yet. Worth sharing though - I find so much of teaching is tweaking, tinkering and experimenting at quite a tactical level.

From catching out to helping out: embedding transparent AI collaboration guidelines in assessments
I want to share something I'm trying in my teaching. I'm not sure it'll work but I think the idea is worth putting out there, partly because the alternative approaches I keep seeing online un…
