Generative AI, plagiarism, and “cheating”

Back in January, I wrote a post called Beyond Cheating, reflecting on the ChatGPT bans that were rolling out across various Australian states and the "cheating" narrative that had accompanied the chatbot since its release. In that earlier post, I argued that banning and blocking generative AI would only contribute to the digital divide - students who have greater access to digital technologies would inevitably be able to access and use GAI, putting those who rely on in-school technology […]

https://leonfurze.com/2023/09/20/generative-ai-plagiarism-and-cheating/

How do teachers use formative assessment, and how do experts and novices differ?

Groß et al. (2025) show that:
🔹 Experts name clearer learning goals
🔹 Consistently link diagnoses to their teaching
🔹 Structure learning pathways around existing resources

The publication is available at https://www.leibniz-ipn.de/de/forschen/publikationen/how-do-expert-and-novice-teachers-monitor-and-enhance-student-understanding?show_navhelper=1

#Forschung #Bildungsforschung #FormativeAssessment #Matheunterricht #Lehrerbildung #Unterrichtsqualität #Lernen

Generative AI in Physics?

As a new academic year approaches, we are thinking about updating our rules for the use of Generative AI by physics students. The use of GenAI for writing essays, etc., has been a preoccupation for many academic teachers. Of course in Physics we ask our students to write reports and dissertations, but my interest is in what we should do about the more mathematical and/or computational types of work. A few years ago I looked at how well ChatGPT could do our coursework assignments, especially Computational Physics, and it was hopeless. Now it’s much better, though still by no means flawless, and there are also many other variants on the table.

The basic issue here relates to something that I have mentioned many times on this blog, which is the fact that modern universities place too much emphasis on assessment and not enough on genuine learning. Students may use GenAI to pass assessments, but if they do so they don’t learn as much as they would had they done the working out for themselves. In the jargon, the assessments are meant to be formative rather than purely summative.

There is a school of thought that formative assessments should gain no credit at all in the era of GenAI, since “cheating” is likely to be widespread, and that the only secure method of assessment is the invigilated written examination. Students will be up in arms if we cancel all the continuous assessment (CA), but a system based on 100% written examinations is one with which those of us of a certain age are very familiar.

Currently, most of our modules in theoretical physics in Maynooth involve 20% coursework and 80% unseen written examination. That is enough credit to ensure most students actually do the assignments, but the real purpose is that the students learn how to solve the sort of problems that might come up in the examination. A student who gets ChatGPT to do their coursework might get the 20%, but they won’t know enough to pass the examination. More importantly, they won’t have learnt anything. The learning is in the doing. It is the same for mathematical work as it is for a writing task; the student is supposed to think about the subject, not just produce an essay.

Another set of issues arises with computational and numerical work. I’m currently teaching Computational Physics, so am particularly interested in what rules we might adopt for that subject. A default position favoured by some is that students should not use GenAI at all. I think that would be silly. Graduates will definitely be using CoPilot or equivalent if they write code in the world outside university so we should teach them how to use it properly and effectively.

In particular, such methods usually produce a plausible answer, but how can a student be sure it is correct? It seems to me that we should place an emphasis on what steps a student has taken to check an answer, which of course they should do whether they used GenAI or did it themselves. If it’s a piece of code to do a numerical integration of a differential equation, for example, the student should test it using known analytic solutions to check it gets them right. If it’s the answer to a mathematical problem, one can check whether it does indeed solve the original equation (with the appropriate boundary conditions).
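A minimal sketch of the kind of check described above (my own illustration, not code from the post): integrate the test equation dy/dt = −y, whose exact solution is y(t) = exp(−t), with a hand-rolled classic Runge–Kutta (RK4) integrator, then compare the numerical result against the analytic solution and confirm the expected convergence order.

```python
import math

def rk4(f, y0, t0, t1, n):
    """Integrate dy/dt = f(t, y) from t0 to t1 using n classic RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Test problem with a known analytic solution: dy/dt = -y, y(0) = 1.
f = lambda t, y: -y
numeric = rk4(f, 1.0, 0.0, 1.0, 100)
exact = math.exp(-1.0)
error = abs(numeric - exact)
print(f"numeric = {numeric:.10f}, exact = {exact:.10f}, error = {error:.2e}")

# RK4 is fourth order, so halving the step size should shrink the
# error by roughly a factor of 16 -- a second, independent sanity check.
error_half = abs(rk4(f, 1.0, 0.0, 1.0, 200) - exact)
print(f"error ratio after halving h: {error / error_half:.1f}")
```

The same pattern works for any solver a student (or a chatbot) produces: run it on a problem with a known closed-form solution, check the error is small, and check that the error shrinks at the rate the method’s order predicts.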

Anyway, my reason for writing this piece is to see if anyone out there reading this blog has any advice to share, or even a link to their own Department’s policy on the use of GenAI in physics for me to copy or adapt for use in Maynooth! My backup plan is to ask ChatGPT to generate an appropriate policy…

#assessment #chatgpt #copilot #education #formativeAssessment #genai #generativeAi #physics #summativeAssessment #theoreticalPhysics


Approaching Examinations

We’re in Week 9 of teaching in the Autumn Semester at Maynooth University, which means we’ve got one eye on the forthcoming Examination Period, which starts on 10th January 2025. Examination papers have already been prepared in draft form, and are now being checked ahead of printing. A draft examination timetable has also been released to staff, but not yet to students in case it has to be revised because of clashes.

I’m still on schedule with both my modules to finish the actual content in time to use the last week for revision classes, going through past examination papers and generally helping the students prepare for the ordeals of January. There is a continuously-assessed component of both my modules, which counts for 20% of the overall grade. One purpose of these assignments is to give the students some practice at the sort of problems they might encounter in the examinations: if they can do the assignments, they shouldn’t be too fazed by the examination questions.

The purpose of the coursework is not just about passing examinations, however. I think the only way really to learn mathematical physics is by doing it; the coursework is at least as important as the lectures and tutorials in terms of actually learning the subject. I think that modern higher education involves drastic over-assessment. Too much emphasis on grades and scores can be detrimental to real learning, but assessment that is formative can be extremely beneficial. Continuous assessment provides a way to give feedback to students on how they are doing, and to lecturers on how well the message is getting across; giving grades to such coursework is really just an incentive for the students to do it. It’s not primarily intended to be summative.

Anyway, back to examinations. One big difference between our examinations in Theoretical Physics in Maynooth and those at other institutions at which I’ve taught (in the UK) is that most of the papers here offer no choice of questions to be answered. Elsewhere it is quite common to find a choice of two or three questions from four or five on the paper. In my module on Differential Equations and Complex Analysis, for example, there are four questions on the examination paper and students have to do all of them for full marks.

One advantage of our system is that it makes it much harder for students to question-spot in the hope that they can get a good grade by only revising a fraction of the syllabus. If they’re well designed, a few longish questions can cover most of the syllabus for a module, which they have to in order to test all the learning outcomes. To accomplish this, questions can be split into parts that may be linked to each other to a greater or lesser extent in order to explore the connections between different ideas, but also sufficiently separate that a student who can’t do one part can still have a go at others. With such a paper, however, it is a dangerous strategy for a student to focus only on selected parts of the material in order to pass.

As an examiner, the Maynooth style of examination also has the advantage that you don’t have to worry too much if one question turns out to be harder than the others. That can matter if different students attempt different questions, as students might be penalized if they chose a particularly hard one, but not if everyone has to do everything.

But it’s not just the number of questions that’s important, it’s the duration. I’ve never felt that it was even remotely sensible for undergraduate physics examinations to be speed tests, which was often the case when I was a student. Why the need for time pressure? It’s better to be correct than to be fast, I think. I always try to set examination questions that could be done inside two hours by a student who knew the material, including plenty of time for checking so that even a student who made a mistake would have time to correct it and get the right answer. If a student does poorly in this style of examination it will be because they haven’t prepared well enough rather than because they weren’t fast enough.

#assessment #Examinations #FormativeAssessment #MaynoothUniversity #SummativeAssessment

AI-driven tools can also offer feed forward, giving students clear, actionable steps to enhance their learning journey. Let’s embrace AI to empower our students! #AIinEducation #StudentFeedback #EdTech #AI #FormativeAssessment #education

Personalized feedback can then be provided, helping each student improve and feel supported. Let's harness technology to enhance our teaching!

#EdTech #PersonalizedLearning #FormativeAssessment #education

This approach fosters deeper reflection, clearer communication, and more effective feedback. Let's support our students in meaningful ways! #FormativeAssessment #StudentEngagement #TeachingTips #education

From this list my favourites are Google Forms and Nearpod. What's yours?

#education #edtech #formativeassessment #teaching #Padlet #GoogleWorkspace #Google #AI

Best Free Formative Assessment Tools for Teachers - https://www.techlearning.com/how-to/formative-assessment-tools-and-apps


The best free formative assessment tools can help teachers track student progress and personalize learning.

TechLearningMagazine

Let's prioritize understanding over speed and empower students to reflect on their learning journey. 🌟📝

#RethinkAssessment #FormativeAssessment #ReflectionInLearning #KnowledgeOverSpeed

Interesting podcast from @dlf

🔍 Much of what happens in schools today needs to be radically re-examined.
🎓 @bildungslandnrw encourages teachers to explore the possibilities of AI openly, since AI is unavoidable in education.
📝 #KI cannot grade
👩‍🏫 Teachers play a key role.
📘 Teachers are shifting from transmitters of knowledge to learning companions.
📊 #formativeassessment
🏫 School remains a place of social interaction and shared exchange of knowledge.

https://www.deutschlandfunk.de/ki-in-der-schule-muessen-lehrer-jetzt-alles-anders-machen-dlf-b97ac4b7-100.html

Artificial intelligence in schools: how is AI changing the classroom?

Artificial intelligence in schools can help improve teaching. To support pupils in a more targeted way, teachers now need to rethink their approach.

Deutschlandfunk