An interview about generative AI in academic life

In this episode of the Open University Praxis Podcast, host Dr Olivia Kelly is joined by sociologist Dr Mark Carrigan, Senior Lecturer in Education at the University of Manchester and AI Fellow at the Institute for Teaching and Learning. Mark’s work has been central to understanding how digital platforms, from early social media to today’s large language models, are reshaping academic practice, identity, and community.

Together, Olivia and Mark explore the rapid rise of generative AI and its profound implications for higher education. Their conversation moves beyond the usual task‑based narratives to examine deeper sociological issues, including:

  • How AI is transforming the ‘invisible labour’ of teaching, marking and student support
  • What tensions emerge when academics use AI to cope with workload pressures while students are warned against it
  • How trust between students, staff and institutions is being reshaped by AI-mediated communication
  • Why discipline-based conversations matter for developing meaningful AI norms and policies.

Mark also reflects on the psychological impact of AI on academic work, the risks of accelerating already‑intense workloads, and the urgent need for collective, rather than individualised, responses to technological change. This is a rich, nuanced discussion for anyone interested in the future of scholarship, the ethics of AI in education, and the shifting landscape of academic life. 


#academicPractice #generativeAI #higherEducation #LLMs #scholarship #university

Episode 13: Generative AI in Academic Life with Dr Mark Carrigan


Today's blog post asks: what do space alien dreams have to do with checking historical data? More than you might think!

https://silencesandsounds.blogspot.com/

#blog #Research #AcademicChatter #data #narrative #histodons #AcademicPractice

Living by the proverb: Developing as a creative teacher in higher education | Open Scholarship of Teaching and Learning

What a nice day! No research, no work, just family and friends. Friends I have not seen for too long.

And now: a shower, a cuddle with my son, bed, and perhaps a podcast.

#AcademicPractice

Machine writing and the challenge of a joyful reflexivity

If you see the use of generative AI as being about producing entire outputs purely based on your instructions, without having to contribute directly yourself, you miss out on the multifaceted ways in which we can work with these systems as part of the writing process. Rather than substituting for our own writing, machine generation can become interspersed with it. We write over things which generative AI has produced. We use generative AI to write over things we have produced. We rapidly find ourselves with nested hybrid passages in which automated and human outputs intermingle in complex ways. The problem isn’t keeping human-generated text free from machine-generated text. The real issue is finding ways of using these new capacities of machine generation to realize the values that lead us to write in the first place. It’s the quality of what we produce that matters, more than how we produced it.

It remains an open question whether it should be admissible to include any machine-generated text in academic outputs. The evidence suggests that many academics are already using generative AI to author parts of their texts in problematic and unspeakable ways. I worry about a situation of dual consciousness in which everybody explicitly states that we shouldn’t include machine-generated text in our work, while there is widespread recognition that many people are doing exactly that. In these situations, they might offer the excuse that they were particularly busy, that this was a one-off, or that some other extenuating circumstance allowed for the use of machine-generated text in this particular output.

This dual consciousness is a familiar feature of professional discussions about how we use technologies which have recently entered our lifeworlds, or how we cope with the shifting technical infrastructure through which we disseminate our work. I have been in editorial board meetings where a lunchtime conversation about the idiocy of metrics is followed by a serious exchange about how we can improve the journal’s impact factor, or better publicize the improvements we have already seen. I have encountered academics who talk stridently, in print or in person, about the dangers of an attention economy infecting higher education, then ask with utmost seriousness in a workshop how they can increase their number of Twitter followers.

In fact, I have taken part in these conversations without feeling the cognitive dissonance that, now that I record the experiences in writing, it seems they should immediately have provoked. It is unnervingly easy to fall into this gap between how we talk and how we act, imagining that we are taking an important stance when we criticize something while nonetheless acting in ways which actively endorse it in practice (Bacevic 2020). What matters is how we act rather than how we talk about our action or inaction. It’s not enough to claim we recognize the temptations of using generative AI to increase our productivity if we fail to examine our actual, concrete experiences of that temptation in a way liable to shape the choices we make about how to act.

I certainly understand the temptation; it’s something I have experienced myself. In a recent writing project, for instance, I was facing an impending deadline. I had, on principle, refused to use AI-generated text in my work, yet while struggling to meet this deadline I was suddenly struck by the realization that I could finish the piece and move on with my day in twenty minutes if I drew on ChatGPT or Claude to write it for me. The possibility of an immediate resolution to the challenge, that this thing we’re struggling with, that is making us feel incapable, could be overcome with machine assistance, is very tempting. When we’re busy, stressed, rushing, or overworked, we face these challenges as a routine part of our work and life. And the possibility that generative AI might then ride to the rescue, relieving us of our burden, is going to be very enticing.

This is exactly why, if we are to establish norms about the scope of use of generative AI, we need to do whatever we can to ensure that they’re binding, that they’re things that we really mean, that we really want to follow, rather than things that we expect others to do in public discussion, while privately doing something else entirely, and comforting ourselves by saying that we know other people are doing the same. We need to find some way to be consistent, and we need to grapple with the real and serious problems at stake here, rather than offering superficial answers, which we think are what our colleagues want to hear. There are deep issues here, and if we fail to get to grips with them, I’m arguing that not only do we forgo the pleasures that come from writing, we are also at risk of doing fatal damage to the knowledge system over time.

It matters, therefore, what we do in those moments of temptation. It matters that we are able to talk about those temptations, to recognize that we face common professional problems, and that these emerging technologies provide potentially destructive solutions to them. It’s only through these discussions that we will find professional norms and standards adequate to the challenges on the horizon; it’s also the only way we will elaborate our own reflexivity as writers, and the reflexivity of the writing culture within the academy, to meet those challenges. What I frame as the enjoyment of writing is a matter of finding a joyful reflexivity, in which our relationship to the process isn’t just an exercise we methodically plod through as a matter of obligation, but rather an activity we are passionate about.

#academicPractice #digitalScholarship #generativeAI #LLMs #machineWriting #reflexivity #writing

I really enjoyed this discussion with Klaus Mundt and Michael Groves for the TELSIG Podcast. There’s a reading list attached in the YouTube comments. Here are some notes from the discussion which are my attempts to characterise the insights shared in the podcast, rather than offer my own analysis:

  • It’s now possible to work primarily in your first language, both inside and outside the class, thanks to the affordances of machine translation. This has been developing for a long time, but it has rapidly accelerated in recent years.
  • We’ve assumed that someone graduating from an English-speaking university will have developed conversational proficiency through immersion. This is no longer the case, and we need to address that. The assumption that you get better at English just by being on campus was always flawed, however, holding only for some students. Communication support may actually facilitate immersion, by making it easier for students to interact with speakers of other languages.
  • This expectation has implications for the university brand, in so far as certifying learning implies graduate outcomes perceived by employers. It’s important to distinguish these concerns, even if they’re valid, from questions of academic conduct, where this isn’t explicitly stated in the learning outcomes.
  • The assumption that this is an academic integrity issue urgently needs to be examined and unpacked, because it doesn’t hold up once you look at it closely.
  • Do we teach English so that students can thrive at university? Or do we teach them to thrive intellectually using the best tools available? This was a great question from the podcast.
  • Just telling students they must not use this isn’t a tenable strategy for dealing with it, particularly in a sector which is aggressively recruiting international students.
  • There are signs of staff adjusting how they mark students’ work, reducing the emphasis on grammar and vocabulary in the marking criteria. If the technology can do it, should we be giving credit for it? Virtues like ‘readability’ can be a way of preserving composition and communication skills as things which are assessed, even if we move away from grammar and vocabulary.

https://markcarrigan.net/2024/07/22/some-thoughts-on-machine-translation-in-higher-education/

#academicPractice #higherEducation #machineTranslation #universities


Dr Donna Lanclos is an anthropologist who researches academia, currently working in the US, UK and Ireland. You can follow her at:

➡️ @DonnaLanclos

Lanclos has a blog at https://www.donnalanclos.com

#DrDonnaLanclos #DonnaLanclos #Academic #Academia #AcademicPractice #AcademicPractices #Research #Researcher #Researchers #Anthropology #Education #Humanities #Science