Are UK universities ready to cope with generative AI in the 25/26 academic year?

In a month we’ll enter the second full academic year in which large language models (LLMs) have been a routine feature of staff and student practice within universities. While their uptake was originally driven by a sense of novelty, there’s increasing evidence that LLMs are now an ingrained feature of life for a growing user base. OpenAI claims ChatGPT has 700 million weekly active users. There were 1.7 billion downloads of GenAI apps in the first half of 2025. There’s a clear trend of users spending more time in these apps, including at weekends. It’s therefore unsurprising that HEPI found that 92% of undergraduate students (n=1,041) use generative AI in some form, with significant growth on the 2024 survey:

This includes 88% using it in assessments in some form:

As someone who ran a large PGT programme at a Russell Group university for the last few years, I suspect the numbers are higher still for international PGTs, particularly if we include translation software in the category of ‘generative AI’. Interestingly, they found a “digital divide based on socio-economic grade” in which “Some functions are used much more by students from higher socio-economic groups (A, B and C1), including summarising articles, structuring thoughts and using AI edited text in assessments”.

This means that universities need to treat generative AI as something that has happened, not something that is happening or will happen. It’s not a change to prepare for or a tide we can hold back, but rather a feature of our organisations that we need to understand and steer in constructive rather than destructive directions. My perception is that a surprisingly large number of academics are still locked into the sense that we’re in the early stages of a change, rather than coping with a shift that has already happened. Yesterday’s deeply incremental update to GPT-5 showed how growth in the capacities of the frontier models is plateauing. The innovation we’ll see in the next couple of years will come at the level of software design and affordances enabled by engineering optimisation, rather than a fundamental leap in what models can do.

The first wave of innovation has happened and it’s time for universities to get to grips with it. This means recognising not just what models can do, which is largely understood at this stage, but how widely these models are being used to do such things across the university system. These are mainstream tools, used by an overwhelming majority of students and, I suspect, a (small) majority of academic staff. There’s an urgent need to grapple with the implications of this in a practical mode rather than speculatively arguing about what it all means for teaching, learning and research.

When it comes to student use of LLMs, this means shifting from questions of whether students are using models to why and how they are using them. It’s only if we’re dealing with specific uses in real-world contexts that we can have meaningful debates about what constitutes acceptable practice. The HEPI research indicates significant uncertainty amongst students about what is acceptable, as evidenced by the fact that none of these activities is endorsed as acceptable by more than two-thirds of respondents:

Consider Anthropic’s research into how university students are using Claude, which shows significant variation between higher-order and lower-order skills. My concern at the moment is that the diffuse nature of AI policy in universities (i.e. general principles which lack the scaffolding to support practical reasoning on the ground) means we often talk about ‘AI use’ as if these activities were interchangeable:

https://markcarrigan.net/wp-content/uploads/2025/05/image-9.png

Clearly they are not. I struggle to think of any circumstances in which using LLMs to ‘explain concepts’ is problematic (assuming a baseline capacity to use the model), whereas pretty much every circumstance I can imagine for ‘use in assessment without editing’ seems problematic to me. Many academics seem to imagine that most, if not all, use of LLMs falls into the latter category, which means conversations with students will lack recognition of the varied ways in which students are actually using models. We have a language for talking about these issues which is ready to hand:

  • Bloom’s Taxonomy: creating, evaluating, analysing, applying, understanding, remembering.
  • Well-documented examples of student LLM use: explaining concepts, summarising a relevant article, suggesting research ideas, structuring thoughts, use in assessment after editing, use in assessment without editing.

We urgently need to talk with students in specific terms about educational LLM practices, which can be understood through the established categories of Bloom’s Taxonomy. These conversations need to recognise the diversity of motivations which students have for using LLMs, illustrated here by the HEPI research again:

This is what I mean by a focus on what students are doing with LLMs (educational LLM practices) and why they are doing it (student motivations for LLM practices). The conceptual architecture of Bloom’s Taxonomy then helps us understand the implications of these practices for teaching and learning, in ways informed by basic AI literacy. For example, we should advise students to avoid using LLMs for remembering, not because it’s inherently wrong to do so but because models aren’t databases suited to factual recall. In this sense I’m suggesting a number of elements of an adequate response:

  • Direct conversations with students in terms of educational LLM practices and student motivations for LLM practices [there’s a huge problem here of creating environments in which students feel comfortable sharing all aspects of their practice]
  • Equipping teaching staff to better understand the range of educational LLM practices and student motivations for LLM practices
  • Reporting on the perceptions of teaching teams about educational LLM practices and student motivations for LLM practices
  • Conversations at the level of teaching teams, departments and schools about the implications of these practices for teaching and learning

This picture is evolving too rapidly for the established repertoires of educational research. But equally, we don’t need a perfect record so much as a working understanding. If this is enacted dialogically, creating spaces for students and staff to have these conversations, it contributes in itself to a more reflective culture around LLM use in universities, moving us beyond some of these speculative debates. Ideally this takes place at the level of teaching teams or departments, for two reasons:

  • There will be substantial variation in educational LLM practices and student motivations for LLM practices across cohorts. Furthermore, my experience of one-year PGT cohorts is that they can differ substantially from one year to the next. These conversations need to be as close to practice as possible in order to ensure their relevance.
  • The implications of these practices for teaching and learning will vary significantly across disciplines. This is also a question best answered collegially, in terms of the prevailing norms and standards within discipline areas.
Until we can create these spaces for reflective dialogue, I don’t think universities are prepared for the 25/26 academic year. At present we have a chaotic landscape of individual practice without clear norms uniting staff and students, while policy making has a reactive character in spite of attempts to articulate principles and practices. I wrote two years ago that universities were organised in a way that left a gap between policy and practice which would be fatal with LLMs:

    In siloed and centralised universities there is a recurrent problem of a distance from practice, where policies are formulated and procedures developed with too little awareness of on the ground realities. When the use cases of generative AI and the problems it generates are being discovered on a daily basis, we urgently need mechanisms to identify and filter these issues from across the university in order to respond in a way which escapes the established time horizons of the teaching and learning bureaucracy.

    In practice this means that individuals and teams confront pedagogical situations in which there’s a lack of clarity about what university strategy and rules mean in practice. Here, now, with these students in this room: what should I be doing? My suggestion is that addressing this problem needs mechanisms to:

  • Describe the situation on the ground in careful and precise ways: educational LLM practices and student motivations for LLM practices
  • Engage in reflective dialogues about the implications of these practices for teaching and learning
  • Feed up the situation on the ground, as well as how colleagues reflect on it, in a way that can inform central university policy making
  • Cascade shifts in university policy in a translational manner which speaks directly to this situation on the ground
Until we build this missing link, I think “institutional responses will actually amplify the problems by communicating expectations that are incongruous with a rapidly evolving situation”, as I put it two years ago. The challenge is how to build this link in a way that is consistent with intensified workloads and an increasingly generalised crisis within the sector. This means it has to be lightweight enough to work within existing structures, specific enough to address real practices rather than abstract principles, and iterative enough to evolve as rapidly as the situation on the ground. It also has to enable knowledge and perspectives to be shared across disciplines while retaining their disciplinary specificity, and it probably needs to be asynchronous to a large extent, up to the point where asynchronicity starts to hurt the quality of the dialogue.

    #BloomSTaxonomy #ChatGPT #education #generativeAI #higherEducation #learning #LLMs #pedagogy #strategy

    How are students using Generative AI in UK universities?

    Honestly, I’m not sure how worried we should be about these findings from HEPI (n=1,041), given that the sector seems to have got past its initial inclination to prohibit. If we’re in a situation where only 12% of students are not using LLMs in their assessments, then what matters is steering use towards epistemic agency* and away from LLMs supporting a turbo-charged transactional engagement with knowledge.

    It’s interesting to contrast these findings with Anthropic’s study of university students using Claude, classified in terms of Bloom’s taxonomy:

    The dynamics of cognitive outsourcing (and potential lock-in) differ as you move from lower- to higher-order thinking skills. I struggle to see a problem with students using LLMs to support their understanding of materials, much as I struggle to see a problem with academics using LLMs to produce materials which are easier to understand. Sure, we might rapidly end up in a situation where this learning interaction is mediated by LLMs by default, but I don’t see a fundamental difference in kind from its being mediated by other digital platforms (e.g. the LMS) or outputs (e.g. PowerPoint). It’s a case of better or worse design rather than something human being lost through the introduction of a technological element.

    I think applying and analysing, by definition, lend themselves to agentive engagements with knowledge. You can’t get an LLM to do something useful unless you’re thinking about what you’re asking, which means that to at least some extent an epistemic capacity is being exercised. Certainly students could try and fail to do this, but that’s a different kind of problem, to be addressed through the register of AI literacy. The pedagogical challenge lies in recognising how students are doing this in order to design learning processes which support increasingly purposive applications, rather than just assuming they will learn in the same way we did.

    It’s evaluating and creating where it gets more concerning. If you’ve already developed these capabilities, LLMs can be used to speed up the process (though a soft lock-in might result over time) or to enhance it through the activity I describe as rubber ducking. The problem arises if you haven’t learned how to do this without the LLM, such that the composite capacity (e.g. writing a report) develops with the LLM baked into it from the outset. For example, reliance on LLMs for an outline only concerns me if students haven’t first learned to produce one without the LLM. Relying on an LLM to critically evaluate your work and suggest room for improvement carries a similar risk of cognitive outsourcing, one which is unlikely to be addressed after university by most students.

    This is a long-winded way of saying that we urgently need to get beyond the category of ‘AI’ in how we think about these pedagogical challenges. The relational dynamics of the student’s interaction with the LLM become more important to recognise the further up the taxonomy we go. Exactly what ‘creating’ means can now vary immensely depending on the pattern of interaction the student has with the LLM.

    It’s also interesting to see that:

    • The main factors putting students off using AI are being accused of cheating (said by 53% of respondents) and getting false results or ‘hallucinations’ (51%). Just 15% are put off by the environmental impact of AI tools.
    • Students still generally believe their institutions have responded effectively to concerns over academic integrity, with 80% saying their institution’s policy is ‘clear’ and three-quarters (76%) saying their institution would spot the use of AI in assessments.
    • The proportion saying university staff are ‘well-equipped’ to work with AI has jumped from 18% in 2024 to 42% in 2025.

    I think students are over-estimating how effectively institutions can identify (and act on!) problematic LLM use, and over-estimating the AI literacy of academic staff. If I’m right and student perception catches up with that reality, could ‘cheating’ as an inhibiting factor start to collapse from that figure of 53%?

    *Thanks to my collaborator Peter Kahn for introducing me to this notion

    #assessmentIntegrity #BloomSTaxonomy #cheating #higherEducation #learning #LLMs #pedagogy
