Universities have already been transformed by generative AI

This Atlantic piece by Ian Bogost makes the argument I was trying to articulate earlier in the summer. This is how I put it at the time:

This means that universities need to treat generative AI as something that has happened. Not something that is happening or will happen. It’s not a change to prepare for or a tide we can hold back but rather a feature of our organisations that we need to understand and steer in constructive rather than destructive directions. My perception is that a surprisingly large number of academics are still locked into this sense that we’re in the early stages of a change, rather than coping with a shift that has already happened. We saw from yesterday’s deeply incremental update to GPT-5 how the growth in the capacities of the frontier models is plateauing. The innovation we’ll see in the next couple of years will be at the level of software design and affordances enabled by engineering optimisation rather than a fundamental leap in what models can do.

This is how Bogost makes the argument:

Three years later, the AI transformation is just about complete. By the spring of 2024, almost two-thirds of Harvard undergrads were drawing on the tool at least once a week. In a British survey of full-time undergraduates from December, 92 percent reported using AI in some fashion. Forty percent agreed that “content created by generative AI would get a good grade in my subject,” and nearly one in five admitted that they’ve tested that idea directly, by using AI to complete their assignments. Such numbers will only rise in the year ahead.

Where I disagree with him is the claim that the transformation is complete. In fact I think this is quite a dangerous framing, for a number of reasons. Firstly, there’s a lack of clarity about what is and isn’t acceptable use of generative AI. For example, the HEPI 2025 research found that no use of LLMs was endorsed as legitimate by more than two-thirds of students. Secondly, we’re seeing a move from prompting-intensive to prompting-light approaches to models, which is hugely significant: the cognitive labour involved in using LLMs effectively is rapidly shrinking. Thirdly, post-training and software design are going to take over from model upgrades as the driving force of competition, which means that new functionality is going to emerge in unpredictable ways. Consider how distinct NotebookLM is in relation to ChatGPT and multiply that a few times over.

In this sense I would argue that LLMs have become ubiquitous without being normalised. What we can expect now are normalising pressures, as it becomes increasingly untenable to imagine either that we can critique models out of existence or prohibit their use in any straightforward way. I can hear some readers groan at this ‘AI realism’, but I’ve been saying for three years that, with models marketed directly to consumers by the most powerful companies in history, whose financial fate depends on keeping this bubble inflated, academics weren’t going to be able to stop the spread of LLMs. Yes, there’s a risk of self-fulfilling prophecy if you start from a position of defeatism, but I also think it was always an accurate empirical assessment of the balance of power involved in the process we are talking about. Honestly, I’m also not sure we should have stopped it even if we could, even if I would have felt conflicted about that. For example, imagine if Wikipedia had been invented by Meta as a commercial product selling subscriptions to universities. I would have thought the educational possibilities outweighed the concerns about the model of commercialisation. It’s far from a perfect analogy but it’s an interesting thought experiment.

The place where I entirely agree with Bogost is that we need to respond to the rapid normalisation of LLMs amongst our students. This response has not yet happened, and unless we find a way to engage with them constructively and proactively about the everyday reality of model use, we are going to lose any capacity to steer and influence this normalisation:

“I cannot think that in this day and age that there is a student who is not using it,” Vasilis Theoharakis, a strategic-marketing professor at the Cranfield School of Management who has done research on AI in the classroom, told me. That’s what I’m seeing in the classes that I teach and hearing from the students at my school: The technology is no longer just a curiosity or a way to cheat; it is a habit, as ubiquitous on campus as eating processed foods or scrolling social media. In the coming fall semester, this new reality will be undeniable. Higher education has been changed forever in the span of a single undergraduate career.

If we’re concerned about how students are using LLMs, we need to ask why they are inclined to use them in that way. What is it about the context, particularly their context as the particular kind of student they are, which inclines them to this use? We also need to open the black box of practice, as I’ve been putting it in recent talks, in order to recognise the sheer variety of ways in which students are using LLMs. The evidence suggests that submitting entirely LLM-generated text for assignments is far from a widespread practice. But it is growing, rather inevitably, as the wider use of LLMs grows. We need to find a way to intervene in how students are thinking practically about their use of models, something the weird combination of censoriousness and empirical incuriosity which has been dominant heretofore renders pretty much impossible. As Bogost points out, we have to help our students grapple with the temptation to outsource which LLMs offer:

And like the other students I spoke with, he’s often in a rush. Wynter is a double major in educational studies and American-culture studies; he has also served as president of the Association of Black Students, and been a member of a student union and various other campus committees. Those roles sometimes feel more urgent than his classwork, he explained. If he does not attend to them, events won’t take place. “I really want to polish up all my skills and intellect during college,” he said. Even as he knows that AI can’t do the work as well, or in a way that will help him learn, “it’s always in the back of my mind: Well, AI can get this done in five seconds.”

This perfectly captures why I’m so worried about the coming year, given the evidence that use grew from a small majority to near-total in UK HE over the last academic year. We won’t just see a continued expansion of use, we’ll see an intensification of use as existing students find new ways of using LLMs in their work:

But my recent interviews with colleagues have led me to believe that, on the whole, faculty simply fail to grasp the immediacy of the problem. Many seem unaware of how utterly normal AI has become for students. For them, the coming year could provide a painful revelation.

There’s an obvious solution to the assessment challenges, as my colleague Drew Whitworth long ago persuaded me: switch to processual forms of assessment which decentre or dispense with the outcome-centric modes that are necessarily vulnerable to software producing outcomes in response to natural language requests. The problem is that doing processual assessment at scale is near impossible. I ran a 140-person unit with Drew last year, and at that scale process only works through digital mediation, which in turn means that it gets broken down into micro-outputs which can in some cases be produced using LLMs. The solution isn’t available because of the scale on which we’re forced to teach and learn within the contemporary political economy of higher education. I like Bogost’s concluding argument that there’s a huge redesign exercise coming and the sooner we start, the sooner we get it over with:

The existence of these stressors puts higher ed at greater risk from AI. Now professors find themselves with even more demands than they anticipated and fewer ways to get them done. The best, and perhaps the only, way out of AI’s college takeover would be to embark on a redesign of classroom practice. But with so many other things to worry about, who has the time? In this way, professors face the same challenge as their students in the year ahead: A college education will be what they make of it too. At some point, everyone on campus will have to do the work.

But I also think it’s wrong, at least in the UK context. The problem isn’t LLMs. The problem is the chaotic way in which LLMs are diffusing, coupled with a system already stretched towards breaking point. Redesign can only mitigate the problems because ultimately, without a different funding model, we’ll be working with staff:student ratios that can only compel automation rather than provide an occasion for human-centred design.

#AI #assessment #assessmentIntegrity #higherEducation #ianBogost #learning #LLMs #pedagogy #students

Are UK universities ready to cope with generative AI in the 25/26 academic year?

In a month we’ll enter the second full academic year in which large language models (LLMs) have been a routine feature of staff and student practice within universities. While their uptake wa…

Mark Carrigan

"The terms social network and social media are used interchangeably now, but they shouldn’t be. A social network is an idle, inactive system—a Rolodex of contacts, a notebook of sales targets, a yearbook of possible soul mates. But social media is active—hyperactive, really—spewing material across those networks instead of leaving them alone until needed."

#IanBogost, 2022

https://www.theatlantic.com/technology/archive/2022/11/twitter-facebook-social-media-decline/672074/

#SocialMedia #SocialNetworking

The Age of Social Media Is Ending

It never should have begun.

The Atlantic

In the dim

Why? If I’d asked them, they would probably have said: to reduce distractions and improve focus. Programming a computer is a bit like repairing a very tiny machine with precision tools while looking under a microscope. Quiet and calm help facilitate that process. Programmers may also just prefer the dark.

~ Ian Bogost, from We’re All in ‘Dark Mode’ Now

slip:4utete7.

Hey look, “quiet and calm” has the literal calm of calm technology. Bright, flashing lights are preceded by trigger warnings for a reason. I’ve been cultivating warm-toned lighting, and earth tones, in my working spaces for a long time. I cut my teeth on the Internet with VT-100 terminals: green type on black, on cathode ray tubes where “screen burn-in” was a real hazard. These days a lot of my screens have ‘paper-white’ backgrounds with black text. It’s been nice to watch the world catch up over the last few decades.

ɕ

#CalmTechnology #IanBogost

Craig Constantine

Caution: Blogging. Randomly.

Craig Constantine

The irony of this is the fash are convinced that universities are brainwashing kids into developing #CriticalThinking skills, exposing them to heterogeneous viewpoints and diverse people, all of which triggers their latent authoritarianism.

“A lot of colleges and universities are at the point now where they have to stop being what they are. And have to start being something else.” - #IanBogost https://www.theatlantic.com/technology/archive/2024/01/dei-universities-are-broken/677288/
https://mastodon.world/@StillIRise1963/112320981892640949

The Real Problem With American Universities

It isn’t DEI.

The Atlantic

#TechnoNarcissism is the unstated conclusion of #IanBogost’s piece in @TheAtlantic about the #CompSci problem in universities.

Need more #humanities blended in to foster #CriticalThinking in an ecosystem over-optimized for #STEM as a result of the economy and digitalization.

https://www.theatlantic.com/technology/archive/2024/03/computing-college-cs-majors/677792/

Universities Have a Computer-Science Problem

The case for teaching coders to speak French

The Atlantic

@ultranurd

“Universities are #conservative #institutions, steeped in #tradition.”

🔥by #IanBogost

In this fascinating article from 2014 on the #Darmok episode, #IanBogost argues that the #Tamarians don’t speak in metaphors, but in detailed procedural allegories. So, kinda like, “Sulu, at the helm when the Klingons attacked at Organia” would be, “shields up, break orbit, maximum warp.” #StarTrek

https://www.theatlantic.com/entertainment/archive/2014/06/star-trek-tng-and-the-limits-of-language-shaka-when-the-walls-fell/372107/

Shaka, When the Walls Fell

In one fascinating episode, Star Trek: The Next Generation traced the limits of human communication as we know it—and suggested a new, truer way of talking about the universe.

The Atlantic

My son’s now old enough to get ‘loyalty cards’ for supermarkets, coffee shops, and places to eat. He thinks this is great: free drinks! money off vouchers! What’s not to like? On a recent car journey, I explained why the only loyalty card I use is the one for the Co-op, and introduced him to the murky world of data brokers.
In this article, Ian […]

https://thoughtshrapnel.com/2023/09/15/the-supermarket-is-a-panopticon/

The supermarket is a panopticon


Doug Belshaw's Thought Shrapnel

Dunbar was an anthropologist, not a neuroanatomy researcher, and Dunbar's Number was an ethnographical observation, not a biological one.

"Your social life has a biological limit: 150. That’s the number—Dunbar’s number, proposed by the British psychologist Robin Dunbar three decades ago—of people with whom you can have meaningful relationships."

#IanBogost, 2021

https://www.theatlantic.com/technology/archive/2021/10/fix-facebook-making-it-more-like-google/620456/

That's your opening? Not off to a great start there, Ian.

#DunbarsNumber

Fix Facebook by Making It More Like Google+

Breaking up social-media companies is one way to fix them. Shutting their users up is a better one.

The Atlantic

"Now, underneath the friendly and familiar blue icon with a white bird, that letter alone was displayed—X—as if my iPhone was affirming that Elon Musk’s Twitter had become an error."

#IanBogost

https://www.theatlantic.com/technology/archive/2023/07/twitter-x-rebrand-juvenile-internet-style/674875/

The Ugly Honesty of Elon Musk’s Twitter Rebrand

The platform’s new logo seems a little juvenile. So does the internet.

The Atlantic