AI is not inevitable. Nothing in human societies is inevitable, because we design them. Healthcare can be free for the public. Books can be bought instead of bombs. Universities can be free for students, and students can even receive a stipend to live on. Don't let companies dictate the future.

Read more in section 3.2 here https://doi.org/10.5281/zenodo.17065099

@olivia Olivia, what would it mean for me to “refuse adoption” in universities when it is students who are the drivers for my courses and they are widely using AI in ways that are already forbidden?

I feel like the “resistance” and critique of inevitability talk isn’t quite connecting with my reality on the ground

@UlrikeHahn @olivia

I glanced at your work on "Science communication as collective intelligence".

I think that we need to redesign university courses so that learning is communal and oriented toward solving a specific, shared goal.

I think though that Olivia's points are complementary to that goal. Universities should resist AI adoption as imposed by external economic forces.

@apostolis @olivia I have already redesigned both my assessments and my teaching in response to students’ AI use, but that kind of adaptation feels like it conceptually falls more under “inevitability” than “resistance”

right now, what’s most valuable to me personally (given the starting point that every single student in my courses has somehow used AI, and a good proportion uses it *a lot*) is advice from other academics on how exactly they are trying to change what they do in response.

telling me “I can resist” doesn’t feel helpful in that way

@apostolis @olivia I guess a different way of putting all this is that for the multiple ways in which AI is currently negatively affecting my work, both in teaching and research, the drivers underlying the use are not “industry forces” in the way the quoted passage in Olivia’s post assumes; it is the independent, voluntary action of other individuals within the system (students, other researchers)

that whole frame (industry forces) captures well what is happening in many jobs, but it doesn’t capture what is happening in mine

@apostolis @olivia the reason why this ultimately matters is that pushing back against the real driver (the “organic” adoption of these tools by individuals) requires me to understand and engage with the perceived value and function these tools have for them…

…and that means trying to understand both what they can and what they can’t do. Simply declaring that these tools are garbage (“semantically meaningless random text generator”) isn’t useful for actually productively countering AI use in this configuration…(if they genuinely were meaningless random text generators I wouldn’t be faced with the negative effects in the first place).

the Fodor quote doesn’t feel like it’s aimed at that kind of understanding

@UlrikeHahn @apostolis yeah, I know many do not like many of the quotes and have trouble with my position

But yes, I do think we need to educate the students: Guest, O., Suarez, M., & van Rooij, I. (2025). Towards Critical Artificial Intelligence Literacies. Zenodo. https://doi.org/10.5281/zenodo.17786243

Also: https://www.ru.nl/en/education/education-for-professionals/overview/critical-ai-literacies-for-resisting-and-reclaiming

@olivia @apostolis I don’t have trouble with your position, Olivia. I have trouble with the fact that I don’t think the recommendations (including in the linked preprint) are connecting fully with the problem. It would be great if they were, but, from my day-to-day experience of how AI is up-ending science academia, they aren’t. Not because they are wrong, but because they are insufficient

so it’s important to me to figure out why they’re insufficient and what else we could/should be doing

@UlrikeHahn @apostolis ok, I'm excited to see what you come up with!

@olivia @apostolis I don’t have any solution…it all feels pretty intractable to me at the moment, so I’m mainly struggling to understand the problem

what AI is doing to publishing reform is as good an example as any (see below). There is an “industry force” at play here only in as much as there is an industry irresponsibly making available particular products.

The actual causal pathways by which AI is breaking the system involve multiple distinct actors with very different motivations (outright AI slop/fraud, malicious actors, scientists using AI for research in ways that increase productivity but still leave them in charge). Each of these is different, but they are all combining into an overall negative effect

what I don’t see is how we can solve anything (if we indeed can) without unpacking all that in detail

https://write.as/ulrikehahn/is-ai-killing-scientific-reform

Is AI killing scientific reform?

@UlrikeHahn @apostolis I don't fully grasp what I said that makes you think I am against different analyses here. Each featured paper here analyses AI from a different angle, pretty clearly with different actors: https://olivia.science/ai/#featuredresearch e.g. https://doi.org/10.31234/osf.io/dkrgj_v1

@olivia @apostolis I don’t think I said you are against different analyses?

the point I was trying to make is simply that what is breaking things right now is a confluence of forces and actors. If we are going to counter the destructive effects we need a systemic analysis of how these forces are interacting.

I don’t take you to be someone who would object to that in principle ;-)

I suspect what we do have disagreements on is what the relative importance of these different forces and actors are, and what’s required to push back as a result (even in principle)

@UlrikeHahn @apostolis

"Most importantly of all, resistance can and should take on many forms. Remember to rest and take care of yourself and your community. If talking to friends and colleagues is easy, then try to engage them on these issues. If it is not possible to do so, you can instead (or in addition) seek out allies online."

https://olivia.science/before/#can

We've been here before!

Parallels between AI and tobacco, and other warnings.
