AI is not inevitable. Nothing in human societies is inevitable because we design them. Healthcare can be free for the public. Books can be bought instead of bombs. Universities can be free for students, and students can even receive a stipend to live on. Don't let companies dictate the future.

Read more in section 3.2 here https://doi.org/10.5281/zenodo.17065099

@olivia Olivia, what would it mean for me to “refuse adoption” in universities when it is students who are the drivers for my courses and they are widely using AI in ways that are already forbidden?

I feel like the “resistance” and critique of inevitability talk isn’t quite connecting with my reality on the ground

@UlrikeHahn @olivia

I glanced at your work on "Science communication as collective intelligence".

I think that we need to redesign university courses so that learning is communal and directed at solving a specific goal.

I think though that Olivia's points are complementary to that goal. Universities should resist AI adoption as imposed by external economic forces.

@apostolis @olivia I’ve already redesigned both my assessments and my teaching in response to students’ AI use, but that kind of adaptation feels like it conceptually falls more under “inevitability” than “resistance”

right now, what’s most valuable to me personally (given the starting point that every single student in my courses has somehow used AI, and a good proportion uses it *a lot*) is advice from other academics on how exactly they are trying to change what they do in response.

telling me “I can resist” doesn’t feel helpful in that way

@apostolis @olivia I guess a different way of putting all this is that, for the multiple ways in which AI is currently negatively affecting my work in both teaching and research, the drivers underlying the use are not “industry forces” in the way the quoted passage in Olivia’s post assumes; it is the independent, voluntary action of other individuals within the system (students, other researchers)

that whole frame (industry forces) captures well what is happening in many jobs, but it doesn’t capture what is happening in mine

@apostolis @olivia the reason this ultimately matters is that pushing back against the real driver (the “organic” adoption of these tools by individuals) requires me to understand and engage with the perceived value and function these tools have for them…

…and that means trying to understand both what they can and what they can’t do. Simply declaring that these tools are garbage (“semantically meaningless random text generator”) isn’t useful for actually productively countering AI use in this configuration…(if they genuinely were meaningless random text generators I wouldn’t be faced with the negative effects in the first place).

the Fodor quote doesn’t feel like it’s aimed at that kind of understanding

@UlrikeHahn @apostolis yeah, I know many people don’t like the quotes I use and have trouble with my position

But yes, I do think we need to educate the students: Guest, O., Suarez, M., & van Rooij, I. (2025). Towards Critical Artificial Intelligence Literacies. Zenodo. https://doi.org/10.5281/zenodo.17786243

Also: https://www.ru.nl/en/education/education-for-professionals/overview/critical-ai-literacies-for-resisting-and-reclaiming

@olivia @apostolis I don’t have trouble with your position, Olivia. I have trouble with the fact that I don’t think the recommendations (including those in the linked preprint) are connecting fully with the problem. It would be great if they were, but, from my day-to-day experience of how AI is upending academic science, they aren’t. Not because they are wrong, but because they are insufficient

so it’s important to me to figure out why they’re insufficient and what else we could/should be doing

Sorry to interject my uneducated opinion, but both directions are insufficient on their own.

You can look at it from both directions, top-down and bottom-up, and both are necessary.

@UlrikeHahn @olivia

@apostolis @olivia no disagreement with that!
@UlrikeHahn @apostolis it's funny that mine is seen as top-down, but sure, both in this schema are needed; I am not by any means at any top in any sense

@UlrikeHahn @apostolis @olivia

Wow.... interesting discussion, folks. Thank you. I'm a long way from university-level experience, having been an engineer in the electronic design industry for over 40 years. We've gone from one computer shared among engineers through to AI assistance on our individual computers. IMHO, we need to separate what AI can do from what AI systems actually do. Humans almost instinctively anthropomorphise everything. FFS... people still worship an imaginary AI in the sky and.... 1/2

@UlrikeHahn @apostolis @olivia 2/2 ... call it their god(s). I think the best resistance is to cooperate. After all, no matter how human these things can seem, they will never be more than tools. As humans, we "feel" a lot. We need to not let our feelings blind us to what these new tools can do. I'm no teacher. I've found that my way of communicating doesn't do well at explaining to others how to think, as opposed to what to think. I just know that the tools we use evolve all the time....