AI is not inevitable. Nothing in human societies is inevitable because we design them. Healthcare can be free for the public. Books can be bought instead of bombs. Universities can be free for students, and they can even receive a stipend to live off. Don't let companies dictate the future.

Read more in section 3.2 here https://doi.org/10.5281/zenodo.17065099

@olivia Olivia, what would it mean for me to “refuse adoption” in universities when it is students who are the drivers for my courses and they are widely using AI in ways that are already forbidden?

I feel like the “resistance” and critique of inevitability talk isn’t quite connecting with my reality on the ground

@UlrikeHahn @olivia

I glanced at your work on "Science communication as collective intelligence".

I think that we need to redesign university courses so that learning is communal and oriented toward solving a specific goal.

I think though that Olivia's points are complementary to that goal. Universities should resist AI adoption as imposed by external economic forces.

@apostolis @olivia I’ve already redesigned both my assessments and my teaching in response to students’ AI use, but that kind of adaptation feels like it falls conceptually more under “inevitability” than “resistance”

right now, what’s most valuable to me personally (given the starting point that every single student in my courses has somehow used AI, and a good proportion uses it *a lot*) is advice from other academics on how exactly they are trying to change what they do in response.

telling me “I can resist” doesn’t feel helpful in that way

@apostolis @olivia I guess a different way of putting this all is that for the multiple ways in which AI is currently negatively affecting my work, both in teaching and research, the drivers underlying the use are not ‘industry forces’ in the way the quoted passage in Olivia’s post assumes; it is the independent, voluntary action of other individuals within the system (students, other researchers)

that whole frame (industry forces) captures well what is happening in many jobs, but it doesn’t capture what is happening in mine

@UlrikeHahn @apostolis how do the students know to use this software if not through industry advertising?
@olivia @apostolis are you suggesting that my resistance activity should be attempting to end industry advertising?
@olivia @apostolis what I’m trying to get at is the difference between somebody who is in a job where their line manager is telling them to use AI (I know many such people), and what is actually happening in my own academic and research environment, where that isn’t happening and the drivers of use are completely different
@UlrikeHahn @apostolis ok, thanks for sharing

@olivia @apostolis ok, now that we have the contrast clear between contexts in which damage arises from someone ordering people to use AI, and ones where the problems stem from individuals voluntarily adopting these tools (and, in fact, adopting them even in the face of explicit sanction), what form do you think “resistance” should take in the latter?

that is, what, concretely, do you think academics in my position should do?

@UlrikeHahn @apostolis sorry to zoom out, but why are you so interested in my position in these posts when it's laid out in long form all over my website and papers? I think your university does pay AI companies for services, so yes, you can push back on that; so you are the one who is pushing a distinction I personally disagree with!
@olivia @apostolis we just crossed replies… maybe the one I just sent answers that?

@apostolis @olivia the reason why this ultimately matters is that pushing back against the real driver (the “organic” adoption of these tools by individuals) requires me to understand and engage with the perceived value and function these tools have for them…

…and that means trying to understand both what they can and what they can’t do. Simply declaring that these tools are garbage (“semantically meaningless random text generator”) isn’t useful for actually productively countering AI use in this configuration…(if they genuinely were meaningless random text generators I wouldn’t be faced with the negative effects in the first place).

the Fodor quote doesn’t feel like it’s aimed at that kind of understanding

@UlrikeHahn @apostolis yeah, I know many do not like many of the quotes and have trouble with my position

But yes, I do think we need to educate the students: Guest, O., Suarez, M., & van Rooij, I. (2025). Towards Critical Artificial Intelligence Literacies. Zenodo. https://doi.org/10.5281/zenodo.17786243

Also: https://www.ru.nl/en/education/education-for-professionals/overview/critical-ai-literacies-for-resisting-and-reclaiming

@olivia @apostolis I don’t have trouble with your position, Olivia. I have trouble with the fact that I don’t think the recommendations (including those in the linked preprint) are connecting fully with the problem. It would be great if they were, but, from my day-to-day experience of how AI is up-ending academic science, they aren’t. Not because they are wrong, but because they are insufficient

so it’s important to me to figure out why they’re insufficient and what else we could/should be doing

@UlrikeHahn @apostolis ok, I'm excited to see what you come up with!

@olivia @apostolis I don’t have any solution…it all feels pretty intractable to me at the moment, so I’m mainly struggling to understand the problem

what AI is doing to publishing reform is as good an example as any (see below). There is an “industry force” at play here only insofar as there is an industry irresponsibly making particular products available.

The actual causal pathways by which AI is breaking the system involve multiple distinct actors with very different motivations (outright AI slop and fraud, malicious actors, scientists using AI for research in ways that increase productivity but still leave them in charge); each of these is different, but they all combine into an overall negative effect

what I don’t see is how we can solve anything (if we indeed can) without unpacking all that in detail

https://write.as/ulrikehahn/is-ai-killing-scientific-reform

@UlrikeHahn @apostolis I don't fully grasp what I did that makes you think I am against different analyses here? Each featured paper here analyses AI from a different angle, pretty clearly with different actors: https://olivia.science/ai/#featuredresearch e.g. https://doi.org/10.31234/osf.io/dkrgj_v1

@olivia @apostolis I don’t think I said you are against different analyses?

the point I was trying to make is simply that what is breaking things right now is a confluence of forces and actors. If we are going to counter the destructive effects we need a systemic analysis of how these forces are interacting.

I don’t take you to be someone who would object to that in principle ;-)

I suspect what we do have disagreements on is what the relative importance of these different forces and actors are, and what’s required to push back as a result (even in principle)

@UlrikeHahn @apostolis

"Most importantly of all, resistance can and should take on many forms. Remember to rest and take care of yourself and your community. If talking to friends and colleagues is easy, then try to engage them on these issues. If it is not possible to do so, you can instead (or in addition) seek out allies online."

https://olivia.science/before/#can


Sorry to interject my uneducated opinion, but both directions are insufficient alone.

You can look at it from both directions, top-down and bottom-up. And both are necessary.

@UlrikeHahn @olivia

@apostolis @olivia no disagreement with that!
@UlrikeHahn @apostolis it's funny that mine is seen as top-down though, but sure, both in this schema are needed; I am not by any means at any top in any sense

@UlrikeHahn @apostolis @olivia

Wow.... interesting discussion, folks. Thank you. I'm a long way from university-level experience, having been an engineer in the electronic design industry for over 40 years. We've gone from one computer shared among engineers through to AI assistance on our individual computers. IMHO, we need to separate what AI can do from what it does. Humans almost instinctively anthropomorphise everything. FFS... people still worship an imaginary AI in the sky and.... 1/2

@UlrikeHahn @apostolis @olivia 2/2 ... call it their god(s). I think the best resistance is to cooperate. After all, no matter how human these things can seem, they will never be more than tools. As humans, we "feel" a lot. We need to not let our feelings blind us to what these new tools can do. I'm no teacher. I've found that my method of communication doesn't do well explaining to others how to think, instead of what to think. I just know the tools we use evolve all the time....
@[email protected] @[email protected] @[email protected] There is very little that could credibly be called organic adoption when it comes to AI. It is being fiercely pushed in support of hundreds of billions of dollars of investment. People are being told repeatedly, in every channel, that AI is inevitable, is here to stay, etc. It is disingenuous to place this responsibility at the feet of students, throw up your hands, or ask someone else to tell you what to do about it. That kind of behavior from people empowered to know and do better is the problem.

@abucci @apostolis @olivia I’m going to point you toward the scare quotes around the word “organic” in my post, which are there for precisely those reasons.

I am also going to push back against the notion that I am “placing the responsibility at the feet of students”: I am simply describing the (widely documented) problem in higher education that students are using AI tools in significant volumes *even where their use is explicitly sanctioned and forbidden*.

That is the concrete problem of AI now undermining higher education. Asking what “resisting AI” is supposed to mean for me in that context seems legitimate to me, and if it’s not, Olivia (who I’ve known for a long time as an academic colleague) is more than capable of telling me that herself.

@[email protected] You stated you were pushing back against the characterization of your stance that you were laying responsibility at the feet of your students, and then immediately placed responsibility at the feet of the students! Are you really unable to see this in your own post?

@fediscience.org @[email protected] @[email protected]

@abucci @apostolis @olivia let me say this then: I find your original reply to me, someone you have never met, aggressive and inflammatory.

One of the main benefits of exchange on platforms like this, to me, lies in being able to talk things through with others whose opinion and expertise I value but who disagree with me - that allows me to learn things and clarify my thoughts, and I’ve found this exchange with Olivia really helpful in that regard.

Trying to navigate disagreement in a way that it doesn’t lead to conflict is incredibly hard. In a context like this thread where people are investing significant effort in trying to navigate disagreement in a constructive way, I don’t personally have time, energy, or interest in exchanges with people who aren’t making that effort. The world is fraught enough as it is.

@[email protected]
“I find your original reply to me, someone you have never met, aggressive and inflammatory.”
Tone policing on a platform where it is well known to be nearly impossible to read tone successfully is both aggressive and inflammatory, and frankly I find it needlessly dismissive. It seems you've forgotten that we have interacted on the fediverse before? In any case, we're agreed there is nothing constructive to be found in interacting any longer, so I am making the decision to block you now. Good luck.

@[email protected] @[email protected]
@abucci @UlrikeHahn @apostolis @olivia I mean, it is a fact that students are massively relying on AI in a way that is impacting education. One can wonder about the causes or what to do about it, but merely stating that fact is not putting any responsibility on anyone.

@abucci @UlrikeHahn @apostolis @olivia

Wouldn't an approach where the AIs have to pass the class as students be better? After all, regurgitating data is not the way to learn how to think. As for the political/economics of the whole mess, well, that's on us to some extent. It's a problem educated people deal with all the time, even among each other. IMHO, humanity is still growing up. We've not abandoned our superstitions for the hard real wonder of actual nature. Is AI part of our nature?

@lednaBM @abucci @apostolis @olivia if I understand you correctly, you are suggesting we, in a sense, embrace AI and treat it in such a way that makes it better (i.e. accept it as a student)? if yes, I don’t personally really want to make AI systems ‘better’ - they are causing huge damage and disruption at current levels of performance. I’d personally rather put a brake on that.

@UlrikeHahn @abucci @apostolis @olivia

I understand what you're saying, and maybe language is not serving us well. You seem to have juxtaposed helping it get better versus creating a disruption. And again, maybe I don't fully understand the dilemma. When I need very technical information that I cannot recall or need help with, I would go to a book or a specification. Now, I can ask AI, check its results, and decide whether I can rely upon what's being presented. It's a tool... 1/2

@UlrikeHahn @abucci @apostolis @olivia 2/2 tools generally need calibration. Is it possible to use the disruption you speak of as a teaching moment? I don't know. Am I being foolish about the political/economic consequences for those benefitting from the disruption? Maybe. I agree with the original poster. We should have free education, health care, and representation in the way we govern ourselves. The problem there is, IMHO, the elephant in the room that is religion working against secular human values.
@lednaBM @abucci @apostolis @olivia I think one of the problems, particularly in the context of education, lies in the idea that “now I can use AI to give me an answer and check the results”. It is precisely the “ability to check the results” in a particular scientific or academic discipline that higher education degrees are trying to provide. Leaning on AI to “find” answers by students is undermining the learning of the skills that underpin “the ability to check”.
@UlrikeHahn @abucci @apostolis @olivia
That's a great point. Teaching youth only to rely upon AI sounds like a mistake. I guess I have trouble with the notion that AI is anything more than a tool. Its applications threaten a lot, probably a lot beyond its scope, but not beyond its profit scam. Hopefully, some applications are identified as misapplications. I'm reminded of Huxley's Brave New World. Will AI be the soma drug to placate the masses, even though they were designed to be placated?

@lednaBM @UlrikeHahn @abucci @apostolis @olivia At the risk of butting into this conversation, I think the problem here is that you think that "just a tool" is a neutral concept.

Tools, by their very nature, change the way we interact with the world. Cars are "just a tool", but dependence on cars for transport has both positive and negative effects, because of how their use changes how we behave (and what other things we want to change about the world now "we" want to use cars all the time). Is "car-using humanity" healthier than "pre-car humanity"?

In this sense, even if "AI is just a tool", the existence of cognitive tools *clearly* implies that use of them will change the way people behave - *regardless* of any concept of "applications being identified as misapplications". Dependence on a tool for *thinking* feels inherently more problematic than dependence on a tool for travelling distances...

@[email protected] @[email protected] @[email protected] @[email protected] @[email protected] The "just a tool" framing also does a great deal of heavy lifting for the political project that AI represents and forwards. What saddens me most is that this project is nearly transparent, its actors almost totally honest about what they are attempting to accomplish even as they dissemble about it. Yet we go around and around in circles about whether these things are "just" tools, or wring our hands about what to do about students using them, or waffle about whether the tools are useful or have this or that impact on productivity. These things are symptoms, not causes.

@abucci

They even tried "nuclear weapons are just tools", with Project Plowshare, saying we could use them for excavating canals or mining.

Boys with toys, or corporations hoping to get contracts, IDK, though at least most people knew this was ridiculous.

https://en.wikipedia.org/wiki/Project_Plowshare

@aoanla @lednaBM @UlrikeHahn @apostolis @olivia


Respectfully @EricLawton

I understand your sentiment, but people were not as knowledgeable about radiation back then. So yes, these things were boasted about in ignorance. And yes, we see it today with the fools (Musk) of our time talking about colonizing Mars, not even thinking about how radioactive it is. Foolish talk is foolish talk in any century. This is why we have scientists and researchers to sort out the hows, while engineers apply them.
@abucci @aoanla @UlrikeHahn @apostolis @olivia

@lednaBM

The images from Hiroshima and Nagasaki should have been more than enough.

We already knew about strontium-90 in milk, from fallout from tests. https://pmc.ncbi.nlm.nih.gov/articles/PMC2134381/

Not being knowledgeable owed more to Upton Sinclair's principle than to the knowledge not being available.

“It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”

@abucci @aoanla @UlrikeHahn @apostolis @olivia

@EricLawton @lednaBM @abucci @aoanla @UlrikeHahn @apostolis also see the top linked EU study on late lessons here https://olivia.science/before/#what
@EricLawton @lednaBM @abucci @aoanla @UlrikeHahn @apostolis also pay extra attention to the parts where they discuss the precautionary principle: "not knowing", especially at government levels, is not an excuse
@EricLawton @lednaBM @abucci @aoanla @UlrikeHahn @apostolis I'm going to mute this conversation because it's not something I can realistically follow (sorry) if you need me please bear that in mind 🫡

@olivia

Indeed.
The Precautionary Principle is especially important here.

Which is why the extreme inertia in government regulation response is so frustrating; they're more FOMO than PP.

@lednaBM @abucci @aoanla @UlrikeHahn @apostolis

@[email protected]
“people were not as knowledgeable about radiation back then. So yes, these things were boasted about in ignorance.”
You're trying to say that in 1961 people were ignorant of the impacts of nuclear radiation and acted out of ignorance? This does not pass a smell test, given that by that point nuclear radiation had been well-studied for over 60 years by large numbers of what were considered top scientists of their day. A very strange thing to assert.

Calling someone like Elon Musk a fool does his work for him: you can't hold a fool accountable for doing things they lack the capacity to understand. Why are you invested in doing this work on his behalf? From where I stand, I think he very much has the capacity to understand what he's up to.

@[email protected] @[email protected] @[email protected] @[email protected]

@abucci Your arguments are unfair. Ask Madame Curie. Applying the 60s, your date, ignores the nuclear testing of the 50s. How convenient. I'll stand by my statement. And then you go on to accuse me of helping EM? Really! Seems more like a desperate attempt at a social media hit job. Please refrain from commenting if you're here just to pick a fight. Bullies exist in every crowd. Bullying reflects more on you than I. Delete your post.....

Apologies to:
@aoanla @apostolis @olivia @EricLawton

@[email protected] Just to clarify: I took your post to be suggesting that people in the 1960s did not know much about the perils of radiation, and that this led to "boasting in ignorance" about what turned out to be dangerous technology. If that is not what you intended to argue, then I misread. If that is what you meant, then I stand by what I said: Curie started her work in the late 19th century and over 60 years had passed by the 1960s. Radiation and its perils were quite well understood by then.

I 100% stand by my assertion that calling someone like Elon Musk "a fool" makes space for him to continue to do what I fully believe are deliberate, conscious, and well understood harmful acts. Words like "fool", "imbecile", or "moron" have a history, and have been used to designate a group of people unable to be held responsible for their actions because they lack the cognitive capacity to understand them. Not everybody has that understanding of these words, but regardless they are inappropriate and distract from the more pressing reality that deliberate harmful acts are intolerable in a free society. That's the sense in which I meant one is doing Musk's work for him by calling him a fool.

If you characterize all the above as a "social media hit job", when it's meant and has the shape of a good faith discussion, then I don't know what to tell you.

@[email protected]
@[email protected]
“Bullying reflects more on you than I. Delete your post.....”
How is issuing a demand like "delete your post" not a form of bullying?

I will not delete my post, and I believe that what you are doing here is a form of projection. This conversation is over.
@[email protected] @[email protected] @[email protected] @[email protected] The fact that you can selectively ignore the strings of a marionette does not mean it is alive, part of our nature, or able to attend and pass a course. I suspect this is even obvious to AI!

@lednaBM @abucci @UlrikeHahn @apostolis @olivia

The tacit assumption here, that LLMs possess intelligence, is false. Their purpose is not to give intelligent answers. Their purpose is surveillance.

AGI = Automated Gathering of Intel

@teledyn @lednaBM @abucci @apostolis @olivia I think the “intelligence” issue is a red herring, personally

in the contexts I’m concerned with, people’s use is driven by the practical value they find in the actual outputs

(I also don’t personally see anyone in this thread that has been assuming that)

@UlrikeHahn @lednaBM @abucci @apostolis @olivia

We have already seen arrests, and then the shooting in BC; all US-based services are required to retain data; and the #ELIZAeffect.

So carry on. Don't mind me. Enjoy.