AI is not inevitable. Nothing in human societies is inevitable, because we design them. Healthcare can be free for the public. Books can be bought instead of bombs. Universities can be free for students, and students can even receive a stipend to live on. Don't let companies dictate the future.

Read more in section 3.2 here https://doi.org/10.5281/zenodo.17065099

@olivia Olivia, what would it mean for me to “refuse adoption” in universities when it is students who are the drivers for my courses and they are widely using AI in ways that are already forbidden?

I feel like the “resistance” and critique of inevitability talk isn’t quite connecting with my reality on the ground

@UlrikeHahn @olivia

I glanced at your work on "Science communication as collective intelligence".

I think that we need to redesign university courses so that learning is communal and oriented toward achieving a specific goal.

I think though that Olivia's points are complementary to that goal. Universities should resist AI adoption as imposed by external economic forces.

@apostolis @olivia I have already redesigned both my assessments and my teaching in response to students’ AI use, but that kind of adaptation feels like it conceptually falls more under “inevitability” than “resistance”

right now, what’s most valuable to me personally (given the starting point that every single student in my courses has somehow used AI, and a good proportion uses it *a lot*) is advice from other academics on how exactly they are trying to change what they do in response.

telling me “I can resist” doesn’t feel helpful in that way

@apostolis @olivia I guess a different way of putting all this is that, for the multiple ways in which AI is currently negatively affecting my work, both in teaching and research, the drivers underlying the use are not ‘industry forces’ in the way the quoted passage in Olivia’s post assumes; it is the independent, voluntary action of other individuals within the system (students, other researchers)

that whole frame (industry forces) captures well what is happening in many jobs, but it doesn’t capture what is happening in mine

@apostolis @olivia the reason why this ultimately matters is that pushing back against the real driver (the “organic” adoption of these tools by individuals) requires me to understand and engage with the perceived value and function these tools have for them…

…and that means trying to understand both what they can and what they can’t do. Simply declaring that these tools are garbage (“semantically meaningless random text generator”) isn’t useful for actually productively countering AI use in this configuration…(if they genuinely were meaningless random text generators I wouldn’t be faced with the negative effects in the first place).

the Fodor quote doesn’t feel like it’s aimed at that kind of understanding

@[email protected] @[email protected] @[email protected] There is very little that could credibly be called organic adoption when it comes to AI. It is being fiercely pushed in support of hundreds of billions of dollars in investment. People are being told repeatedly, in every channel, that AI is inevitable, is here to stay, etc. It is disingenuous to place this responsibility at the feet of students, throw up your hands, or ask someone else to tell you what to do about it. That kind of behavior from people empowered to know and do better is the problem.

@abucci @UlrikeHahn @apostolis @olivia

Wouldn't an approach where the AIs have to pass the class as students be better? After all, regurgitating data is not the way to learn how to think. As for the political/economic side of the whole mess, well, that's on us to some extent. It's a problem educated people deal with all the time, even among each other. IMHO, humanity is still growing up. We've not abandoned our superstitions for the hard, real wonder of actual nature. Is AI part of our nature?

@lednaBM @abucci @apostolis @olivia if I understand you correctly, you are suggesting we, in a sense, embrace AI and treat it in such a way that makes it better (i.e. accept it as a student)? if yes, I don’t personally really want to make AI systems ‘better’ - they are causing huge damage and disruption at current levels of performance. I’d personally rather put a brake on that.

@UlrikeHahn @abucci @apostolis @olivia

I understand what you're saying, and maybe language is not serving us well. You seem to have juxtaposed helping it be better versus creating a disruption. And again, maybe I don't fully understand the dilemma. When I need very technical information that I cannot recall or need help with, I would go to a book or a specification. Now, I can ask AI, check its results, and decide whether I can rely upon what's being presented. It's a tool... 1/2

@UlrikeHahn @abucci @apostolis @olivia 2/2 tools generally need calibration. Is it possible to use the disruption you speak of as a teaching moment? I don't know. Am I being foolish about the political/economic consequences for those benefitting from the disruption? Maybe. I agree with the original poster. We should have free education, health care, and representation in the way we govern ourselves. The problem there is, IMHO, the white elephant that is religion working against secular human values.
@lednaBM @abucci @apostolis @olivia I think one of the problems, particularly in the context of education, lies in the idea that “now I can use AI to give me an answer and check the results”. It is precisely the ability to “check the results” in a particular scientific or academic discipline that higher education degrees are trying to provide. Students leaning on AI to “find” answers undermines the learning of the skills that underpin “the ability to check”.
@UlrikeHahn @abucci @apostolis @olivia
That's a great point. Teaching youth only to rely upon AI sounds like a mistake. I guess I have trouble with the notion that AI is anything more than a tool. Its applications threaten a lot, probably a lot beyond its scope, but not beyond its profit scam. Hopefully, some applications are identified as misapplications. I'm reminded of Huxley's Brave New World. Will AI be the soma drug to placate the masses, even though they were designed to be placated?

@lednaBM @UlrikeHahn @abucci @apostolis @olivia At the risk of butting into this conversation, I think the problem here is that you think that "just a tool" is a neutral concept.

Tools, by their very nature, change the way we interact with the world. Cars are "just a tool", but dependence on cars for transport has both positive and negative effects, because of how their use changes how we behave (and what other things we want to change about the world now "we" want to use cars all the time). Is "car-using humanity" healthier than "pre-car humanity"?

In this sense, even if "AI is just a tool", the existence of cognitive tools *clearly* implies that use of them will change the way people behave - *regardless* of any concept of "applications being identified as misapplications". Dependence on a tool for *thinking* feels inherently more problematic than dependence on a tool for travelling distances...

@[email protected] @[email protected] @[email protected] @[email protected] @[email protected] The "just a tool" framing also does a great deal of heavy lifting for the political project that AI represents and forwards. What saddens me most is that this project is nearly transparent, its actors almost totally honest about what they are attempting to accomplish even as they dissemble about it. Yet we go around and around in circles about whether these things are "just" tools, or wring our hands about what to do about students using them, or waffle about whether the tools are useful or have this or that impact on productivity. These things are symptoms, not causes.

@abucci

They even tried "nuclear weapons are just tools", with Project Plowshare, saying we could use them for excavating canals or mining.

Boys with toys, or corporations hoping to get contracts, IDK, though at least most people knew this was ridiculous.

https://en.wikipedia.org/wiki/Project_Plowshare

@aoanla @lednaBM @UlrikeHahn @apostolis @olivia


Respectfully @EricLawton

I understand your sentiment, but people were not as knowledgeable about radiation back then. So yes, these things were boasted about in ignorance. And yes, we see it today with the fools (Musk) of our time talking about colonizing Mars, without even thinking about how radioactive it is. Foolish talk is foolish talk in any century. This is why we have scientists and researchers to sort out the hows, while engineers apply them.
@abucci @aoanla @UlrikeHahn @apostolis @olivia

@[email protected]
people were not as knowledgeable about radiation back then. So yes, these things were boasted about in ignorance.
You're trying to say that in 1961 people were ignorant of the impacts of nuclear radiation and acted out of ignorance? This does not pass a smell test, given that by that point nuclear radiation had been well-studied for over 60 years by large numbers of what were considered top scientists of their day. A very strange thing to assert.

Calling someone like Elon Musk a fool does work for him: you can't hold a fool accountable for doing things they lack the capacity to understand. Why are you invested in doing this work on his behalf? From where I stand I think he very much has the capacity to understand what he's up to.

@[email protected] @[email protected] @[email protected] @[email protected]

@abucci Your arguments are unfair. Ask Madame Curie. Applying the 60s, your date, ignores the nuclear testing of the 50s. How convenient. I'll stand by my statement. And then you go on to accuse me of helping EM? Really! This seems more like a desperate attempt at a social media hit job. Please refrain from commenting if you're here just to pick a fight. Bullies exist in every crowd. Bullying reflects more on you than I. Delete your post.....

Apologies to:
@aoanla @apostolis @olivia @EricLawton

@[email protected] Just to clarify: I took your post to be suggesting that people in the 1960s did not know much about the perils of radiation, and that this led to "boasting in ignorance" about what turned out to be dangerous technology. If that is not what you intended to argue, then I misread. If that is what you meant, then I stand by what I said: Curie started her work in the late 19th century and over 60 years had passed by the 1960s. Radiation and its perils were quite well understood by then.

I 100% stand by my assertion that calling someone like Elon Musk "a fool" makes space for him to continue to do what I fully believe are deliberate, conscious, and well understood harmful acts. Words like "fool", "imbecile", or "moron" have a history, and have been used to designate a group of people unable to be held responsible for their actions because they lack the cognitive capacity to understand them. Not everybody has that understanding of these words, but regardless they are inappropriate and distract from the more pressing reality that deliberate harmful acts are intolerable in a free society. That's the sense in which I meant one is doing Musk's work for him by calling him a fool.

If you characterize all the above as a "social media hit job", when it's meant and has the shape of a good faith discussion, then I don't know what to tell you.

@[email protected]
@[email protected]
Bullying reflects more on you than I. Delete your post.....
How is issuing a demand like "delete your post" not a form of bullying?

I will not delete my post, and I believe that what you are doing here is a form of projection. This conversation is over.