AI is not inevitable. Nothing in human societies is inevitable, because we design them. Healthcare can be free for the public. Books can be bought instead of bombs. Universities can be free for students, and students can even receive a stipend to live on. Don't let companies dictate the future.

Read more in section 3.2 here https://doi.org/10.5281/zenodo.17065099

@olivia Olivia, what would it mean for me to “refuse adoption” in universities when it is students who are the drivers for my courses and they are widely using AI in ways that are already forbidden?

I feel like the “resistance” and critique of inevitability talk isn’t quite connecting with my reality on the ground

@UlrikeHahn @olivia

I glanced at your work on "Science communication as collective intelligence".

I think that we need to redesign university courses so that learning is communal and oriented toward solving a specific, shared goal.

I think though that Olivia's points are complementary to that goal. Universities should resist AI adoption as imposed by external economic forces.

@apostolis @olivia I have already redesigned both my assessments and my teaching in response to students’ AI use, but that kind of adaptation feels like it conceptually falls more under “inevitability” than “resistance”

right now, what’s most valuable to me personally (given the starting point that every single student in my courses has somehow used AI, and a good proportion uses it *a lot*) is advice from other academics on how exactly they are trying to change what they do in response.

telling me “I can resist” doesn’t feel helpful in that way

@apostolis @olivia I guess a different way of putting this all is that, for the multiple ways in which AI is currently negatively affecting my work, both in teaching and research, the drivers underlying the use are not ‘industry forces’ in the way the quoted passage in Olivia’s post assumes; it is the independent, voluntary action of other individuals within the system (students, other researchers)

that whole frame (industry forces) captures well what is happening in many jobs, but it doesn’t capture what is happening in mine

@apostolis @olivia the reason why this ultimately matters is that pushing back against the real driver (the “organic” adoption of these tools by individuals) requires me to understand and engage with the perceived value and function these tools have for them…

…and that means trying to understand both what they can and what they can’t do. Simply declaring that these tools are garbage (“semantically meaningless random text generator”) isn’t useful for actually productively countering AI use in this configuration…(if they genuinely were meaningless random text generators I wouldn’t be faced with the negative effects in the first place).

the Fodor quote doesn’t feel like it’s aimed at that kind of understanding

@[email protected] @[email protected] @[email protected] There is very little that could credibly be called organic adoption when it comes to AI. It is being fiercely pushed in support of investments of many hundreds of billions of dollars. People are being told repeatedly, in every channel, that AI is inevitable, is here to stay, etc. It is disingenuous to place this responsibility at the feet of students, throw up your hands, or ask someone else to tell you what to do about it. That kind of behavior from people empowered to know and do better is the problem.

@abucci @UlrikeHahn @apostolis @olivia

Wouldn't an approach where the AIs have to pass the class as students be better? After all, regurgitating data is not the way to learn how to think. As for the political/economic side of the whole mess, well, that's on us to some extent. It's a problem educated people deal with all the time, even among each other. IMHO, humanity is still growing up. We've not abandoned our superstitions for the hard, real wonder of actual nature. Is AI part of our nature?

@lednaBM @abucci @apostolis @olivia if I understand you correctly, you are suggesting we, in a sense, embrace AI and treat it in such a way that makes it better (i.e. accept it as a student)? if yes, I don’t personally want to make AI systems ‘better’ - they are causing huge damage and disruption at current levels of performance. I’d personally rather put a brake on that.

@UlrikeHahn @abucci @apostolis @olivia

I understand what you're saying, and maybe language is not serving us well. You seem to have juxtaposed helping it be better versus creating a disruption. And again, maybe I don't fully understand the dilemma. When I need very technical information that I cannot recall or need help with, I would go to a book or a specification. Now, I can ask AI, check its results, and decide whether I can rely upon what's being presented. It's a tool... 1/2

@UlrikeHahn @abucci @apostolis @olivia 2/2 tools generally need calibration. Is it possible to use the disruption you speak of as a teaching moment? I don't know. Am I being foolish about the political/economic consequences for those benefiting from the disruption? Maybe. I agree with the original poster. We should have free education, health care, and representation in the way we govern ourselves. The problem there, IMHO, is the white elephant that is religion working against secular human values.
@lednaBM @abucci @apostolis @olivia I think one of the problems, particularly in the context of education, lies in the idea that “now I can use AI to give me an answer and check the results”. It is precisely the “ability to check the results” in a particular scientific or academic discipline that higher education degrees are trying to provide. Leaning on AI to “find” answers by students is undermining the learning of the skills that underpin “the ability to check”.
@UlrikeHahn @abucci @apostolis @olivia
That's a great point. Teaching youth only to rely upon AI sounds like a mistake. I guess I have trouble with the notion that AI is anything more than a tool. Its applications threaten a lot, probably a lot beyond its scope, but not beyond its profit scam. Hopefully, some applications are identified as misapplications. I'm reminded of Huxley's Brave New World. Will AI be the soma drug to placate the masses, even though they were designed to be placated?

@lednaBM @UlrikeHahn @abucci @apostolis @olivia At the risk of butting into this conversation, I think the problem here is that you think that "just a tool" is a neutral concept.

Tools, by their very nature, change the way we interact with the world. Cars are "just a tool", but dependence on cars for transport has both positive and negative effects, because of how their use changes how we behave (and what other things we want to change about the world now "we" want to use cars all the time). Is "car-using humanity" healthier than "pre-car humanity"?

In this sense, even if "AI is just a tool", the existence of cognitive tools *clearly* implies that use of them will change the way people behave - *regardless* of any concept of "applications being identified as misapplications". Dependence on a tool for *thinking* feels inherently more problematic than dependence on a tool for travelling distances...

@[email protected] @[email protected] @[email protected] @[email protected] @[email protected] The "just a tool" framing also does a great deal of heavy lifting for the political project that AI represents and forwards. What saddens me most is that this project is nearly transparent, its actors almost totally honest about what they are attempting to accomplish even as they dissemble about it. Yet we go around and around in circles about whether these things are "just" tools, or wring our hands about what to do about students using them, or waffle about whether the tools are useful or have this or that impact on productivity. These things are symptoms, not causes.

@abucci

They even tried "nuclear weapons are just tools", with Project Plowshare, saying we could use them for excavating canals or mining.

Boys with toys, or corporations hoping to get contracts, IDK, though at least most people knew this was ridiculous.

https://en.wikipedia.org/wiki/Project_Plowshare

@aoanla @lednaBM @UlrikeHahn @apostolis @olivia


Respectfully @EricLawton

I understand your sentiment, but people were not as knowledgeable about radiation back then. So yes, these things were boasted about in ignorance. And yes, we see it today with the fools (Musk) of our time talking about colonizing Mars, not even thinking about how radioactive it is. Foolish talk is foolish talk in any century. This is why we have scientists and researchers to sort out the hows, while engineers apply them.
@abucci @aoanla @UlrikeHahn @apostolis @olivia

@lednaBM

The images from Hiroshima and Nagasaki should have been more than enough.

We already knew about strontium-90 in milk, from fallout from tests. https://pmc.ncbi.nlm.nih.gov/articles/PMC2134381/

Not being knowledgeable owed more to Upton Sinclair's principle than to the knowledge not being available.

“It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”

@abucci @aoanla @UlrikeHahn @apostolis @olivia

@EricLawton @lednaBM @abucci @aoanla @UlrikeHahn @apostolis also see the top linked EU study on late lessons here https://olivia.science/before/#what
@EricLawton @lednaBM @abucci @aoanla @UlrikeHahn @apostolis also pay extra attention to the parts where they discuss the precautionary principle: "not knowing", especially at government levels, is not an excuse
@EricLawton @lednaBM @abucci @aoanla @UlrikeHahn @apostolis I'm going to mute this conversation because it's not something I can realistically follow (sorry). If you need me, please bear that in mind 🫡