Incredibly disappointing presentation at my uni about AI in Higher Ed. It wasn't university-official, but it was consonant with the noises we've heard from the Big Fancy Admin Building. The presentation was by a #computerScience prof, and it was clear where his perspective was going to be from the first 10 seconds, when he said he had created a startup to use AI to help businesses automate their processes.

The first substantive slide was kind of a leaderboard, showing what percentage of the companies in the US (healthcare, banking, etc.) had adopted AI, which he talked about like a horse race, with some industries being "ahead" and others "behind." #HigherEd was very much "behind" in AI adoption. It was pretty much downhill from there.

He had suggestions for improving AI adoption in our university, suggestions for how faculty can incorporate AI into our coursework, etc. Building AI programs (our uni is doing this in lockstep with every other school) is not enough, apparently. He not-so-sadly noted that certain #teaching goals were just no longer realistic, like "understanding concepts." Instead, we might focus on outcomes. He had a slide with big words, "[What?] instead of [How?]": teach students pragmatic getting-it-done skills and focus much less on how well they understand the underlying processes.

This might make sense for some people in computer science. It is pretty horrifying for someone in the #SocialSciences and I assume even worse for someone in the #humanities or teaching any kind of #art.

#horror #professor #corporate #education

As is often the case with my colleagues, I was most disheartened by the responses of the 20-odd faculty in the room with me. That is to say, there was no response. Nobody asked the glaring questions. There were perhaps three softball questions and one meandering question whose point I don't think anyone followed.

There was exactly one slide on #ethics. It said "Ethical AI." It had some bullet points about keeping students from cheating (spoiler: most of it was to encourage AI use and to stop thinking of it as "cheating"). Nothing about the (to me) much larger ethical issues.

Toward the end of the talk I asked a question (which was kind of long): I listed, in 30 seconds or so, some of the evidence for the extreme harms that wholesale AI adoption is causing: environmental damage, wealth concentration, erosion of democratic processes and political stability, violations of copyright and IP ethics, etc. I asked whether he thought AI should come with a price tag reflecting these currently externalized costs. Of course he said that wasn't realistic. No surprises.

The surprise is that nobody else said jack shit about any of this before, during, or after my comment. This is in keeping with other conversations I've had with fellow faculty: 95% of them seem to have immediately flipped to "go along to get along" and "ignore the weirdo saying unpleasant things."

My partner said she was recently in a state government workers' webinar with dozens of attendees about similar topics. One person asked a question very similar to mine: how do you balance the benefits of AI with its clear, extreme harms? The presenter apparently completely ignored the question and went on as if nobody had spoken.

#disappointment #ai #WeAreFucked #WakeUp #FuckThis

Almost forgot the worst part (for me). After I asked my long question about AI harms, his response was "It's not the university's business to focus on these things."

I asked him who was going to focus on them if universities don't. His answer was more or less "What can you do? AI is here to stay."

@guyjantic I admit, the way so many people are just… accepting that generative language models are inevitable and unstoppable baffles me.

@guyjantic he's not a real CS person. I'm just in IT, with decades of experience. LLMs just model language. They are fancy sentence diagramming tools. Apparently this is a lost art today, and people think it's magic.

LLMs are the Wizard of Oz. And if you really understand that story, you know the ending.

@guyjantic Speaking as a computer scientist, your colleague sounds like a real loser. The learning steps are not optional.
@aarbrk I have had (actually, currently have) some good CS friends who agree with you. I think that in any university field where many graduates or faculty can profit directly from applying the knowledge we swim in, more people will see that as acceptable or even laudable. Not everyone does, however, and that keeps me sane on some days.
@guyjantic I see the trend towards educational transactionality, and indeed I'm guilty of going with the flow in some ways. The fact that the employment outlook for CS grads looks worse than ever has done nothing but reinforce this view. All I know how to do from here is talk shit about it before the inevitable reckoning plays out.
@aarbrk If it helps at all, I'm in psychology and I think I'm on a similar trajectory.
@guyjantic I would have walked out partway through that. There doesn't seem to have been much to gain from staying.
@scooter Reasonable action. I wanted to see where he was going (no surprises, sadly), and if any of the other faculty in the room would express any discomfort with the "ignore all ethical and moral concerns" approach. They did not.