Met my MSc dissertation students this week. All good natured people. But the genAI rot is spreading.

About half of them do their work, and ask me questions about the problems they encounter. I advise on possible next steps. We meet again next week. All good.

But.

The other half, each of them perfectly well meaning, came back to me with questions that had nothing to do with their projects, and proposed solutions that are alien to the framework we are using. After some serious conversations, I found that in each case they had relied on ChatGPT answers to their prompts. They had not read the actual papers I had given them.

Some had implemented equations that are patently false, not by error (this would be good for learning), but because chatGPT told them so.

A significant part of our students can't read anymore. They need to interact with genAI, and they think this is research.

We are heading for trouble. In higher education, and in society at large.

#noAI #AcademicChatter

@the_roamer OMG and those are MSc level students. I expected more of people with at least half a brain.
@sybren @the_roamer I assure you that this is alas fairly common.
@the_roamer I have heard similar things from other academics and educators before, but would you care to expand on the "A significant part our students can't read anymore" comment?

I am interested in what you mean by this in particular

@froge

What I meant is this: they still have to learn, or re-learn, the need to engage with a given text, take notes, get confused, get back to it ... produce their own understanding from this struggle. Reading is active construction of knowledge by the reader. They want to buy knowledge. I don't blame them as individuals, it is the poisoned air we all breathe.

#noAI

@the_roamer @froge keyword: struggle. And out of struggle: joy. Both being lost.

@mirijb2 @froge

Indeed. Out of struggle, joy. And self-ownership as a thinker and writer.

@the_roamer @froge Another aspect to this is imho the psychological aspect of motivation:

students (or people in general?) try to find the easiest way to solve a problem. Before LLMs, we read the papers, though some read summaries if available, and some cheated (but that was unethical).

Now with LLMs there is an even easier (seemingly) solution available, so it's harder to find the motivation to do the hard work (the actual reading).

@the_roamer I feel this. I have engineers I supervise who take the shit dropping out of ChatGPT and the like for granted and don't even bother to check original sources.
I'm OK with using genAI to get hints where to start taking information from, but blindly relying on these hallucinations leads down a dangerous road.

@bloc

I understand your point entirely, and most of my colleagues agree with you. I personally don't share that view; I would argue for human-curated entry points. I had given my students an introductory reference list and we had a day-long introductory workshop as a group. These students did not read the articles, they reached for the genAI summary. Once again, I don't blame them as individuals, it's the whole cultural environment.

#noAI

@the_roamer I think the point of human-curated references is even stronger in academia, where researching original material is an important skill. I guess my point is simply that for me, a line is crossed where people start trusting AI on things which they themselves no longer understand. That's where the ability to learn gets lost and humanity starts on a downslope.

@bloc @the_roamer At that point, an LLM is just a wildly inefficient search engine which can’t tell you where the information came from, and which frequently makes up nonsense.

I get that somebody extremely new to a field won’t even know the right vocabulary to use to ask their question or look something up. That’s a problem, for sure, but it’s not one even second-year undergrad students should encounter often, let alone grad students or people working as engineers in the field.

@bob_zim @bloc @the_roamer

"At that point, an LLM is just a wildly inefficient search engine which can’t tell you where the information came from, and which frequently makes up nonsense."

Is this your personal experience, or are you "making up nonsense"?
Maybe consider PAYING for the engine instead of using the sideshow-booth crippleware free version?
Consider changing the engine?

My LLM of choice (#Claude) cites all the web search sources, shows the reasoning path and is more accurate than Google search in results.

@the_roamer Yeah, honestly fuck LLMs. They made the world worse and honestly less convenient. (You can't call it convenience when you aren't getting anything solved or learning anything from it.)

Those who don't bother doing work themselves should flunk. Learn the hard way.

@alteNBnordpfalz

They are pushed into this behaviour by powerful cultural forces. I praise the students who have the guts to do the real work, but I don't blame those who have yet to learn how to engage. (Of course, in my individual meetings I did scold them, and showed them how unproductive their approach was.)

#noAI

@the_roamer Haven’t hit this yet, but I’m sure it’s coming.
@the_roamer This isn’t new or a direct consequence of AI: students had trouble reading for the last decade, and French teachers associate this with social media addiction. IQ results are collapsing globally, IMO because the test itself is biased toward reading and writing.
@oceane @the_roamer In the early 2000s, when I got my MSc, the situation in my country was so bad that my uni started organizing language classes for new students, to try to improve writing and reading comprehension. The problem started even before the 2010s and predates modern social media.

@oceane
Interesting, troubling, & I'm sure that French teachers are right: "students had trouble reading for the last decade, and French teachers associate this with social media addiction. IQ results are collapsing globally, IMO because the test itself is biased toward reading and writing." 😐
One way to answer those pesky questions on what so-called IQ tests actually measure...

@the_roamer

@oceane

I am not worried about IQ scores, but I agree with your point, there is a long-term trend. But the arrival of ChatGPT in 2022 has led to a qualitative jump. Since then, in my context as a university teacher, I observe a dramatic deterioration of students' ability to actively engage with the material and the process: read the articles, take notes in lectures, do the exercises. Not all students, but a significant proportion, and even the best students are affected.

#noAI

@the_roamer I’m not worried about the IQ decline either: it isn’t meant as an intelligence test but as an intellectual potential test. Regardless of their IQ, a Twitter addict is an idiot.

On the introduction of AI into university curriculums, I don’t know what to say — I’ve only used AI twice and I regret it. Non-corporate users comfort the market, bring more investors, and become complicit with warlords in Congo. I’ve become complicit in murders there, and I can’t express my guilt and regrets in any meaningful way.

Free software activists’ lack of narrative about this is deeply disturbing.

@the_roamer Awful, yet unsurprising. There are productive ways to use genAI, and then there's this.
@jalager @the_roamer people keep saying this. But most of these are inferior, wasteful, and ethically dodgy.
@the_roamer
#CliffsNotes were popular when I was in high school in the late 1970s; I believe they're still sold. They were very condensed versions of books ~ you could buy the Cliffs Notes without reading the assigned book for your semester at school; you miss out on the wonderful details of Actually READING a book though!
Times have changed with technology, but people haven't.

@TrueBlue4THREE @the_roamer I think they'd probably still be around if the internet hadn't killed them; they were going strong in the 90s

The thing is, you learn more from Cliffs Notes than you do from an LLM. It’s at least accurate. Read the Cliffs Notes for a book, and you know basically what’s in the book. Ask an LLM and you’ll get something plausible but likely wrong

Cliff also doesn’t write the book report for you 😆

@the_roamer

I'm not defending the students who use genAI, but at least in physics reading the papers is an absolute chore and most are riddled with jargon. It's frustrating

@clockwooork @the_roamer - I think that could be said about a lot of journal papers. I teach physics and I don’t care if they use ChatGPT. It can be helpful for learning but I remind them that 1) it makes mistakes constantly and 2) using it for copy/pasting homework likely means you will fail the exam.

At the grad level, I think it depends on the implementation. Some are good at explaining what authors did. Others send you down an often wrong rabbit hole (ChatGPT). Just personal experience

@cosmicspittle @clockwooork @the_roamer It can be said of reading anything longer than a newspaper article; it's a chore when unpracticed and the skill isn't developed
@the_roamer I've noticed this with MSc level essays this past year. Some don't even check the AI notes they get, and cite non-existent papers. Some cite real papers for things they didn't say - instead of course readings that have all the information. And so on. So much AI use.
@the_roamer yes yes yes. I’m gonna tag you in another thread. What is to be done?

@mirijb2

That is the question!

I have no clear answers.

@the_roamer I am going to try again with undergrads in the year to come. My grad students are not in this place, fortunately.

@the_roamer

Sounds like you are remiss in teaching 50% of your class how to appropriately use the tools they have 👹

Academics themselves use AI to read papers (worse review papers).

They who are without sin, let them cast the first stone.

@n_dimension How is that @the_roamer's job? To teach students how to use tools that are not even the right ones for the job? To teach them to read the assignment rather than guessing wildly and spewing out random and irrelevant questions?
And that's not even beginning to touch upon the fundamental ethical problems arising from nearly any use of current genai services (certainly those used by these students) - like training on stolen material and burning up the planet in the process, all while making research and innovation grind to a halt and rendering the next generation even more inept than the current one.

@ltning @the_roamer

It's why you get the big bucks Prof!
Figure it out Einstein 😁

@ltning

I keep up the hope that the bubble ultimately will burst. We must keep the notion of truth alive.

@n_dimension

I don't use genAI for anything.

@the_roamer It is tragically hilarious what AI is bringing….billions & billions $$ being spent to expand & deepen this next absolute pile of tech garbage. Social media & its easy destruction of interpersonal real-life relationships & lies & misinfo accelerating the lazy ignorance of the everyday citizen. No wonder such shitty people as Trump and Orban get elected, and billionaires corrupt the vital function of sane governance. And AI will just hasten the spread of this poison.

@havvyhh2

We must maintain the flame of truth through this difficult period.

@the_roamer ...though the winds of lies & ignorance whistle through the neighborhoods....

@the_roamer

Thanks for the easy-to-read summary(!) including: "The other half [of your MSc dissertation students] each of them perfectly well meaning, came back to me with questions that had nothing to do with their projects, and proposed solutions that are alien to the framework we are using. After some serious conversations, I found that in each case they had relied on ChatGPT answers to their prompts. They had not read the actual papers I had given them..." & they "can't read anymore. They need to interact with genAI, and they think this is research..."😐
Alarming that it's HALF your students & that they're all perfectly well meaning... In other words this is the overwhelming future... & when that is so, who's going to be doing the actual science?

@Su_G @the_roamer Yes. By their nature, LLMs are remixing the known, which might occasionally be slightly innovative, but truly novel things require humans operating their brains at full capacity. I think it's likely that the pace of human innovation will slow, at least until we come to terms with this problem.

@scottmiller42 @Su_G @the_roamer

so the snake oil is "general AI will disrupt history by speeding up innovation in software, biopharma, ..." against the observation "while 50% of the master's students' ability to think and innovate is reduced, the AI cannot invent anything new but only repeats whatever it finds on the web; both have no idea about fact checking".

It maps onto my own observation in the semantic web field. The majority of people are "too lazy to look for correct data" or to write Wikipedia articles or blog posts. Now they hand the task over to LLMs: "find me the best product for this problem". What data will the LLM use? SEO and marketing folks are already publishing biased data on the web knowing this is fodder for the LLM, which then feeds it on to your students. Marketers use LLMs to generate gibberish to publish so that LLMs have something to feed to users.

The opposite was the personal semantic-web AI assistants of around 2010, like NEPOMUK or Siri, based on facts from the linked open data cloud.

@leobard @scottmiller42 @Su_G

An interesting observation: an inversion between automated use of web data and automated production of those data.

@the_roamer @leobard @scottmiller42

An interesting observation in itself! The movement from automated finding to automated production… & the output quality degrades along the way…

A Spanish educator just published a multipart thread pulling together various AI-related data points, one of which tracked memory & another measured brain activity. Students using LLMs couldn’t quote what they wrote, & had the lowest brain activity (compared with using a search function or one's own brain).

As I see it then, the AI “disruption” is in the zombification direction.


@Su_G @the_roamer @scottmiller42 – IMHO the way out is that students and other LLM users can see hyperlinks to fact-checked sources to judge for themselves if a genAI text is legit. Instead of marketing/ads, I would like to see micropayments to content creators. Good ol' “explainable AI” rolled 🌯 into a business model.

1️⃣ Users 🙂 must be motivated to pay for a “good ai system 🤖” and compare the true cost of AI with “RTFM and use my own 🧠 to […]

https://www.leobard.net/blog/2025/07/26/llm-ais-will-lead-you-to-fact-based-sources-if-you-are-willing-to-pay-for-it/


@Su_G

Indeed. Ultimately we need people who produce knowledge, rather than summarise it. That is why I think there is hope that eventually the bubble will burst.

#noAI

@the_roamer

Yes! “Ultimately we need people who produce knowledge, rather than summarise it”. And an interesting modern dichotomy: produce vs summarise too. I really hope that you’re right about AI. 🙂

@the_roamer That feeling in the depths of a nightmare when you're in the grip of such a terror that you cannot scream

@jameshowell

My initial toot was my scream! :-)

I am delighted that it wasn't a scream into the void!

@the_roamer I teach in a second year algorithms course.

AI use is painfully obvious, even for doing the practice exercises. A few of them are frank about it ("I asked 'the chat' but I don't understand the answer"), others not so much.

A guy came to me with written code "to check if it is ok"; he admitted the Copilot autocomplete had been doing the work for him. I showed him how to disable it (he didn't even know how!) for doing exercise work.

In person exams are our last line of defense IMHO

@sherwoodinc

Love the "I showed him how to disable it", probably the most important learning experience for this young man this year! :-)

Agree on the value of in-person exams, likewise other in-person setups.

Eg, for tutorials I will no longer pre-publish the exercises but do everything within the classroom. (To facilitate this, in future I will move to 1 two-hour class every fortnight, rather than 2 one-hour classes over the same period. Too late to make that change for 25/26.)

Also, thinking of dedicating a whole class to reading a small section of an article together, line by line.

#noAI

@the_roamer With all due respect, any student incapable or unwilling to read, understand and critique papers is unfit to earn the degree that they are after.

The minimum I expect from university graduates is the ability to find, assess and use information appropriately. To identify the skills needed to solve a problem and acquire them. Not to parrot an LLM without an ounce of critical thinking or mental effort. In the end, this is technology-fuelled copycat behaviour.

@kgndiue

"With all due respect ...", please explain, which of my points are you disagreeing with?