Met my MSc dissertation students this week. All good natured people. But the genAI rot is spreading.

About half of them do their work, and ask me questions about the problems they encounter. I advise on possible next steps. We meet again next week. All good.

But.

The other half, each of them perfectly well meaning, came back to me with questions that had nothing to do with their projects, and proposed solutions that are alien to the framework we are using. After some serious conversations, I found that in each case they had relied on ChatGPT's answers to their prompts. They had not read the actual papers I had given them.

Some had implemented equations that are patently false, not by their own mistake (that would be good for learning), but because ChatGPT told them so.

A significant share of our students can't read anymore. They interact with genAI instead, and they think this is research.

We are heading for trouble. In higher education, and in society at large.

#noAI #AcademicChatter

@the_roamer

Thanks for the easy-to-read summary(!) including: "The other half [of your MSc dissertation students], each of them perfectly well meaning, came back to me with questions that had nothing to do with their projects, and proposed solutions that are alien to the framework we are using. After some serious conversations, I found that in each case they had relied on ChatGPT answers to their prompts. They had not read the actual papers I had given them..." & they "can't read anymore. They need to interact with genAI, and they think this is research..."😐
Alarming that it's HALF your students & that they're all perfectly well meaning... In other words, this is the overwhelming future... & when that is so, who's going to be doing the actual science?

@Su_G @the_roamer Yes. By their nature, LLMs are remixing the known, which might occasionally be slightly innovative, but truly novel things require humans operating their brains at full capacity. I think it's inevitable that the pace of human innovation will likely slow, at least until we can come to terms with this problem.

@scottmiller42 @Su_G @the_roamer

So the snake oil is "general AI will disrupt history by speeding up innovation in software, biopharma, ...", while the observation is "50% of the master's students' ability to think and innovate is reduced, the AI cannot invent anything new but only repeats whatever it finds on the web, and neither has any idea about fact checking".

It matches my own observation in the semantic web field. The majority of people are "too lazy to look for correct data" or to write Wikipedia articles or blog posts. Now they hand the task over to LLMs: "find me the best product for this problem". What data will the LLM use? SEO and marketing folks are already publishing biased data on the web, knowing it is fodder for the LLM, which then feeds it on to your students. Marketers use LLMs to generate gibberish to publish, so that LLMs have something to feed to users.

The opposite were the personal semantic-web AI assistants of around 2010, such as NEPOMUK or Siri, based on facts from the Linked Open Data cloud.

@leobard @scottmiller42 @Su_G

An interesting observation: an inversion between the automated use of web data and the automated production of those data.

@the_roamer @leobard @scottmiller42

An interesting observation in itself! The movement from automated finding to automated production… & the output quality degrades along the way…

A Spanish educator just published a multipart thread pulling together various AI-related data points, one of which tracked memory & another measured brain activity. Students using LLMs couldn't quote what they had written, & showed the lowest brain activity (compared with using a search function or just their own brain).

As I see it then, the AI “disruption” is in the zombification direction.

LLM AIs will lead you to fact-based sources, if you are willing to pay for it

@Su_G @the_roamer @scottmiller42 – IMHO the way out is that students and other LLM users can see hyperlinks to fact-checked sources, to judge for themselves whether a genAI text is legit. Instead of marketing/ads, I would like to see micropayments to content creators. Good ol' "explainable AI" rolled 🌯 into a business model.

1️⃣ Users 🙂 must be motivated to pay for a "good AI system 🤖" and compare the true cost of AI with "RTFM and use my own 🧠 to […]

https://www.leobard.net/blog/2025/07/26/llm-ais-will-lead-you-to-fact-based-sources-if-you-are-willing-to-pay-for-it/

@Su_G

Indeed. Ultimately we need people who produce knowledge, rather than summarise it. That is why I think there is hope that eventually the bubble will burst.

#noAI

@the_roamer

Yes! "Ultimately we need people who produce knowledge, rather than summarise it". And an interesting modern dichotomy too: produce vs summarise. I really hope that you're right about AI. 🙂