Met my MSc dissertation students this week. All good natured people. But the genAI rot is spreading.

About half of them do their work, and ask me questions about the problems they encounter. I advise on possible next steps. We meet again next week. All good.

But.

The other half, each of them perfectly well meaning, came back to me with questions that had nothing to do with their projects, and proposed solutions that are alien to the framework we are using. After some serious conversations, I found that in each case they had relied on ChatGPT answers to their prompts. They had not read the actual papers I had given them.

Some had implemented equations that are patently false, not by mistake (which would be good for learning), but because ChatGPT told them so.

A significant part of our students can't read anymore. They need to interact with genAI, and they think this is research.

We are heading for trouble. In higher education, and in society at large.

#noAI #AcademicChatter

@the_roamer I feel this. I supervise engineers who take whatever drops out of ChatGPT and the like for granted and don't even bother to check original sources.
I'm OK with using genAI to get hints on where to start looking for information, but blindly relying on these hallucinations leads down a dangerous road.

@bloc

I understand your point, and most of my colleagues agree with you. I personally don't share that view; I would argue for human-curated entry points. I had given my students an introductory reference list, and we had a day-long introductory workshop as a group. These students did not read the articles, they reached for the genAI summary. Once again, I don't blame them as individuals; it's the whole cultural environment.

#noAI

@the_roamer I think the case for human-curated references is even stronger in academia, where researching original material is an important skill. I guess my point is simply that, for me, a line is crossed where people start trusting AI on things they themselves no longer understand. That's where the ability to learn gets lost and humanity starts on a downward slope.

@bloc @the_roamer At that point, an LLM is just a wildly inefficient search engine which can’t tell you where the information came from, and which frequently makes up nonsense.

I get that somebody extremely new to a field won’t even know the right vocabulary to use to ask their question or look something up. That’s a problem, for sure, but it’s not one even second-year undergrad students should encounter often, let alone grad students or people working as engineers in the field.

@bob_zim @bloc @the_roamer

"At that point, an LLM is just a wildly inefficient search engine which can’t tell you where the information came from, and which frequently makes up nonsense."

Is this your personal experience, or are you "making up nonsense"?
Maybe consider PAYING for the engine and not using the sideshow-booth cripple-ware free version?
Consider changing the engine?

My LLM of choice (#Claude) cites all its web search sources, shows its reasoning path, and gives me more accurate results than Google search.