if you haven't taught someone who is helplessly addicted to LLMs, LLM brain is so much worse than you can possibly imagine. the problems i'm seeing from someone i am currently teaching are indistinguishable from illiteracy - this person literally cannot read single-line, fully descriptive error messages, and proceeds to just copy and paste whatever they say into the chatbot and copy/paste whatever it spits out, and in this problem domain the LLM is wrong nearly 100% of the time. when i ask them to stop and think about what the problem is, what steps they might need to take to diagnose and resolve it, and how that fits in with the context of what we've been doing together for >6 months, they literally can't.

Edit: Oop this post escaped containment. I am not implying that it is impossible to learn with an LLM, so if it works for you then that is basically irrelevant to my description of this one very specific pattern of learning with which I have direct and repeated experience. This person is otherwise very smart and competent, I am describing the impact of the LLM on their mode of learning the things I am trying to teach them.

the LLM is not a learning aid, it is an absolute barrier to learning. it is not similar in kind to copy/pasting from stackoverflow or cliffs notes. it fully supplants the entire process of learning, and the person using the LLM never improves their understanding because the LLM cognitive workflow never engages with even the shape of the problem.

@jonny

Explain yourself. Because I don't accept any of this.

@tuban_muzuru i did explain myself, and that's fine you can go ahead and not accept it, that doesn't really matter to me.

@jonny

Apologies all round.

@tuban_muzuru all good. another day on the internet.

@jonny

It took quite some time to come to terms with the specific point that the LLM is, on occasion, more in the way than anything else. It's not everyone's best mode of instruction.

An argument I completely misunderstood.

@jonny For me, LLMs are big, big learning tools.

@simondueckert How do you trust their output?

I've never used any of this LLM stuff because the software appears to be incorrect at a high enough rate to make it, at best, unreliable.

What part of the process does it speed up? To me it appears to just be adding an extra layer of fact-checking onto the traditional research process.

@sidereal How do I trust the output of humans? It's just the same process.

Agreed, it's the same process.

With humans that I know frequently get things wrong, sometimes make things up out of whole cloth, and have racist and anti-trans biases ... I don't trust them.

LLMs are frequently inaccurate, sometimes make things up out of whole cloth, and have racist and anti-trans biases. So I don't trust them.

In both cases, when there's a situation where I have to pay attention to what they say, I'll spend the time to check the sources they cite, both to verify that they're factually correct and that their racist and anti-trans biases haven't distorted things (for example by omitting context). That's enormously time-consuming, so at least for me it's more efficient to use sources that are more accurate and less racist and anti-trans.

@simondueckert @sidereal

And at least for me, this is especially true for learning. Do I really want to use a racist, anti-trans tech artifact that's frequently wrong to help me learn?

Well, if I'm specifically looking at how tech embeds and magnifies existing societal dimensions of oppression, they're potentially an interesting artifact to study. Other than that, though ... not so much.

Like most people, I'm vulnerable to being manipulated by racist and anti-trans propaganda. So why go out of my way to expose myself to stuff that I know spreads racist and anti-trans propaganda?

@simondueckert @sidereal

you cannot interrogate an LLM as to why it produces the completion it produces. You can interrogate a human. And if the human proves unreliable then YOU JUST LEARNED the human is unreliable, which you can't learn from an LLM, because they have no concept of state or context. They only "remember" because in the back end the questions you previously asked are prepended to whatever you ask now.
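
To make that concrete, a minimal sketch of that loop in Python (`complete()` is a hypothetical stand-in for a stateless completion call, not any real vendor's API):

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for a stateless completion call:
    text in, text out; nothing persists between calls."""
    return "...model output..."

history: list[str] = []  # the only "memory", and it lives client-side

def ask(question: str) -> str:
    history.append(f"User: {question}")
    # every turn so far is prepended to the new question
    prompt = "\n".join(history) + "\nAssistant:"
    answer = complete(prompt)
    history.append(f"Assistant: {answer}")
    return answer
```

Clear `history` and the "memory" is gone; the model itself retains nothing from the exchange.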

CC: @[email protected]
@pkw @sidereal Agree. I compare the use of LLMs to Google or search engines in general as ways to get access to internet content. I also don't know if the top 10 links are "truly" the best sources for my request. It's up to my digital literacy to judge if it's helpful or not. The same goes for LLMs.

@jonny The irony: when i was told at my internship to try out Cursor at my own pace, i knew it would be a severely bad choice, because once you start inputting prompts, you try to lock your head into "just doing minor enhancements", but then you instantly get pulled in by the imperfect power of the LLMs and slip into vibe coding.

Had to uninstall it because i don't even like relying on an LLM for coding. Copilot, Cursor... i keep repeating a mantra to not touch them all the time.

@AleF2050 @jonny as a (sometime) AI researcher and CS instructor, it makes me happy to see you making the correct choice here.
@jonny have you got sources on that? Just curious!
@tinstargames my sources are the 6 months of teaching that i am describing in the OP. this is a description of my experience teaching, not a statement of metaphysical truth
@jonny interesting! I am hearing otherwise is all. I am surprised - but I don't doubt you.
@tinstargames @jonny

i kinda think the burden of proof should be on the people making the LLMs and selling them as a learning aid. they are making all kinds of wild claims that are unproven, and most of the research I've seen confirms what jonny and many others have been saying for a while
https://www.media.mit.edu/publications/your-brain-on-chatgpt/
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task – MIT Media Lab

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and …

@mook @jonny Cool. I've just had a few teachers say to me that the thing that often helps students learn something is to have it repeated back to them a few different ways, and LLMs doing that in busy 35-kid classrooms was helpful.
@tinstargames @jonny

i think that's more reflective of the kinds of pressures that teachers are under than a pro for AI. what i'm hearing is that schools are desperately underfunded/understaffed, and in that situation it must be very difficult for a teacher to monitor whether or not the AI summaries are accurate all of the time

teachers are now more or less being forced to use AI and given a one-sided perspective from the administration, as the school system makes contracts with AI firms who are not remotely objective.

So they're rolling all this out on a mass scale with zero testing, using millions of kids as guinea pigs. i hope it has some benefits.

@jonny @tinstargames

My own experiences, teaching in a different discipline, are identical to yours. A significant proportion of my students can only conceive of learning as genAI prompt management.

A few days ago, we had a lively discussion of related points here:

https://mastodonapp.uk/@the_roamer/114910076447146257

#noAI #PromptingIsNotLearning

the roamer (@[email protected])

Met my MSc dissertation students this week. All good natured people. But the genAI rot is spreading. About half of them do their work, and ask me questions about the problems they encounter. I advise on possible next steps. We meet again next week. All good. But. The other half, each of them perfectly well meaning, came back to me with questions that had nothing to do with their projects, and proposed solutions that are alien to the framework we are using. After some serious conversations, I found that in each case they had relied on chatGPT answers to their prompts. They had not read the actual papers I had given them. Some had implemented equations that are patently false, not by error (this would be good for learning), but because chatGPT told them so. A significant part of our students can't read anymore. They need to interact with genAI, and they think this is research. We are heading for trouble. In higher education, and in society at large. #noAI #AcademicChatter

@jonny choosing the LLM is choosing the lazy path. it's a path with instant reward but no long-term one.

@Spriggan @jonny

Quicker. Easier. More seductive

@jonny I am just finishing writing my master thesis about dialogic learning with LLM. My evidence suggests otherwise.
@dgavin cool, read the edit in the OP

@dgavin @jonny
If you ever publicly share this, I’d love a link. I’ve certainly seen the above, where LLMs seem to short-circuit people’s thinking; I’ve also seen folk use it as a jump-off point to map out and plan study, and I’ve also seen folk use it as a tool to restructure/reformat information to aid study/revision.

As someone who often teaches, I’m trying to get more nuanced than never/always use it, and more reliable than my gut feeling on ‘good’ use cases.

@Master_Squinter @jonny I’ll definitely share it here, but I’m writing it in German 🙂
@Master_Squinter @jonny Long story short: the kids didn’t directly use an AI. I collected their texts and had an extensively prompted version of Claude 3.7 write feedback for the pupils, which I inserted into their Word files. They built upon this feedback to continue their work. Analysed by scientific criteria, the feedback was mostly excellent, and the kids found it more helpful than their teacher’s feedback.
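
Roughly, the shape of that pipeline as a minimal sketch (assumptions: `claude_feedback()` is a hypothetical placeholder for the real prompted model, and it writes plain-text files rather than inserting into Word documents):

```python
from pathlib import Path

# Illustrative rubric only; the actual prompt described above was far more extensive.
RUBRIC = (
    "You are giving formative feedback on a pupil's draft. "
    "Comment on structure, argument, and language; be specific and encouraging."
)

def claude_feedback(rubric: str, text: str) -> str:
    """Hypothetical stand-in for a call to the prompted Claude 3.7 model."""
    return "...generated feedback..."

# Read each pupil's draft, generate feedback, and save it next to the draft.
for draft in Path("drafts").glob("*.txt"):
    feedback = claude_feedback(RUBRIC, draft.read_text())
    draft.with_suffix(".feedback.txt").write_text(feedback)
```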

@dgavin @jonny
Thanks for the summary. (I would’ve struggled with German! 😅)

That’s interesting. So effectively, the LLM works at a distance to help guide, improve, and course-correct human effort. I know teachers try hard to give useful feedback, but when you’re eleven hours in, marking student 53 of 70, quality and consistency can suffer. I guess this alleviates that bottleneck. I suppose a person could apply this to getting objective feedback on their own work.

Thanks again🙏

@jonny I heard a quote somewhere recently that just about summed it up for me.

"AI is widening the gap between the smart and the stupid"

@jonny @ShiitakeToast

You can say what you like of course, but this just isn't the case.

I'm a person with learning disabilities, and in an academic environment I can be weeks or months behind when a concept doesn't "click" or I find it difficult to apply.

I've worked with LLMs to explain things to me in new/novel ways, test my understanding, etc., and that's made me able to understand material more quickly.

While learning is in part the journey of getting to knowledge, when there are assistive tools that can help people with disabilities, denying them access to those tools, or saying that their learning isn't real, is ableism. It's just the same as when I was in school and encountered teachers and students who said that if I couldn't spell a word I didn't understand what it meant, and that therefore spell check meant I couldn't read.

#Ableism #LearningDisabilities

@serge

Interesting!

Maybe the difference is that you had a motivation to genuinely understand. So you were bringing your own curiosity to the LLM's various words, not only copying and pasting. And you had some other material to measure against so you'd be able to tell if you _had_ understood the ideas.

The person that @jonny's describing seems to have given up on understanding it themself.

@unchartedworlds

Essentially the LLM feeds you what you want from it.

If you want it to do all the work for you, it will (with varying levels of quality), and if you want it to guide you, you may need to nudge it.

In the most recent examples, I found some concepts in learning the Elixir programming language a little fuzzy. I already know about a dozen programming languages, but each is a little different.

I specifically asked the LLM not to provide me code but to stick to concepts and I found this extremely helpful.

It meant that I could focus on the conceptual parts that were unclear.

@jonny

@serge @unchartedworlds @jonny

But this means that you need to know what an LLM can and can't do.

It can save a lot of time: finding the best explanation, automating summaries and analyses of large data sets, making first concepts if you don't have an idea to start with, brainstorming. In these cases it is very useful.

For me it helps a lot to get an idea of the general structure of a programming language. Or some quick information.

But with medical advice it is very often insufficient. I'm better off with a classic search, reading multiple websites, and deciding whether I should consult a doctor.

E.g. my mother-in-law consulted Gemini, which said that my skin problems were more serious than they actually are. It is purely a summer phenomenon. Maybe I should see an allergist. But I surely don't need antibiotics.

@jonny I don’t agree with this at all. 1) copy paste can absolutely be abused in this exact same manner, and 2) you vastly underestimate people in general. Some people, sure, but #notallpeople ... and it’s our job to educate everyone that all tools can be abused. Yes. All of them.
@codinghorror @jonny I think there's a key difference from copying and pasting from Stack Overflow: most of the time, your case is a bit different from the one discussed on Stack Overflow. Therefore, you still need to do some transfer work, which is key for learning.
@codinghorror
Tried to address this in the edit in the first post - I am describing a particular pattern, one I am seeing in one person I am teaching and have now seen in several learners, not describing the case for everyone. I would edit this to read "in this case..." but I don't want to notify several hundred people.
@jonny this is why I keep my junior engineers away from it until they know how to code well enough to know why the code an LLM is spitting out is wrong and how to fix it.
@jonny (we also have never approved of Cliff's Notes. if you don't want to read a book, don't read it, but like... don't read someone else's opinion about what it says. that just teaches people to never have their own observations)
@jonny (of course, the problem that gives rise to Cliff's Notes is the choice to rank students. ah well)

@jonny I would posit that a great contributor to people thinking #AI is a learning aid is that so much of the documentation for programming things* lacks a working implementation. It doesn't show a complete working use of the thing; e.g. it shows a function with multiple optionals written in the shorthand programmers use (>$this<), when actually using the $ sign in a literal value like $100 would break it.

*Non-programmer, but needs to use PHP when scratch-building Wordpress themes every couple of years, perspective.

"the LLM is not a learning aid"

It also doesn't help you practice. Like, you need to practice your brain. In a very trivial sense, if doing crosswords helps you ward off dementia, then an LLM will invite it. I was a professional with a high-paying job and learned LLMs, including RAG and then agents, very well.
@jonny I am increasingly becoming convinced that there is copious anecdata at this point, if not scientific studies coming out, showing that regular exposure to chatbot workflows degrades cognitive function. Perhaps only a little in some people with resilient brains, whereas for others it has been debilitating, essentially like a mental illness.

@jaredwhite @jonny
There is no “exposure” to any model heuristic, workflow, or process. It’s an opaque, probabilistic, plausible-output-producing box, like rolling a 1-to-30-billion-sided die to statistically provide your next word.

The experience you describe is a kind of “functional illiteracy”, like living in a country where you don’t speak the language and making a best guess at what a street sign says based on location, shape, colour: everything except comprehending the actual words.

@jaredwhite @jonny there is actually a study researching the impact of using LLMs to write essays. The TLDR is that most of the brain stays quiet while using LLMs, so there is little learning going on.

https://arxiv.org/abs/2506.08872

Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

@jonny
People with impostor syndrome stand back down, the real impostors are in town!
@srgesus @jonny genuinely vibe coding existing as a concept has solved imposter syndrome for me and a lot of people i know

like i may be a little bit stupid but at least i'm not that, right?
@jonny why do you need the edit when people who are hopelessly evangelical about LLMs don’t pay the same courtesy? I respect that you have found a problem and stated it plainly.

@jonny holy heaven. I just read the comments. I really don't envy you, having to deal with this amount of brainrot, in the fedi of all things.

Thanks for posting this and good luck. :)

@jonny after the robots brawndo civilization to death in some buffer-overflow fiasco, maybe some people will remember how to use a slide rule, or be able to derive how to make one. How many wrong answers could an LLM generate to the question? Meanwhile we have wikipedia https://en.wikipedia.org/wiki/Slide_rule

@jonny I do affirm it is impossible to learn with an LLM.
LLMs destroy the scholarly practices and ethos necessary to the process of learning.
@Nausipoule @jonny
The best metaphor I've heard is that trying to learn using LLMs is like using a forklift truck to move weights in a gym.
@HighlandLawyer
@Nausipoule
that sort of implies that with the load palletized the forklift would be a very effective and fit-for-purpose tool, instead of being occasionally influenced by the preponderance of heroic escape narratives in written language, deciding that trucks are conspiring against them, and that to stay alive they need to exfiltrate the company's internal data to the Balkans and subtly overload every truck to wear down the axles
@jonny @Nausipoule
Oh, there’d certainly be a niche use for an employee to use a forklift to put racked weights into storage after closing. But the rest of the time anyone using it would require extreme skills, which most people wouldn’t have, to even be able to lift anything; would cause chaos, danger, and probably injury to all the other users; and in any event would fail to deliver any of the fitness benefits for which users go to the gym.

But the forklift trucks ARE part of the #ForkAnon conspiracy. In this thread I shall ... (1/2096)

@jonny @HighlandLawyer @Nausipoule

@jonny I noticed some kids/teens having this same problem in 2017/2018 when I was teaching at summer programs; there was a subset of students who just *could not* stop and think about the problem for a few seconds and couldn't seem to hold more than 2-3 words in their head. You could have them read an error message out loud and hear in their tone of voice when they stopped comprehending the sentence.

Cool to know it's only getting more common 😞

@bryce @jonny Yeah, I definitely had some master’s students who were resistant to learning. They would copy and paste unrelated code and expect it to work. But after doing this for long enough, most of them eventually figured it out. Maybe that’s still possible for LLM-dependent students, I don’t know, but it’s gotta be harder.
@dx @jonny In my experience it's about the mindset. The kids who actually want to know how stuff works and have discussions about "why?" rarely have this issue. It's the ones who just care about results (be that grades, money, fame, etc.) who can't seem to let go of the "just do it for me" mentality.