The educator panic over AI is real, and rational.
I've been there myself. The difference is I moved past denial to a more pragmatic question: since AI regulation seems unlikely (with both camps refusing to engage), how do we actually work with these systems?

The "AI will kill critical thinking" crowd has a point, but they're missing context.
Critical reasoning wasn't exactly thriving before AI arrived: just look around. The real question isn't whether AI threatens thinking skills, but whether we can leverage it the same way we leverage other cognitive tools.

We don't hunt our own food or walk everywhere anymore.
We use supermarkets and cars. Most of us Google instead of visiting libraries. Each of these trade-offs changed how we think and which skills matter. AI is the next step in this progression, if we're smart about it.

The key is learning to think with AI rather than being replaced by it.
That means understanding both its capabilities and our irreplaceable human advantages.

1/3

#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EthicalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy

AI isn't going anywhere. Time to get strategic:
Instead of mourning lost critical thinking skills, let's build on them through cognitive delegation—using AI as a thinking partner, not a replacement.

This isn't some Silicon Valley fantasy.
Three decades of cognitive research have already mapped out how this works:

Cognitive Load Theory:
Our brains can only juggle so much at once. Let AI handle the grunt work while you focus on making meaningful connections.

Distributed Cognition:
Naval crews don't navigate with individual genius—they spread thinking across people, instruments, and procedures. AI becomes another crew member in your cognitive system.

Zone of Proximal Development:
We learn best with expert guidance bridging what we can't quite do alone. AI can serve as that "more knowledgeable other" (though it's still early days).
The table below shows what this looks like in practice:

2/3

#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EthicalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy

Critical Reasoning vs. Cognitive Delegation

Old-school focus:
Building internal cognitive capabilities and managing cognitive load independently.

Cognitive delegation focus:
Orchestrating distributed cognitive systems while maintaining quality control over AI-augmented processes.

We can still go for a jog or hunt our own deer, but to reach the stars we apes do what apes do best: use tools to build on our cognitive abilities. AI is a tool.

3/3

#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EthicalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy

LLMs get worse the more they talk to people: arxiv.org/abs/2505.06120. There's something fundamentally broken about both #AI #metacognition and its #ActiveListening capability.

LLMs Get Lost In Multi-Turn Conversation

Large Language Models (LLMs) are conversational interfaces. As such, LLMs have the potential to assist their users not only when they can fully specify the task at hand, but also to help them define, explore, and refine what they need through multi-turn conversational exchange. Although analysis of LLM conversation logs has confirmed that underspecification occurs frequently in user instructions, LLM evaluation has predominantly focused on the single-turn, fully-specified instruction setting. In this work, we perform large-scale simulation experiments to compare LLM performance in single- and multi-turn settings. Our experiments confirm that all the top open- and closed-weight LLMs we test exhibit significantly lower performance in multi-turn conversations than single-turn, with an average drop of 39% across six generation tasks. Analysis of 200,000+ simulated conversations decomposes the performance degradation into two components: a minor loss in aptitude and a significant increase in unreliability. We find that LLMs often make assumptions in early turns and prematurely attempt to generate final solutions, on which they overly rely. In simpler terms, we discover that *when LLMs take a wrong turn in a conversation, they get lost and do not recover*.

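The setup behind that result is easy to picture: take a task whose full specification is known, reveal it to the model one "shard" per turn, and compare the outcome against handing over the whole spec in a single prompt. Below is a minimal, self-contained Python sketch of that comparison. ToyModel is a hypothetical stand-in, not the paper's simulation harness and not a real LLM; it caricatures the failure mode the authors report, committing to an assumption on the first turn and never revising it.

```python
from typing import List, Optional

class ToyModel:
    """Hypothetical stand-in for an LLM. It caricatures the reported
    failure mode: it anchors on its first-turn assumption and keeps
    building on it instead of revising when later turns add detail."""

    def __init__(self) -> None:
        self.assumption: Optional[str] = None

    def reply(self, prompt: str) -> str:
        if self.assumption is None:
            self.assumption = prompt  # premature commitment on turn one
        return f"answer built on: {self.assumption!r}"

spec_shards: List[str] = ["sort the list", "in descending order", "ignoring duplicates"]

# Single turn: the fully-specified task arrives in one prompt.
single_answer = ToyModel().reply("; ".join(spec_shards))

# Multi turn: the same requirements arrive one shard per turn.
model = ToyModel()
for shard in spec_shards:
    multi_answer = model.reply(shard)

print("single-turn:", single_answer)  # built on the full spec
print("multi-turn: ", multi_answer)   # stuck on the first shard: "lost"
```

In the real experiments the degradation is statistical rather than absolute, and the paper attributes most of it to unreliability rather than lost aptitude; the sketch only makes the "wrong turn, no recovery" mechanism concrete.
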
ℹ️ Event in Tübingen with two of our researchers: Dr. Helen Fischer will be talking about "Being Right vs. Knowing When You're Not" and Dr. Jürgen Buder about "Artificial Intelligence and Human Intelligence: What are the Differences?". 🗓️ Mon, 19.05.2025
https://pintofscience.de/event/beautiful-mind-2 #wisskomm #PintofScience #sciencecommunication #science #wissenschaftskommunikation #ai #psychology #metacognition
Mind, Machines & Misconceptions

Explore your own mind while you sip your pint! How do we know when we’re wrong about politically charged topics like climate change and COVID-19? Can yoga ...

What is wise AI? Discover why metacognition is the key to the next generation of AI!

AI that reflects on itself
More safety and adaptability
The future of intelligent systems

#ai #ki #artificialintelligence #metacognition #AGI #KIWeisheit

LIKE, share, READ and FOLLOW now! Write to us in the comments!

https://kinews24.de/ki-weisheit-metakognition/

AI Wisdom & Metacognition: Why the Next AI Will Be Wise

AI wisdom & metacognition: learn why current AI needs to be not just smart but wise, and how metacognition leads to more robust & safer AI. Update 2025!

What is it like to be you?

In 1974, in a landmark paper, Thomas Nagel asks what it’s like to be a bat. He argues that we can never know. I’ve expressed my skepticism about the phrase “what it’s like” or “something it is like” before, and that skepticism still stands. I think a lot of people nod at it, seeing it as self explanatory, while holding disparate views about what it actually means.

As a functionalist and physicalist, I don’t think there are any barriers in principle to us learning about the experience of bats. So in that sense, I think Nagel was wrong. But he was right in a different sense. We can never have the experience of being a bat.

We might imagine hooking up our brain to a bat’s and doing some kind of mind meld, but the best we could ever hope for would be to have the experience of a combined person and bat. Even if we somehow transformed ourselves into a bat, we would then just be a bat, with no memory of our human desire to have a bat’s experience. We can’t take on a bat’s experience, with all its unique capabilities and limitations, while remaining us.

But the situation is even more difficult than that. The engineers hooking up our brain to a bat’s would have to make a lot of implementation decisions. What parts of the bat’s brain are connected to what parts of ours? Is any translation in the signaling necessary? What if several approaches are possible to give us the impression of accessing the bat’s brain? Is there any fact of the matter on which would be “the right one”?

Ultimately the connection between our brain and the bat's would be a communication mechanism. We could never bypass that mechanism to get to the "real experience" of the bat, just as we can never bypass the communication we receive from each other when we discuss our mental states.

Getting back to possible meanings of WIL (what it’s like), Nagel makes an interesting clarification in his 1974 paper (emphasis added):

But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like *for* the organism.

This seems like a crucial stipulation. It is like something to be a rock. It’s like other rocks, particularly of the same type. But it’s not like anything for the rock. (At least for those of us who aren’t panpsychists.) This implies an assumption of some degree of metacognition, of introspection, of self reflection. The rock has overall-WIL, but no reflective-WIL.

Are we sure bats have reflective-WIL? Maybe it isn’t like anything to be a bat for the bat itself.

There is evidence for metacognition in mammals and birds, including rats. The evidence is limited and subject to alternate interpretations. Do these animals display uncertainty because they understand how limited their knowledge is? Or because they’re just uncertain? The evidence seems more conclusive in primates, mainly because the tests can be sophisticated enough to more thoroughly isolate metacognitive abilities.

It seems reasonable to conclude that if bats (flying rats) do have metacognition, it’s much more limited than what exists in primates, much less humans. Still, that would give them reflective-WIL. It seems like their reflective-WIL would be a tiny subset of their overall-WIL, perhaps a very fragmented one.

Strangely enough, in the scenario where we connected our brain to a bat’s, it might actually allow us to experience more of their overall-WIL than what they themselves are capable of. Yes, it would be subject to the limitations I discussed above. But then a bat’s access to its overall-WIL would be subject to similar implementation limitations, just with the “decisions” made by evolution rather than engineers.

These mechanisms would have evolved not to provide the bat with the most complete picture of its overall-WIL, but to provide whatever enhances its survival and genetic legacy. Maybe it needs to be able to judge how good its echolocation image is for particular terrain before deciding to fly in that direction. That assessment needs to be accurate enough to keep it from flying into a wall or other hazards, but not so accurate as to give it a faithful model of its own mental operations.

Just like in the case of the brain link, bats have no way to bypass the mechanisms that provide their limited reflective-WIL. The parts of their brain that process reflective-WIL would be all they know of their overall-WIL. At least unless we imagine that bats have some special non-physical acquaintance with their overall-WIL. But on what grounds should we assume that?

We could try taking the brain interface discussed above and looping it back to the bat. Maybe we could use it to expand their self reflection, by reflecting the brain interface signal back to them. Of course, their brain wouldn’t have evolved to handle the extra information, so it likely wouldn’t be effective unless we gave them additional enhancements. But now we’re talking about upgrading the bat’s intelligence, “uplifting” them to use David Brin’s term.

What about us? Our introspective abilities are much more developed than anything a bat might have. They're much more comprehensive and recursive, in the sense that we not only can think about our thinking, but think about the thinking about our thinking. And if you understood the previous sentence, then you can think about your thinking of your thinking of… well, hopefully you get the picture.

Still, if our ability to reflect is also composed of mechanisms, then we’re subject to the same “implementation decisions” evolution had to make as our introspection evolved, some of which were likely inherited from our rat-like ancestors. In other words, we have good reason to view it as something that evolved to be effective rather than necessarily accurate, mechanisms we are no more able to bypass than the bat can for theirs.

Put another way, our reflective-WIL is also a small subset of our overall-WIL. Aside from what third person observation can tell us, all we know about overall-WIL is what gets revealed in reflective-WIL.

Of course, many people assume that now we’re definitely talking about something non-physical, something that allows us to have more direct access to our overall-WIL, that our reflective-WIL accurately reflects at least some portion of our overall-WIL. But again, on what basis would we make that assumption? Because reflective-WIL seems like the whole show? How would we expect it to be different if it weren’t the whole show?

Put yet another way, the limitation Nagel identifies in our ability to access a bat’s experience seems similar to the limitation we have accessing our own. Any difference seems like just a matter of degree.

What do you think? Are there reasons to think our access to our own states is more reliable than I’m seeing here? Aside from third party observation, how can we test that reliability?

#Consciousness #introspection #metacognition #phenomenalConsciousness #Philosophy #PhilosophyOfMind

Dr Peter Sjöstedt-Hughes

Philosopher of Mind and Metaphysics

Meditation 3/6 — SCRIPT #2

https://skeptikon.fr/w/xshiaHE9LFRfu38kfgN964
Not a professional #linguist, but I wonder: have there been new #metacognition trends in language #acquisition that have proven sound, in your view, and that point in a different direction from i+1 (learning what's easy to infer, plus one added detail of complexity)? Any advice from an expert in #linguistics is welcome.