I think the most tragic aspect of deploying "AI" in teaching and learning situations is how much it pushes people into a situation of learned helplessness. This constant feeling of not knowing how to do a thing, of being incapable of actually doing work on one's tasks, is mentally so harmful. How do people under those conditions gain confidence in their abilities? Like ever?
@tante I mean, you probably know this already, but… I don't think they're supposed to.
@tante it captures people in the first part of the Dunning-Kruger graph. Artificially boosting confidence until they hit real problems that they have absolutely no chance of ever overcoming.
@tante So what's odd is that there appears to be evidence that they think this *is* working on one's tasks. They are supremely confident that what they produce is sufficient.... And to a large extent they are not wrong about that. What's slightly more worrying is when they discover that's not enough. And you'd think then they would lose confidence right? Wrong! That's when they seek out a better tool. It's a remarkably interesting phenomenon.
@joannejacobs @tante The problem is: we're not actually interested in the answers, but the process. I know the difference between an absolute and a constitutional monarchy, I can perfectly well write an email in English, and I'm actually not even interested in their view on school uniforms. I want them to compare information, use polite forms and write a coherent argument. But they get away with using an LLM like I did with copying my friend's homework: safe in the moment, utterly clueless in the exam

@Giliell @tante as an educator, I get this. But we educators need to think differently about how to facilitate the process of learning.

As you say, it's not about the answers. So if we are testing for understanding and insight, creative thinking and critical thinking, then *turn the process around*. Rather than finding answers to questions, we should be helping them to evaluate the answers available to them.

@joannejacobs @tante That's true, that's nice, but pardon me for being blunt: these are also empty phrases that don't address the issue we're talking about, because the kids are actively skipping that crucial step and refusing to engage because "Chat GPT has the answers"

@Giliell @tante It absolutely addresses the issue. When kids know how to find the answers, you need to stop asking simple questions. You need to get kids to evaluate results rather than assuming the only thing they ever want is an answer.

The very fact that kids go to AI to get an answer is evidence that they don't see the value in delivering a result. Yet they will put hours into finding the right tool and evaluating those tools and their outputs. Harness that enthusiasm.

@joannejacobs @tante These kids can't read at a third-grade level... And of course it doesn't address the "how do we enable them" question at all. The answer to that question is always "you're the teacher, figure it out", while we're battling ever-decreasing attention spans.
@Giliell @tante that's true. So let's work together, rather than against each other, to find a solution.
@joannejacobs @tante That's another one of the phrases I can't hear anymore
@Giliell @tante well if all you can do is be negative, then you are no educator.
@joannejacobs @tante Aaaaand here we are. This is exactly what I'm talking about. You use those nice phrases, and when anybody points out that those nice phrases are pretty empty and not doing anything, then they're the problem and obviously a bad teacher. "Let's work together." Tell me how. Tell me who. Tell me where the resources are coming from. Tell me the goals of the cooperation.
@Giliell @tante That's absolutely not what happened. You noted you were not prepared to do the work to produce a solution. You are at fault.
@Giliell @tante And just to remind you, I GAVE you the answer and you rejected that too. Here's an idea. Stop complaining. Stop focusing on the barriers. The fact that kids can't read at 3rd grade level is not the point. They are using the tools. They know how to use the tools. Make them use the tools.
@joannejacobs @tante No, you didn't. You said we had to enable them, but you never offered any ideas as to how to do that or resources teachers can use. I can see through that bullshit from a mile away.
@joannejacobs @tante I rejected that because, as I already said, it doesn't address the many issues that led to this point. You are absolutely not offering anything we don't already know. May I ask which grades and institutions you teach at?
@Giliell @tante you clearly reject everything because you don't want to work with anyone and you want other people to do your work. And that's ironic.
@Giliell @tante and if you bothered to do any checking, you would have looked up my education credentials.
@joannejacobs @tante Keep it up with the insults. Calling people "poor learners" is the hallmark of brilliant educators. I also didn't ask about your credentials, I asked where you're teaching. You're also making up stuff like me "refusing to work with others" when you have offered no opportunity whatsoever. Again, this is typical: people who call out empty phrases in a dysfunctional system get blamed.
@Giliell @tante This domain is not helped by people like you. You place yourself above others, you criticise and condemn and choose not to take the opportunities to work with people offering alternatives. That is the very definition of a poor learner.

@tante

In the context of education specifically, it also just shows a complete disregard for understanding and knowledge having value in and of themselves. If someone believes that "AI" is a good method for achieving correct results, that shouldn't be _enough_ to warrant using it in education.

@skjeggtroll @tante Sometimes comparisons to pocket calculators are made. Nobody misses doing logarithms by hand, and a society with calculators is vastly superior to one without them.
What are good arguments against that? I think the main one is that AI products don't replace basic things like multiplication. They replace things that deal with human communication, moral growth, creativity. Plus: you don't own the LLM like you do a pocket calculator.
Is a neural network like a pocket calculator? "AI" and epistemic injustice

@tante @skjeggtroll Thanks, I didn't know your essay about the calculator comparison yet. It's indeed an important point: owning the skills and knowledge. There was a kinda depressing article about automation coming for the IT workers who have enjoyed superstar status because their unique skills were in such high demand. I think it was by Doctorow.
@compfu @skjeggtroll @tante The latter is a very important point, I wrote my thoughts about that before:
https://scholar.social/@wim_v12e/113562289991941216
Wim🧮 (@[email protected])

The current crop of "AI" is of course deeply problematic for all kinds of reasons but I feel we may be missing an important point here. Suppose a corporation acquires,without stealing anyone's work, the ability to create a perfect AI: one that is not lying, is not biased, is kind and considerate and does everything that can be done via a computer better than the average human. So none of the current objections would apply any more, except the environmental one. 1/2 #NoToAI #FrugalComputing


@skjeggtroll @tante this is so interesting because just an hour ago I was internally screaming about a medical problem I had for months, other women on the internet cured this problem with estrogen cream, I wanted estrogen cream and as a perimenopausal woman I figured I could get it. But OH BOY THE GATEKEEPING OMG “well there aren’t studies saying that it will help with that problem (because they never studied it because they want women stuck at home doing free labor) blah blah blah”.

So I lied to get the estrogen cream (say you have a dry vagina that inconveniences some man and the gods of healthcare will move mountains for you) and guess what? It fixed that problem I had for six months with only two applications. Like fixed, cured, gone.

When I told my doctor he launched into this long winded explanation about how the estrogen cream helped the problem. All I could think about was how ChatGPT probably would have recommended the estrogen cream had I put my symptoms into it.

Here I am fighting our AI overlords when they may be the key to ending the suffering for so many of us who get ignored by doctors because of their own personal bias.

@tante
Your message is subtle and I'm missing something. How does using AI in teaching make students helpless?

@sloanlance @tante I assume it’s similar to how I don’t remember anyone’s phone number anymore since they’re all stored in my phone.

It's great that the RAM space in my brain is freed up for other things, and it's really helpful for when I'm having terrible recall issues and I can't even think of the word I need to say in my sentence... If I had to remember a phone number in an emergency at a time like that it could be a disaster.

But if I lost my phone and needed to call a loved one for help, I'm totally helpless. I sort of remember my old best friend's phone number because it's close to mine, and even though we aren't friends she would probably help me.

But I imagine they mean something like that. How do you even evaluate sources if you don’t ever even look at original sources because ChatGPT aggregates all the research for you?

@maggiejk @sloanlance @tante using ai in education doesn't mean getting ai to just give you the answer, this is a strawman argument. ideally, ai should be fulfilling the same role a teacher would, evaluating how the student understands the problem and helping them arrive at the solution themselves. the only exception is, unlike human educators, ais have practically infinite patience and can attend to every bit of information provided to them by developers (arguably, that last part may not be true now for every model/system, but it is an area where a lot of work is being done). it is just a question of implementation

@tante Yes - training wheels on a bicycle is a better analogy for AI in education, than calculators. Training wheels don't help you balance, they balance for you, and so stop you from gaining the exact thing you need - the confidence in your own ability to balance without help.
@Zumbador @tante That's a great analogy 👍

@tante I have a single mom friend who wouldn’t apply for food stamps because when she asked ChatGPT the income limits it just gave her the income limits, and she exceeds that.

It didn’t tell her that if she pays for heat they deduct $400 or more, if her rent counts as excess living expense they deduct whatever the excess is that she’s paying, it didn’t tell her that since her child has one of those IEP education plans she can deduct a whole bunch of expenses related to disability or healthcare or education.

So she was like, "I can't apply for food stamps because my income exceeds the limit." Lady, you can apply and they'll walk you through the deductions that may reduce your income enough to make you qualify.

But I suppose our government loves this use and will encourage it because it leads people to believe there’s no point in applying.

@tante You’re right — when AI replaces real connection, it can fuel learned helplessness.

Confidence isn’t built through perfect answers. It’s built through being seen, supported, and challenged by other humans.

AI can assist, but it can’t replace the power of a teacher saying, “I see your effort — keep going.”

If we want AI to help, we need to design it to strengthen human connection, not replace it.

But easier said than done imo.

@tante AI is a research tool. ALL learning is research.
@JeraldBlackstockArt @tante could you explain why you define it using the word research? How is it a research tool?
@foundseed High-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT and AI art) https://en.wikipedia.org/wiki/Artificial_intelligence

@JeraldBlackstockArt you provided like one and a half research tools on that list. AI "art" isn't a research tool. Neither is the YouTube fash recommendation pipeline. That one's almost an anti-research tool.
@foundseed "AI 'art' isn't a research tool." All art is research. As for the other things, a tool is only as good as the person using it for their particular satisfaction.
@tante @buherator reminds me a bit of GPS navigators TBH
@raptor @tante I can confidently get lost with those too thank you very much!
@tante Maybe ask PhD students and postdocs.
@tante Writing. They can't write a coherent sentence themselves, which was already a problem before LLMs. Also they can't find actual information. They just ask their preferred chat bot and then run with what it says because they can't even look it up on Wikipedia.
@tante It is why I still prefer oils and watercolor painting to my Apple Pencil and iPad. There is great satisfaction in the old time consuming ways.

@tante

"I used AI to....", is nothing more than, "Listen I'm not an asshole but....", for the 21st Century.

#FuckAI

OK Boomer.

They said the same thing about writing things down.

@tante the vast chasm between educated and merely schooled.

@tante I would argue that it can give people a false sense of confidence, which can be both bad and good, depending on risk and whether the confidence was actually justified.

E.g. you sparring an idea or thought you had, and you wanna check if you have reflected enough on your options. Same as when you go on Google, or to the library, to explore your options.

An LLM can practically be an over-glorified brainstorming machine.

It really depends on how people reflect on the information given.

@tante They don't. That's partly why we're a nation of insecure racists who don't like it when OTHER people are confident and capable.

@tante

All significant new technologies not only influence society in profound ways, they actually alter the way humans think. It is very difficult to anticipate these changes or even attempt to guide them. We are kind of just along for the ride, and the best we can do is to try to protect ourselves from the worst potentialities.

@tante

I have not seriously used "AI" for writing a paper or writing code. Being the generalist I am, I have dabbled broadly.

From my observations of others using "AI", I have seen good results where there is a strong understanding of what the goal is and what steps are needed along the way, comparing results from different models and presenting the results that meet the goal.

In a teaching situation, there of course needs to be good foundational knowledge among the teachers themselves. The other thing that needs to be encouraged is having the classroom resources to run models locally. Like all intensive computing, understanding efficiency is vital. Using a larger model does not guarantee a significantly better result. I heard there is work on running models on a phone, so it should be doable as long as the goals are set appropriately.