@Giliell @tante as an educator, I get this. But we educators need to think differently about how to facilitate the process of learning.
As you say, it's not about the answers. So if we are testing for understanding and insight, creative thinking and critical thinking, then *turn the process around*. Rather than having students find answers to questions, we should be helping them evaluate the answers available to them.
@Giliell @tante It absolutely addresses the issue. When kids know how to find the answers, you need to stop asking simple questions. You need to get kids to evaluate results rather than assuming the only thing they ever want is an answer.
The very fact that kids go to AI to get an answer is evidence that they don't see the value in producing the result themselves. Yet they will put hours into finding the right tool and evaluating those tools and their outputs. Harness that enthusiasm.
@[email protected] @[email protected] It absolutely addresses the issue. When kids know to find the answers, you need to stop asking simple questions. You need to get kids to evaluate results rather than assuming the only thing they ever want is an answer. The very fact that kids go to AI to get an answer is evidence that they don't see the value in delivering a result. Yet they will put hours into finding the right tool and evaluating those tools and their outputs. Harness that enthusiasm.
In the context of education specifically, it also just shows a complete disregard for understanding and knowledge having value in and of themselves. If someone believes that "AI" is a good method for achieving correct results, that shouldn't be _enough_ to warrant using it in education.
In the last two years of secondary school in Germany, you have to pick a bunch of specializations: subjects you want to focus on to a greater degree. You spend more time on these subjects, and your final grade is strongly influenced by your results in those courses. When I picked math as […]
The current crop of "AI" is of course deeply problematic for all kinds of reasons, but I feel we may be missing an important point here. Suppose a corporation acquires, without stealing anyone's work, the ability to create a perfect AI: one that is not lying, is not biased, is kind and considerate, and does everything that can be done via a computer better than the average human. Then none of the current objections would apply any more, except the environmental one. 1/2 #NoToAI #FrugalComputing
@skjeggtroll @tante this is so interesting because just an hour ago I was internally screaming about a medical problem I had for months, other women on the internet cured this problem with estrogen cream, I wanted estrogen cream and as a perimenopausal woman I figured I could get it. But OH BOY THE GATEKEEPING OMG “well there aren’t studies saying that it will help with that problem (because they never studied it because they want women stuck at home doing free labor) blah blah blah”.
So I lied to get the estrogen cream (say you have a dry vagina that inconveniences some man and the gods of healthcare will move mountains for you) and guess what? It fixed that problem I had for six months with only two applications. Like fixed, cured, gone.
When I told my doctor, he launched into this long-winded explanation about how the estrogen cream helped the problem. All I could think about was how ChatGPT probably would have recommended the estrogen cream had I put my symptoms into it.
Here I am fighting our AI overlords when they may be the key to ending the suffering for so many of us who get ignored by doctors because of their own personal bias.
@sloanlance @tante I assume it’s similar to how I don’t remember anyone’s phone number anymore since they’re all stored in my phone.
It’s great that the RAM space in my brain is freed up for other things, and it’s really helpful when I’m having terrible recall issues and can’t even think of the word I need to say in my sentence... If I had to remember a phone number in an emergency at a time like that, it could be a disaster.
But if I lost my phone and needed to call a loved one for help, I’d be totally helpless. I sort of remember my old best friend’s phone number because it’s close to mine, and even though we aren’t friends she would probably help me.
But I imagine they mean something like that. How do you even evaluate sources if you don’t ever even look at original sources because ChatGPT aggregates all the research for you?
@tante I have a single mom friend who wouldn’t apply for food stamps because when she asked ChatGPT the income limits it just gave her the income limits, and she exceeds that.
It didn’t tell her that if she pays for heat they deduct $400 or more, if her rent counts as excess living expense they deduct whatever the excess is that she’s paying, it didn’t tell her that since her child has one of those IEP education plans she can deduct a whole bunch of expenses related to disability or healthcare or education.
So she was like, “I can’t apply for food stamps because my income exceeds the limit.” Lady, you can apply, and they’ll walk you through the deductions that may reduce your countable income enough to make you qualify.
But I suppose our government loves this use and will encourage it because it leads people to believe there’s no point in applying.
@tante You’re right — when AI replaces real connection, it can fuel learned helplessness.
Confidence isn’t built through perfect answers. It’s built through being seen, supported, and challenged by other humans.
AI can assist, but it can’t replace the power of a teacher saying, “I see your effort — keep going.”
If we want AI to help, we need to design it to strengthen human connection, not replace it.
But easier said than done imo.
OK Boomer.
They said the same thing about writing things down.
@tante I would argue that it can give people a false sense of confidence, which can be both bad and good, depending on the risk and whether the confidence was actually justified.
I.e., sparring with an idea or thought you had, when you wanna check whether you’ve reflected enough on your options. Same as when you go on Google, or to the library, to explore your options.
An LLM can practically be an overglorified brainstorming machine.
It really depends on how people reflect on the information given.
All significant new technologies not only influence society in profound ways, they actually alter the way humans think. It is very difficult to anticipate these changes or even attempt to guide them. We are kind of just along for the ride, and the best we can do is to try to protect ourselves from the worst potentialities.
I have not seriously used "AI" for writing a paper or writing code. Being the generalist I am, I have dabbled broadly.
From my observations of others using "AI", I have seen good results where there is a strong understanding of what the goal is and what steps are needed along the way: comparing results from different models and presenting the results that meet the goal.
In a teaching situation, there of course needs to be good foundational knowledge among the teachers themselves. The other thing that needs to be encouraged is having the classroom resources to run models locally. Like all intensive computing, understanding efficiency is vital: a larger model does not guarantee a significantly better result. I heard there is work on running models on a phone, so it should be doable as long as the goals are set appropriately.