“Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.”

https://www.media.mit.edu/publications/your-brain-on-chatgpt/

Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task – MIT Media Lab

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and …

MIT Media Lab

Microsoft researchers say an overdependency on AI tools like Copilot negatively impacts people's critical thinking capabilities.

https://www.windowscentral.com/software-apps/copilot-and-chatgpt-makes-you-dumb-new-microsoft-study

Will an overreliance on Copilot and ChatGPT make you dumb? A new Microsoft study says AI 'atrophies' critical thinking: "I already feel like I have lost some brain cells."

Windows Central
I’m refusing to onboard to Copilot at work. I’m the only person. When asked why, I sent them the MIT and Microsoft research papers as openers.
@GossiTheDog AI is OK for me as long it stays in a Browser Window.
@masek @GossiTheDog that isn't necessarily meaningful given the constant attempts by Google to further enshittify the Web by pushing intrusive, unfree technologies. As long as people are using Chrome, it's only a matter of time until whatever you think is safer in a browser will be just as intrusive if not more than a standalone software application.

@masek
And that page in that browser window is served from a fossil fuel-powered, drinking water-guzzling DC near you.

@GossiTheDog

@dzwiedziu @masek @GossiTheDog And so is all the internet traffic ever. What is your point?

@lukerufkahr
The Internet is occasionally beneficial and runs on more or less efficient infrastructure.

The slop extruders run inefficient algorithms and add enormously to that cost, providing no benefit to society at large.

@masek @GossiTheDog

@masek @GossiTheDog it’s ok with me as long as it stays verbal. As an intangible prompt it works well to point you in a new direction. The output is too inconsistent for a physical form, that makes it too easy to mistake for information. Outside of that it’s the friend that everyone knows is a liar. Crazy stories at the bar but I’d never work with the dude.
@masek @GossiTheDog lemme correct that for you because you misspelled "AI is okay for me because it's actively killing the planet and using up valuable fresh water and I'm fine with the dehumanization of life".

@jadedtwin That AI is running on a Mac Mini M2 Pro at home, so I can benchmark exactly what it needs. A text query is usually about 300 Ws of electricity, and there is no water involved.

About 85% of my current electricity usage is self-produced by my photovoltaics.

It doesn't help you make a good argument to proclaim the other side an idiot who doesn't understand what he is doing.

@GossiTheDog

@masek @jadedtwin @GossiTheDog but what about the training costs, they ask, as a large percentage of the world is bombed into oblivion and tankers are set on fire.

Thank fuck we recycle and scold each other online, changing the world by making ourselves miserable one toot at a time.

@masek @jadedtwin @GossiTheDog Ah, those numbers would be interesting, indeed. Are you sure you mean an energy consumption of 300 Ws (watt-seconds = joules)? Because 300 J ≈ 0.083 Wh, that would be extremely low: only about 3 % of the estimated 2.5 Wh per prompt to ChatGPT. Maybe you miscalculated?

@camelCaseNick @jadedtwin @GossiTheDog Nope: the wattage increases by 5-10W during the query which takes about 30s. The numbers reported in the press are very likely inaccurate.

I use ollama on a Mac Mini M2 Pro.
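The arithmetic behind these figures can be sanity-checked in a few lines of Python (a minimal sketch; the 5-10 W extra draw and ~30 s query duration are the numbers claimed in the posts above, not independently measured):

```python
# Back-of-envelope check of the per-query energy figures discussed above.
# Assumed inputs (from the thread): an extra power draw of 5-10 W during
# a local ollama query that runs for about 30 seconds.

def query_energy_wh(extra_watts: float, seconds: float) -> float:
    """Energy in watt-hours for a sustained extra power draw."""
    joules = extra_watts * seconds  # 1 W sustained for 1 s = 1 J (= 1 Ws)
    return joules / 3600            # 3600 J = 1 Wh

low = query_energy_wh(5, 30)    # 150 J ≈ 0.042 Wh
high = query_energy_wh(10, 30)  # 300 J ≈ 0.083 Wh

# Comparison point cited in the thread: ~2.5 Wh per ChatGPT prompt,
# so the high-end local estimate is roughly 3 % of that figure.
print(f"{low:.3f} Wh to {high:.3f} Wh per query")
```

This only accounts for marginal inference energy on the local machine; as noted downthread, press figures for hosted models likely fold in training and datacenter overheads.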

@masek I think those media reports probably factor in training costs. Also, I wouldn't be surprised if ChatGPT consumed more power than local models on Apple hardware, due to the model size and GPU architecture.
@masek self-delusion is a dangerous slippery slope. "I won't get addicted to heroin" says every heroin addict in the beginning
@jonahgibberish I am 60 now. I think I don't have the energy for a new addiction any more 😄
@GossiTheDog I gave it the benefit of the doubt. It sucked at the first questions I asked it, i.e. it generated code that didn't compile and that was impossible even according to the documentation. All other reasons aside, that alone did it for me.

@GossiTheDog it feels like the equivalent of hand-holding. A decade ago, when I first tried to mentor someone, I noticed that offering quick replies to problems they could figure out on their own made them stop thinking.

Ignoring them for 15-30 minutes before replying usually resulted in a:

"never mind, I figured it out".

Feels the same when using Copilot. I tend to disable it.

@gabriel @tiotasram @GossiTheDog
A maxim teachers sometimes use: “the minimum intervention to get them unstuck”

(Not always the right advice, but it is more often than our natural inclinations would say it is!)

@inthehands @gabriel @GossiTheDog yup.

The Socratic method is great for avoiding over-help too.

@inthehands @tiotasram @GossiTheDog yes. I only give prompt answers when I know it's something they can't know at their current point in the learning curve. But for things that have already been explained and just need to be put together to form a solution, I tend to leave them to it for a while before asking some guiding questions. And eventually help unblock them if they are still stuck.

@GossiTheDog Same situation (but we're two) and we have sent those same docs + the Apple and the recent EchoLeak vuln.

Perceived as negative...

@GossiTheDog I woke up and logged into work and magically had it enabled and added to a teams group. I have to refrain from shit posting.
@GossiTheDog
Think I've used an LLM for actual work purposes twice so far. Most of the time just use it to see if the hidden prompt I add to documents works well enough to mess with anyone trying to feed them to the system...
@GossiTheDog what does "on boarding" to copilot mean here? Enabling it or usage or...?
@GossiTheDog Call me cynical, but I think lots of companies would be just fine if AI leads to individual skill attrition in the long term. They could reap, as they see it, short-term productivity gains at the cost of "expending" a human resource that they can then lay off. Not all companies, but many.
@GossiTheDog Hi, could you send me the papers you mentioned, so I can also promote awareness 😉
@GossiTheDog so, if over-reliance makes you dumb, what does simple usage make you?

@jb @GossiTheDog The paper covers it. They had the brain-only group use ChatGPT in a subsequent session, and noticed no significant change in neural activity.

So if you do the work first, then the LLM has no adverse effects. But if you've already done the work, then what's even the point of using it?

@richarddegenne @GossiTheDog

But that’s not even usage, then. You’ve solved the problem already.

@jb @GossiTheDog Then your definition of "usage" is already over-reliance.

Anything the LLM does, you don't, so you don't benefit from the mental exercise of doing it.

@richarddegenne @GossiTheDog what is usage, then?

Is my usage of “ping” leveraging the tool, or am I just too damn lazy to build my own socket and packet?

What is “proper usage” of an LLM or LRM? If any amount of it is bad, there is no usage at all, just over reliance.

@GossiTheDog
@nadia_z
Research from Captain Obvious?
Who else would use tools like Copilot?
@GossiTheDog @nadia_z
Sorry for that spontaneous bad habit of casually questioning serious studies.
But I cannot imagine that they could run studies on groups of people who are not already biased for or against LLM habits.
@Nowhereman @GossiTheDog @nadia_z I’m a little rusty, but I could whip the shit out of an LLM at Contra.
@GossiTheDog
Isn't that an intentional side effect? 😎
@GossiTheDog but Microsoft continues to push copilot regardless.
@GossiTheDog that is exactly intentional, as installing wannabe-"#AI" as a 'new clergy' / 'oracles' is desirable for #Cyberfascists!
@GossiTheDog overdependence on search engines has similar results for the same reasons

@sawaba @GossiTheDog To be fair, the same can be said of any tool, like GPS making you bad at orienting yourself or Stack Overflow making you lazy.

Being uncomfortable and struggling is part of problem solving and one should put themselves in that position to stay fresh.

@dufresnetech @sawaba @GossiTheDog

So true. I used a GPS for months on my commute to school, only to not remember how to get there.

Only once I printed out the instructions could I begin to recall my trip.

I think mapquest is making a comeback :P

@sawaba @GossiTheDog Strong disagree. Use of search engines is research as it's supposed to work, and builds cognitive capability. It might reduce rote memorization capability, on the basis that you're conditioned to know you can find basic facts when needed.

@dalias @sawaba @GossiTheDog
I suspect this depends on whether one uses a search engine to look up facts and forget them until the next time you need a search engine to “remember” them

Versus using a search engine as if you were using a really big card catalogue to look up resources you want to add to your research

Like I have a huge database on my computer of files and PDFs that I looked up using search engines

I’m not sure how else I would have acquired them

That’s how I wrote my book

@dalias @sawaba @GossiTheDog
All that computer stuff may not have expanded my memory

If I had huge paper filing cabinets it would last longer if the digital apocalypse happens

But I assume people use LLMs not just for searching but for writing papers and such

That seems like it would be terrible for learning

If I was a rich parent at a #AI company, I would send my kids to a school that didn’t allow phones or computers with this stuff on it

@GossiTheDog pretends to be surprised by the results of the research.

I'm seeing in real time the decline of a wonderful engineer who depends on ChatGPT and Copilot to socialize, program and write documentation. I fear for the future.


@GossiTheDog Microsoft CEO on the other hand, insists the researchers are stupid and have no idea what they're doing. It works just fine for him and his critical thinking skills are sharper than ever.

Have you tried new Windows 11 Ad-Supported Edition?

@GossiTheDog they just figured that out now?! 😂

@GossiTheDog

I want to have a study done on users of GenAI tools who take the time to validate outputs and steer them into accurate responses.
Key metrics should include:
- Coffee consumption
- Headaches experienced
- Time spent pacing in anger
- ‘Travis Bickle mirror’ moments

@GossiTheDog is this the research they tried to hide when the results came out that didn't support their push for AI?
@GossiTheDog "Overdependency" bs. *Any use at all* has severe negative cognitive impacts.
@GossiTheDog You know what's convenient about "overdependency"? It suggests a safe dose exists, but conveniently doesn't have to communicate what that dose is.

@GossiTheDog No duh - I can't even remember phone numbers or street addresses.

I'm not going to make myself even dumber by using "AI" :-)

@GossiTheDog ah too bad it's "just" a self-reported effect for now.

These preliminary findings should give enough reason for concern though, that funding for real randomized controlled trials with large N should be made available ASAP.

@GossiTheDog I feel like people who've been paying attention already knew this at some level, but it's nice that there are papers slowly coming out that corroborate this.
@GossiTheDog I'm not really surprised. Still great to have some proper scientific research about this now.

@GossiTheDog I can feel this myself, and have seen it in others.

As with similar things in life, I've felt that internal sense of "I don't need to keep this in my head now something external can be used" - maybe an extreme continuation of the "I can just google that" phenomenon.

When you're outsourcing the thinking for your main source of income, that becomes a scarier thing to start doing.