“Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.”
https://www.media.mit.edu/publications/your-brain-on-chatgpt/
Microsoft researchers say an overdependency on AI tools like Copilot negatively impacts people's critical thinking capabilities.
https://www.windowscentral.com/software-apps/copilot-and-chatgpt-makes-you-dumb-new-microsoft-study
@masek
And that page in that browser window is served from a fossil fuel-powered, drinking water-guzzling DC near you.
@lukerufkahr
The Internet is occasionally beneficial and runs on more or less efficient infrastructure.
The slop extruders run inefficient algorithms that massively multiply that cost, while providing no benefit to the general society.
@dzwiedziu @lukerufkahr @masek @GossiTheDog this. Traditional general purpose compute is pretty damn efficient at this point. AI simply is not.
Just SWAG the wattage needed to serve an old fashioned Google search query, then the wattage for a Gemini AI inference response to the same query. At least an order of magnitude.
And how many Google queries per day globally?
🌎🔫
@jadedtwin That AI is running on a Mac Mini M2 Pro at home, so I can benchmark exactly what it needs. A text query usually takes about 300 Ws (watt-seconds) of electricity, and there is no water involved.
About 85% of my current electricity usage is self-produced by my photovoltaics.
It doesn't help you make a good argument to proclaim that the other side is an idiot who doesn't understand what he is doing.
@masek @jadedtwin @GossiTheDog but what about the training costs, they ask as a large percentage of the world is bombed into oblivion and tankers are set on fire.
Thank fuck we recycle and scold each other online, changing the world by making ourselves miserable one toot at a time.
@camelCaseNick @jadedtwin @GossiTheDog Nope: the wattage increases by 5-10 W during the query, which takes about 30 s. The numbers reported in the press are very likely inaccurate.
I use ollama on a Mac Mini M2 Pro.
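A quick sanity check on those figures (taking the 5-10 W of extra draw over a roughly 30 s query at face value, as reported in this thread) puts the per-query energy right in the 150-300 Ws range claimed:

```python
# Rough per-query energy estimate for a local LLM on a Mac Mini M2 Pro,
# using the figures from the posts above: 5-10 W extra draw for ~30 s.
def query_energy_ws(extra_watts: float, seconds: float) -> float:
    """Energy in watt-seconds (joules) consumed by one query."""
    return extra_watts * seconds

low = query_energy_ws(5, 30)    # lower bound, in Ws
high = query_energy_ws(10, 30)  # upper bound, in Ws

# 1 Wh = 3600 Ws, so convert for a more familiar unit.
print(f"Per-query energy: {low:.0f}-{high:.0f} Ws "
      f"({low/3600:.2f}-{high/3600:.2f} Wh)")
```

So a single local query here costs on the order of a few hundredths of a watt-hour, which is why the poster can cover most of it with rooftop photovoltaics.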
But it won't ;-). You'll be reading it, and it will rewire your brain...
I was only half serious. But then, I was also only half joking.
"The Shallows: What the Internet Is Doing to Our Brains" by Nicholas Carr has some interesting ideas on (a) the cultural role of reading and writing and (b) the constant rewiring occurring in human brains.
We know that LLMs suffer model collapse when trained on the output of other LLMs. Given that humans cannot firewall input effectively, that their brains constantly rewire, and that the output of LLMs is somehow demonstrably different from text created by humans, even if we don't understand why and how, I am somewhat wary of letting LLM output into my brain.
I know this sounds radical, but haven't we all already experienced how bad code or badly written copy scrambles our brains, so that we're either exhausted or incapable of producing good text ourselves for some time?
I fear a similar effect, only more insidious and permanent. Those two papers seem to confirm my fears.
@GossiTheDog it feels like the equivalent of hand-holding. A decade ago, when I first tried to mentor someone, I noticed that offering quick replies to problems they could figure out on their own made them stop thinking.
Ignoring them for 15-30 minutes before replying usually resulted in a:
"never mind, I figured it out".
Feels the same when using Copilot. I tend to disable it.
@gabriel @tiotasram @GossiTheDog
A maxim teachers sometimes use: “the minimum intervention to get them unstuck”
(Not always the right advice, but it is more often than our natural inclinations would say it is!)
@inthehands @gabriel @GossiTheDog yup.
The Socratic method is great for avoiding over-help too.
@GossiTheDog Same situation (but we're two) and we have sent those same docs + the Apple and the recent EchoLeak vuln.
Perceived as negative.
As will I, from now on.
@jb @GossiTheDog The paper covers it. They had the brain-only group use ChatGPT in a subsequent session, and noticed no significant change in neural activity.
So if you do the work first, the LLM has no adverse effects. But if you've already done the work, what's even the point of using it?
But then that's not even usage. You've already solved the problem.
@sawaba @GossiTheDog To be fair, the same can be said of any tool, like GPS making you bad at orienting yourself, or Stack Overflow making you lazy.
Being uncomfortable and struggling is part of problem solving, and one should put oneself in that position to stay fresh.
@dufresnetech @sawaba @GossiTheDog
So true. I used a GPS for months on my commute to school, only to not remember how to get there.
Only once I printed out the instructions could I begin to recall my trip.
I think mapquest is making a comeback :P
@dalias @sawaba @GossiTheDog
I suspect this depends on whether you use a search engine to look up facts and forget them until the next time you need a search engine to “remember” them
Versus using a search engine as if it were a really big card catalogue, to look up resources you want to add to your research
Like I have a huge database on my computer of files and PDFs that I looked up using search engines
I’m not sure how else I would have acquired them
That’s how I wrote my book
@dalias @sawaba @GossiTheDog
All that computer stuff may not have expanded my memory
If I had huge paper filing cabinets it would last longer if the digital apocalypse happens
But I assume people rely on LLMs not just for searching but for writing papers and such
That seems like it would be terrible for learning
If I were a rich parent at an #AI company, I would send my kids to a school that didn’t allow phones or computers with this stuff on it