I am begging AI researchers studying human impact to get much better at methodology, fast, so I don't keep reading halfway through these papers only to find some ridiculous experiment design that throws the conclusions into question.
I've been burned so many times; I've learned my lesson. You really need to read each of these papers carefully if you want to understand what the researchers are actually concluding. Reading a news article, or worse, just the headline, is at best no information and at worst disinformation.

The paper in question today is one from an Ars article that I won't link, to avoid feeding the hype.

But reading this thing is a journey. Between inventing a new classification of cognition and an entirely abstract experiment design for the "Brain only" and "AI Use" control/experimental groups, the conclusions can't be taken seriously. They feel "truthy," but that's all they can be.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646

@mttaggart ugh thanks for saving me some time

@mttaggart I mean... the base assumption in the tri-system seems... unlikely to hold imo. Just from the abstract: "System 3 [AI use] can supplement or supplant internal processes, introducing novel cognitive pathways."

That seems like a very generous interpretation of LLM use, that places it in a special category outside of other tool use, which is already covered by systems 1 and 2...

@nielsa Yeah I didn't buy offloading as a novel pathway on spec.

@mttaggart I didn't read the paper, but the moment the Ars article mentioned "fluid IQ" is when I thought "oh no, this is gonna be hogwash huh?".

Thanks for reading it and confirming 😮‍💨

@mttaggart the study should have included System 4 "phone a friend" and System 5 "ask the audience"

@mttaggart it is very difficult to rise above the hype and blatant disinformation on the subject. Have you listened to this podcast?

The authors are great.

https://www.dair-institute.org/maiht3k/

[Link preview: The Mystery AI Hype Theater 3000 Podcast, a biweekly podcast from DAIR (Distributed AI Research Institute) that deflates AI hype and draws attention to the real harms of the automation technologies called "artificial intelligence".]
@wtrmt Yep, I'm a DAIR superfan.