I am begging AI researchers studying human impact to get better at methodology, fast, so I don't keep reading halfway through these papers only to hit some ridiculous experiment design that throws the conclusions into doubt.
I've been burned so many times; I've learned my lesson. You really need to read each of these things carefully if you want to understand what the researchers are concluding. Reading a news article—or worse, just the headline—is at best no information, at worst disinformation.

The paper in question today is one from an Ars article that I won't link, to avoid adding to the hype.

But reading this thing is a journey. From inventing a new classification of cognition to an entirely abstract experiment design for the "Brain only" and "AI Use" control/experimental groups, the conclusions can't be taken seriously. They feel "truthy," but that's all they can be.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646

@mttaggart I mean... the base assumption in the tri-system seems... unlikely to hold imo. Just from the abstract: "System 3 [AI use] can supplement or supplant internal processes, introducing novel cognitive pathways."

That seems like a very generous interpretation of LLM use, one that places it in a special category outside of other tool use, which is already covered by systems 1 and 2...

@nielsa Yeah, I didn't buy offloading as a novel pathway on spec either.