AI Use at Work Is Causing "Brain Fry," Researchers Find, Especially Among High Performers
Interesting article.
I would share this with my colleagues on our ‘AI Discussions’ channel. But I know what the result will be. “Those people just aren’t using the agents correctly”, “they need to provide the agents with moar context!!1!”, “this article is bad because I don’t like what it says”, “those respondents are just lazy or stupid”.
Personally, I’ve noticed this kind of mental exhaustion myself. I’ve tried leaning more heavily into AI usage because my employer encourages it. But it’s usually so damn frustrating.
I’ve found even the better/cutting edge LLMs struggle with basic troubleshooting, even when you provide them with solid context and try to keep the scope limited. Half the time they do great, but the other half they fail pretty spectacularly, and I end up wasting time trying to police/hand-hold them.
And I can’t even rely on these LLMs to reliably perform more menial tasks like formatting CSV data into JSON. They usually just stop the conversion after some arbitrary point, or they fuck up the structure of the output. Again, no matter how much context or detail I provide them.
These are all things some of my colleagues have found as well. Meanwhile, I’m also seeing other people become overly reliant on LLMs/agents, and accept whatever slop they produce as gospel while claiming it as their own work.
And that’s not even covering the knowledge/skill atrophy that I’ve witnessed. A lot of people learn and hone skills through repetition. But overuse of AI kills that opportunity, while offering unreliable immediate results.
I can’t find the article, but this was the paper: arxiv.org/abs/2602.11988.
I just wanted to point out that the myriad “best practice” articles and information on AI are not very rigorous. So this isn’t even necessarily an argument against AI (the paper certainly isn’t). Even then it got pushback.

A widespread practice in software development is to tailor coding agents to repositories using context files, such as AGENTS.md, by either manually or automatically generating them. Although this practice is strongly encouraged by agent developers, there is currently no rigorous investigation into whether such context files are actually effective for real-world tasks. In this work, we study this question and evaluate coding agents' task completion performance in two complementary settings: established SWE-bench tasks from popular repositories, with LLM-generated context files following agent-developer recommendations, and a novel collection of issues from repositories containing developer-committed context files. Across multiple coding agents and LLMs, we find that context files tend to reduce task success rates compared to providing no repository context, while also increasing inference cost by over 20%. Behaviorally, both LLM-generated and developer-provided context files encourage broader exploration (e.g., more thorough testing and file traversal), and coding agents tend to respect their instructions. Ultimately, we conclude that unnecessary requirements from context files make tasks harder, and human-written context files should describe only minimal requirements.
And I can’t even rely on these LLMs to reliably perform more menial tasks like formatting CSV data into JSON. They usually just stop the conversion after some arbitrary point, or they fuck up the structure of the output. Again, no matter how much context or detail I provide them.
Use them to produce a script that converts CSV into JSON instead of having the LLM do it directly. More transparent, reliable, and resource-efficient.
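A minimal sketch of that approach, assuming the CSV has a header row (the filenames here are placeholders):

```python
import csv
import json


def csv_to_json(csv_path: str, json_path: str) -> None:
    """Convert a CSV file with a header row into a JSON array of objects."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        # DictReader keys each row by the header, so every CSV row
        # becomes one JSON object; nothing is silently truncated.
        rows = list(csv.DictReader(f))
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(rows, f, indent=2)


# usage (hypothetical filenames): csv_to_json("data.csv", "data.json")
```

Once the script exists, the conversion is deterministic and can be re-run on any file size, which is exactly what direct LLM output can’t guarantee.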
It’s a symptom of the times. Saw an ad earlier this week: “Tired, stressed? This will rob your body of important nutrients — get this vitamin pill to replenish them!” Instead of, you know, reducing the stress.
Somehow, maybe through the societal focus on individualism, we’ve gotten to this point where it’s all the individual’s fault. It’s all about doing whatever it takes to get ahead, screw your body and peers; if you can’t take it, you’re too weak and a loser. But we’ll sell you products to cope! And people internalize this, are proud of working more efficiently at the cost of their health. That’s the really sad part: many actually start wanting this.
The system actively works against people and tries to train them to yearn for more of the same.
Colloquially, it’s the rat race. There’s nothing inherently wrong with striving nor sacrifice, but we must ask: to what end? At what cost? To what success if it costs you your soul? It better be worth it.
Social focus on individualism sounds like a dog’s focus on chocolate.
“My thinking wasn’t broken, just noisy — like mental static,” the senior manager continued. “What finally snapped me out of it was realizing I was working harder to manage the tools than to actually solve the problem.”
Omg constantly fact checking and tweaking the lying machine is actually slower than just thinking. Who would have guessed?
It’s modern javascript/typescript you’re talking about, isn’t it?
They’re an edge case IMO, and largely the issue is actually modernizing old languages. If you take something like Go or Rust as an example for a modern language, it’s actually nice because the tooling is standardized for the language. In particular, the dependency management is built into the standard tooling.
Now you can always make it more complex by doing things like including Docker in your build pipeline, and maybe even creating pipelines that automatically deploy to Kubernetes on a successful build… But all that is completely optional for like 99% of scenarios.