So this article is making the rounds, and it mostly focuses on the human aspect of using LLMs for research.

But there is a more immediate problem. I'm currently in a minor role in an imaging research lab, evaluating another lab's output, and I don't know how else to say this: the software artifacts Just. Don't. Work.

You've got citations built on papers where it is practically impossible to independently evaluate any of the claims.

https://ergosphere.blog/posts/the-machines-are-fine/

The machines are fine. I'm worried about us.

On AI agents, grunt work, and the part of science that isn't replaceable.

The reproducibility problem is not new, but the volume is now so overwhelming that, make no mistake, irreproducibility *is* the norm.

Research can't be trusted anymore.