This proposes a way of using AI agents to produce research. Ok. But this bit is a pipe dream: "And human scientists should retain authority over — and responsibility for — framing the question, validating the path and signing off on conclusions." Here's why...
/1
RE: https://bsky.app/profile/did:plc:jviud2kbpxo3lwd3do4mqepg/post/3mghvizp6x22k

As soon as you start down this road, the volume of output - not just code and logic, which they describe, but results and conclusions - immediately surpasses the human capacity to read and assess it. And the people running such a process are still driven by our current institutional incentives.
/2
They fall in love with the process, trust it too much, and start rubber-stamping the results. Some scientists already "co-author" literally more papers than they have time to read. What is this agent-driven process going to do to our ACTUALLY EXISTING SCHOLARLY COMMUNICATION SYSTEM? Destroy it.
/3
The units of output are still "papers," and these processes immediately produce more than anyone EXCEPT MACHINES can read and evaluate. The authors of this paper - if they haven't already - will be proposing agentic "peer review" and publication tomorrow when they realize that's the only option.
/4
AI agents can "do" science the way they propose, but the idea that it will be supervised and assessed by humans is a dangerous myth. It can't happen at that volume. This is why so many of the AI "declarations" I already see on papers are bullshit.
/5