Taylorism is a management philosophy that applies scientific analysis of work processes to maximize labor productivity and economic efficiency.

Here's the result of making the false Taylorist assumption that the output of scientific research is scientific papers—the more, faster, and cheaper, the better.

Papers are not the output of scientific research in the way that cars are the output of automobile manufacturing.

Papers are merely a vehicle through which a portion of the output of research is shared.

We confuse the two at our peril.

The entire idea of outsourcing the scientific ecosystem to LLMs, as described below, is a conceptual error that I can scarcely begin to get my head around.

sakana.ai/ai-scientist/

"While there are still occasional flaws in the papers produced by this first version..."

Meanwhile, the authors note that the output itself fails to meet standards of scientific rigor, but treat this as a minor wrinkle rather than a fundamental barrier imposed by using the wrong tool for the job.

This system literally fabricates its methods section — an act which goes beyond bad science into the realm of serious scientific misconduct. This is more than a wrinkle to be ironed out.

Scientists: We need to slow down the publication race and produce higher quality papers at a slower rate to make the literature manageable again.

Engineers: We hear you. Now every lab in the world will be able to produce hundreds of medium-quality papers (with a few mistakes in each) every week.

I do appreciate the authors' candor in detailing failure modes.

A system that makes difficult-to-catch mistakes in implementation, fails to compare quantitative data appropriately, and fabricates entire results? Maybe I have high standards, but I don't see that as writing "medium-quality" papers.

Here's the weird Taylorism again. The system produces work at the level of an early trainee who requires substantive supervision. Judged purely as a way to produce papers, that is not a good return on the supervisory investment.

The primary output of time invested in trainee research is the development of independent scientists—not the research papers.

@ct_bergstrom Thanks a lot for making explicit many aspects of what is, to say the least, worrying about this work.

A point similar to the one you make about "developing independent scientists", and related to many points that @emilymbender frequently makes, stems from the perspective of human competence: doing research is how individual researchers *themselves* learn about the domain, about doing research, and so on. There is no way to let someone else do A and then be able to do A yourself.

@ct_bergstrom @emilymbender

As a society, then, I'd argue that we want _people_ who can do research: 1) because of their singularly human way of experiencing the world (as opposed to, e.g., how bats or AIs experience it), and 2) because humans also differ from one another, so we'd want diversity among human researchers, not just a single human researcher who can still do it.