Taylorism is a management philosophy based on using scientific optimization to maximize labor productivity and economic efficiency.

Here's the result of making the false Taylorist assumption that the output of scientific research is scientific papers—the more, faster, and cheaper, the better.

Papers are not the output of scientific research in the way that cars are the output of automobile manufacturing.

Papers are merely a vehicle through which a portion of the output of research is shared.

We confuse the two at our peril.

The entire idea of outsourcing the scientific ecosystem to LLMs — as described below — is a concept error that I can scarcely begin to get my head around.

sakana.ai/ai-scientist/

"While there are still occasional flaws in the papers produced by this first version..."

Meanwhile, the authors note that the output itself fails to meet standards of scientific rigor, but they treat this as a minor wrinkle rather than a fundamental barrier imposed by using the wrong tool for the job.

This system literally fabricates its methods section — an act which goes beyond bad science into the realm of serious scientific misconduct. This is more than a wrinkle to be ironed out.
@ct_bergstrom this kind of “feature” reinforces the idea I have that generative AI is *about* destroying credibility as an end goal.
@griotspeak @ct_bergstrom That part about the automated (presumably "self"-evaluating) peer review process... OMG 🥴