We're so fucked.

"Our analysis of a selection of questionable GPT-fabricated scientific papers found in Google Scholar shows that many are about applied, often controversial topics susceptible to disinformation: the environment, health, and computing. The resulting enhanced potential for malicious manipulation of society’s evidence base, particularly in politically divisive domains, is a growing concern."

https://misinforeview.hks.harvard.edu/article/gpt-fabricated-scientific-papers-on-google-scholar-key-features-spread-and-implications-for-preempting-evidence-manipulation/

GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation | HKS Misinformation Review

Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research.

Misinformation Review
@ct_bergstrom and that is just the beginning. Have you seen this? https://sakana.ai/ai-scientist/ (link to arXiv is also in the post)
Sakana AI

The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery

Carl T. Bergstrom (@ct_bergstrom@fediscience.org)

[Attached: 1 image] Taylorism is a management philosophy based on using scientific optimization to maximize labor productivity and economic efficiency. Here's the result of making the false Taylorist assumption that the output of scientific research is scientific papers—the more, faster, and cheaper, the better.

FediScience.org