We're so fucked.

"Our analysis of a selection of questionable GPT-fabricated scientific papers found in Google Scholar shows that many are about applied, often controversial topics susceptible to disinformation: the environment, health, and computing. The resulting enhanced potential for malicious manipulation of society’s evidence base, particularly in politically divisive domains, is a growing concern."

https://misinforeview.hks.harvard.edu/article/gpt-fabricated-scientific-papers-on-google-scholar-key-features-spread-and-implications-for-preempting-evidence-manipulation/

GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation | HKS Misinformation Review

Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research.

@ct_bergstrom It's poisoning the well. Misinformation and ignorance are great for creating an underclass to exploit, one that isn't equipped to argue back.

@nini @ct_bergstrom And that bit about reaching a point where the aim is no longer to twist or replace the truth, but to destroy the notion of truth altogether.

Drowning all signal in a total wave of noise invites everyone who fears, or is overwhelmed by, the responsibility of decision-making to amputate their higher mental functions, reason, and values, granting themselves carte blanche to run with whatever is most gratifying, easiest, most comfortable. To become livestock.