1/

In 2017, Adam Shapiro noted [1] that "the ability to discover [...] is inseparable from the social, economic, and political circumstances within which scientists work. [...]
If scientists see themselves as fighting a battle against ignorance and #denial, they should know that those movements also have a history. [...] #science and objectivity can have a complex political #history, and [...] the discovery of facts can have a cultural and social basis—and “alternative facts” can still be lies"

2/

This tension between #research and "ignorance and denial" reminds me of a concept that B. Brecht has Galileo voice in his play "Leben des Galilei" [2]:

"Truth is the child of time, not of #authority. Our ignorance is infinite, so let us diminish it by a fraction. [...]

One of the chief causes of poverty in science is usually imaginary wealth. The aim of #science is not to open a door to infinite wisdom, but to set a limit to infinite error."

3/

A commentary in Science noted [3] how "at the heart of multiple #global #crises" lies the "study of #climate processes and patterns and the role of human activities in these phenomena"

"The massive attack on the #science and the scientists behind vaccines, pathogen transmission, and public #health during the #COVID19 #pandemic and beyond is well documented, as are attacks on basic science #education and the practice of science" [3]

4/

The commentary continues by noting how even basic data (and the methods used to derive, analyse, and summarise them, I'd add) may be manipulated, e.g. by pressure towards desired outcomes:

"Even in the arena of #BiodiversityConservation, there is growing politicization of the #data and political targeting of the scientists producing it. [...] Core #research on #health, #climate, human biology, and biodiversity is being undermined by private foundations, governments, and anti-science ideologues" [3]

5/

#References

[1] Shapiro, A.R., 2017. News flash: science has always been political. In: Macroscope, American Scientist. Sigma Xi, The Scientific Research Honor Society, p. 3906+. https://www.americanscientist.org/node/3906

[2] Brecht, B., The Life of Galileo (orig. "Leben des Galilei")
(mentioned excerpts: e.g. see
- https://archive.org/details/LifeOfGalileo-BertoltBrecth/page/n33/mode/2up
- https://archive.org/details/LifeOfGalileo-BertoltBrecth/page/n55/mode/2up )

[3] Fuentes, A., 2024. Scientists as political advocates. Science 386 (6724), eadt7194+. https://doi.org/10.1126/science.adt7194

#DOI
@tryingbiology

6/

Another trend with the potential for a massive attack on #science and its role in policy and society:

@juttahaider et al. [4] build on known points:

- "the ability to determine the value and status of scientific publications for lay people is at stake when misleading articles are passed off as reputable"

- scientific publication indexing & dedicated search engines "can be and [have] been exploited for manipulating the evidence base for politically charged issues and to fuel #conspiracy narratives"

7/

The authors highlight [4] how "questionable and potentially manipulative #GPT-fabricated papers permeate the #research infrastructure and are likely to become a widespread phenomenon."

Their findings appear to "underline that the risk of #fake scientific papers being used to maliciously manipulate evidence [...] must be taken seriously.

Manipulation may involve [...] explicit scientific claims, or the concealment of errors in studies so that they are difficult to detect in #PeerReview"

8/

On the use of #FakeScience to maliciously manipulate evidence, and on "information disorders" vs society and policy:
"the mere possibility of these things happening is a significant risk in its own right that can be strategically exploited and will have ramifications for #trust in and perception of #science.

#Society’s methods of evaluating sources and the foundations of media and information #literacy are under threat and public trust in science is at risk of further erosion"

9/

The authors [4] define "the strategic and coordinated malicious manipulation of society’s evidence base" as "evidence hacking". I'd prefer the term "#EvidenceCracking" or "#EvidenceSubversion", as #hacking ("Playfully doing something difficult, whether useful or not" [5]) is a very misleading term for this.

Instead, it might recall C. Grimsley's idea of "pseudo-hacking" as radical denaturing [6]: practices "which superficially resemble hacking but lack the necessary sensitivity to social context"

10/

The authors in [4] on this threat for #science: "It is important not to present this as a technical problem that exists only because of #AI text generators but to relate it to the wider concerns in which it is embedded" e.g.

- "a largely dysfunctional scholarly #publishing system"

- "academia’s “#PublishOrPerish” paradigm"

- "Google’s near-monopoly"

- "ideological battles over the control of #information and ultimately #knowledge" [4]

11/

#References

[4] Haider, J., et al., 2024. GPT-fabricated scientific papers on Google Scholar: key features, spread, and implications for preempting evidence manipulation. Harvard Kennedy School Misinformation Review 5 (5). https://doi.org/10.37016/mr-2020-156

[5] Stallman, R.M., 2002. On Hacking. https://www.stallman.org/articles/on-hacking.html

[6] Grimsley, C., 2022. Contextualizing artificial intelligence: the history, values, and epistemology of technology in the philosophy of science. https://doi.org/10.13023/ETD.2022.199

#DOI


12/

Examples of paths for #FakeScience to manipulate evidence (potentially, in future, even in a deliberately malicious way): their impact on #SystematicReview studies [7]

There is "a growing number of systematic review authors who have lost faith in the evidence base they depend on".

"The size of the problem is not clear, but a manuscript posted to the Center for Open Science’s OSF preprint server in September suggests up to one in seven published papers are fabricated or falsified"

13/

During an "analysis of almost 600 papers", Otto Kalliokoski "found that 19 % of them had images with hallmarks of fakery, he reported in a preprint" [7]
that he authored with J.P. Berrío [8]:

"The sheer prevalence of problematic studies, and the fact that we could not find a simple pattern for identifying them, undermines the validity of systematic reviews within our #research field. We suspect that this is symptomatic of a broader problem that needs immediate addressing"

#ResearchIntegrity

14/

#References

[7] Else, H., 2024. ‘Systematic reviews’ that aim to extract broad conclusions from many studies are in peril. Science 386 (6725), 955. https://doi.org/10.1126/science.zpnivp6

[8] Berrío, J.P., Kalliokoski, O., 2024. Fraudulent studies are undermining the reliability of systematic reviews – A study of the prevalence of problematic images in preclinical studies of depression. bioRxiv 580196+. https://doi.org/10.1101/2024.02.13.580196

#DOI #science #ScienceEthics #epistemology #bias #ResearchIntegrity

15/

A study by S. Westwood [9] shows "that reasoning-based LLMs can complete surveys with plausible responses and can generate results that would #bias measures of public opinion. They can mimic human personas, evade current detection methods, and be trivially programmed to systematically bias online #survey outcomes. The era of having to only deal with crude #bots and inattentive humans is over; the threat is now sophisticated, scalable, and potentially existential" for surveys on public opinion

16/

The study notes [9]:

"The immediate consequence is that the vast majority of our standard tools for data quality are now insufficient [...] For those who study and rely on #PublicOpinion, the stakes are far higher. The ease with which these synthetic respondents can be engineered to respond with plausible but biased opinion—even with prompts written in a foreign language—turns public polling from a tool for democratic accountability into a potential vector for #InformationWarfare"

17/

Press release of the study [9] (https://web.archive.org/web/20251119175519/https://www.eurekalert.org/news-releases/1106172 ):

In 43,000 tests, the "tool passed 99.8 % of attention checks designed to detect automated responses, made zero errors on logic puzzles, and successfully concealed its nonhuman nature"

Implications far beyond election polling, but e.g.
"When programmed to favor either Democrats or Republicans, presidential approval ratings swung from 34 % to either 98 % or 0 %. Generic ballot support went from 38 % Republican to either 97 % or 1 %"


18/

The study [9] warns: "reasoning bots introduce nonrandom, #SystematicBias [...] Unlike random noise, which often attenuates effects, synthetic demand effects can produce results that appear plausible or even compelling"

#EvidenceSubversion risk:

"insidious because hypothesis-confirming data can be more difficult for even conscientious researchers to detect"
"risk is that such data [...] could inadvertently lead to a proliferation of false positives, undermining the scientific process"

19/

On potential for malicious manipulation, the study [9] tested if "a single instruction could systematically alter responses to a sensitive #geopolitical question"

and noted: "A malicious actor could cheaply #bias public opinion measures to align with external priorities"

#References

[9] Westwood, S.J., 2025. The potential existential threat of large language models to online survey research. Proceedings of the National Academy of Sciences 122 (47), e2518075122+. https://doi.org/10.1073/pnas.2518075122