❗Update on Plagiarism and Falsification in Scientific Publications

Today I officially submitted retraction requests for the articles to the editorial boards of the journals "Land Reclamation and Water Management" and "Bulletin of NUWM".

❗Key facts of misconduct:
🔹 Plagiarism: Use of my original relief map (2015) without authorization or citation.
🔹 Falsification: Intentional removal (retouching) of the author's stamp, bearing coordinates and a timestamp, from site photographs.
🔹 Duplication: Publication of identical material in 2021 and 2024.

Next steps:
If the articles are quietly removed without an official retraction notice, complaints will be forwarded directly to @crossref and to the international members of the editorial boards.

#OpenScience #ResearchMisconduct #AcademicMastodon #ScientificIntegrity #Hydrology #Geoscience #GIS #FediScience #Plagiarism #SvystunovaGully #COPE #Crossref #DataIntegrity #ImageManipulation #AcademicChatter #ResearchEthics #EarthScience #HigherEducation

An Italian researcher has been ordered to pay back approximately $51,000 in grants due to "massive" scientific misconduct in two of her published books.

https://www.plagiarismtoday.com/2026/05/13/researcher-ordered-to-repay-grants-over-alleged-plagiarism/

#Plagiarism #AcademicIntegrity #ResearchIntegrity #ScientificIntegrity

Researcher Ordered to Repay Grants Over Alleged Plagiarism

"We propose a structured framework to help authors and journal editors and editorial offices distinguish between acceptable and unacceptable uses of generative AI in scientific publications. To operationalize this, we introduce a novel online reporting tool that guides authors in documenting AI use and generates a standardized, citable disclosure statement to ensure transparency and accountability."

#ai
#ScientificIntegrity

https://link.springer.com/article/10.1186/s41073-026-00212-3

A call for clarity: a unified checklist for reporting use of large language models in writing scientific manuscripts - Research Integrity and Peer Review

The rapid integration of generative artificial intelligence (Gen AI) into academic writing has outpaced the establishment of consistent norms for responsible and transparent disclosure. Leading organizations including the International Committee of Medical Journal Editors (ICMJE), the Committee on Publication Ethics (COPE), the World Association of Medical Editors (WAME), and the European Commission, Directorate-General for Research and Innovation have issued guidance affirming that AI tools cannot be listed as authors and must be transparently disclosed. However, what remains missing is both a cross-journal consensus and guidance for authors on what constitutes acceptable versus unacceptable use of Gen AI. Furthermore, as authors may employ Gen AI at multiple, distinct stages of manuscript preparation, they currently have no standardized or granular method to report this varied use in sufficient detail. This ambiguity creates a critical gap between high-level disclosure principles and practical implementation, threatening not necessarily the integrity of the underlying research itself, but the reader's and editor's ability to objectively assess the reliability and provenance of reported findings.

This paper responds to that gap by proposing a structured, domain-based framework for reporting Gen AI use in scholarly manuscripts. Drawing on a synthesis of evolving editorial statements and guidelines, we outline three domains in which Gen AI is commonly employed: conceptual contributions, linguistic assistance, and research assistance. For each domain, we distinguish uses that are generally acceptable from those that raise ethical or integrity concerns, providing examples to guide authors, reviewers, and journal editors and editorial staff.

To operationalize this framework, we introduce a prototype of an online Gen AI use disclosure form that guides authors through documenting their use of Gen AI across the three domains. The tool automatically generates a standardized disclosure statement and assigns a unique, citable reference number. This reference number links to a persistent, publicly accessible summary of the declared Gen AI use, creating a transparent and auditable record. This system is proposed as a 'living' platform, designed to evolve through consensus among journal editors and editorial staff, authors, and research integrity organizations, functioning similarly to other reporting guidelines hosted by the EQUATOR Network.

This system moves beyond ad hoc, narrative statements to establish a proactive and standardized disclosure process around the use of Gen AI in scholarly publishing. By embedding transparency, human accountability, and traceability directly into the publication workflow, our approach complements existing frameworks for authorship and conflict of interest. Like conflict-of-interest disclosures, AI use statements surface information that allows readers to contextualize potential risks and judge credibility for themselves. Ultimately, this work advances a practical model to strengthen trust between authors, journal editors and editorial staff, and readers, aligning the promise of generative AI with the enduring principles of research integrity.
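
A minimal sketch of what such a disclosure record could look like, based only on the abstract's description (three domains of use, an auto-generated statement, and a unique citable reference number). The field names, statement wording, and `GAI-…` ID format below are invented for illustration, not taken from the authors' prototype:

```python
from dataclasses import dataclass, field
from uuid import uuid4

# The three domains come from the paper; everything else here is assumed.
DOMAINS = ("conceptual contributions", "linguistic assistance", "research assistance")

@dataclass
class GenAIDisclosure:
    tool: str                                  # model/product used by the authors
    uses: dict = field(default_factory=dict)   # domain -> short description of use
    # Hypothetical ID format; the paper only says the number is unique and citable.
    disclosure_id: str = field(default_factory=lambda: f"GAI-{uuid4().hex[:8]}")

    def statement(self) -> str:
        """Render a standardized, citable disclosure statement."""
        parts = [f"{domain}: {desc}" for domain, desc in self.uses.items()
                 if domain in DOMAINS]
        if not parts:
            return f"No generative AI was used (disclosure {self.disclosure_id})."
        return (f"Generative AI ({self.tool}) was used for "
                + "; ".join(parts)
                + f". Citable disclosure ID: {self.disclosure_id}.")

# Example: disclosing language editing only.
d = GenAIDisclosure(tool="a general-purpose LLM",
                    uses={"linguistic assistance": "grammar and style editing"})
print(d.statement())
```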

AI-generated reference errors are increasingly entering scientific papers, with tens of thousands of 2025 publications potentially affected. The issue is shifting from simple citation mistakes to fully fabricated sources.

🌐 https://www.nature.com/articles/d41586-026-00969-z

#ArtificialIntelligence #ScientificIntegrity #ResearchPublishing #PeerReview #OpenScience

Hallucinated citations are polluting the scientific literature. What can be done?

Tens of thousands of publications from 2025 might include invalid references generated by AI, a Nature analysis suggests.
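
One partial, automatable defence is screening reference lists for DOIs that were never registered. Below is a minimal sketch using the public Crossref REST API; treating a 404 as "fabricated" is a heuristic of mine, and this check cannot catch a hallucinated citation that reuses a genuine DOI:

```python
import requests

def doi_is_registered(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref resolves this DOI.

    Heuristic: a 404 from Crossref suggests the DOI was never registered
    (consistent with fabrication). Caveats: it cannot flag a hallucinated
    citation that borrows a real DOI, and DOIs registered elsewhere
    (e.g. DataCite) may also 404 here despite being genuine.
    """
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=timeout)
    return resp.status_code == 200

# Example: screen a manuscript's reference DOIs before submission.
# The second DOI is deliberately fake.
for doi in ("10.1038/nature12373", "10.1234/not.a.real.doi"):
    status = "registered" if doi_is_registered(doi) else "NOT FOUND"
    print(f"{doi} -> {status}")
```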

Haven't seen anyone on Fedi discussing the #NASEM #ScientificIntegrity event tomorrow and Friday.

If that sounds interesting to you, I beg you to look at this first: https://sciencebasedmedicine.org/nocensorship-2/
"An Open Letter to Professor Katy Milkman: Don’t Censor John Ioannidis, Jay Bhattacharya, and Emily Oster. Amplify Their Voices. It’s vital that your conference attendees know the speakers’ past credibility to judge their current credibility. All you have to do is be honest."

The agenda and livestream are here:
https://www.nationalacademies.org/projects/DBASSE-BBCSS-25-02/event/46519
And I have no doubt that some of the sessions will be very good.

The Medical Evidence Project, a venture of The Center for #ScientificIntegrity, aims to reduce harm to patients & improve outcomes by finding & publicizing serious errors in the medical literature. Under the directorship of James Heathers, PhD, the Medical Evidence Project uses forensic meta-analytical techniques to detect & then shine light on errors arising from low-quality science & fraudulent work in areas that involve large numbers of patients.

https://medicalevidenceproject.org/
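
The page doesn't specify which forensic meta-analytical techniques the project uses. One well-known consistency check that Heathers co-developed (with Nick Brown) is the GRIM test: the mean of n integer-valued observations must equal k/n for some integer sum k, so some reported means are arithmetically impossible. A minimal sketch, my own illustration rather than the project's code:

```python
import math

def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    """GRIM test: can a mean reported to `decimals` places arise from
    n integer-valued observations? The sum of n integers is an integer,
    so the true mean must be k/n for some integer k; we test the integer
    sums nearest the implied total."""
    target = round(mean, decimals)
    for k in (math.floor(mean * n), math.ceil(mean * n)):
        if round(k / n, decimals) == target:
            return True
    return False

# A mean of 5.19 from n = 28 integer responses is impossible: the nearest
# achievable means are 145/28 = 5.18 and 146/28 = 5.21 (to two decimals).
print(grim_consistent(5.19, 28))  # False -> internally inconsistent
print(grim_consistent(5.21, 28))  # True  -> arithmetically possible
```

A GRIM failure doesn't prove misconduct; it only shows that the reported mean, sample size, and rounding can't all be correct at once.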

RE: https://social.sciences.re/@tito/116176928282506850

via @sophiehuiberts

This is a punchy take on the state of science as an industry.

The specific quote below shows what most scientists don't realise about three things: the gouging by "reputable" commercial publishers, the "predation" by scam publishers, and the fraud of the shadiest scientists.

These all lie on a continuum.

#scientificintegrity #scientificpublishing #diamondOA #openaccess

Am I the only one who considers that using this tool (an LLM that "summarizes" paywalled scientific papers to which you don't have access) basically amounts to scientific fraud, ignoring the basic rule of scientific integrity that you don't cite papers you haven't read yourself?

https://bsky.brid.gy/r/https://bsky.app/profile/did:plc:rl2szulxujlgdcmx4avx7jyn/post/3mfd2i25c2c2g

https://www.science.org/content/article/journal-giant-elsevier-unveiled-ai-tool-scans-millions-paywalled-papers-it-worth-it

@benpatrickwill.bsky.social

#science #academics #scientificintegrity #scientificpublishing

Ben Williamson (@benpatrickwill.bsky.social)

The inevitable next stage of academic publishers profiting from academics' work is here - scraping it for AI then charging subscriptions for access to the AI summaries, and then again for the citations. Academic content assetization as we called it in a recent paper. https://www.science.org/content/article/journal-giant-elsevier-unveiled-ai-tool-scans-millions-paywalled-papers-it-worth-it

Science Is Drowning in AI Slop

Peer review has met its match.

The Atlantic