A call for clarity: a unified checklist for reporting use of large language models in writing scientific manuscripts - Research Integrity and Peer Review
The rapid integration of generative artificial intelligence (Gen AI) into academic writing has outpaced the establishment of consistent norms for responsible and transparent disclosure. Leading organizations, including the International Committee of Medical Journal Editors (ICMJE), the Committee on Publication Ethics (COPE), the World Association of Medical Editors (WAME), and the European Commission's Directorate-General for Research and Innovation, have issued guidance affirming that AI tools cannot be listed as authors and must be transparently disclosed. However, what remains missing is both a cross-journal consensus and guidance for authors on what constitutes acceptable versus unacceptable use of Gen AI. Furthermore, as authors may employ Gen AI at multiple, distinct stages of manuscript preparation, they currently have no standardized or granular method to report this varied use in sufficient detail. This ambiguity creates a critical gap between high-level disclosure principles and practical implementation, threatening not necessarily the integrity of the underlying research itself, but the reader's and editor's ability to objectively assess the reliability and provenance of reported findings.

This paper responds to that gap by proposing a structured, domain-based framework for reporting Gen AI use in scholarly manuscripts. Drawing on a synthesis of evolving editorial statements and guidelines, we outline three domains in which Gen AI is commonly employed: conceptual contributions, linguistic assistance, and research assistance. For each domain, we distinguish uses that are generally acceptable from those that raise ethical or integrity concerns, providing examples to guide authors, reviewers, and journal editors and editorial staff.

To operationalize this framework, we introduce a prototype of an online Gen AI use disclosure form that guides authors through documenting their use of Gen AI across the three domains.
The tool automatically generates a standardized disclosure statement and assigns a unique, citable reference number. This reference number links to a persistent, publicly accessible summary of the declared Gen AI use, creating a transparent and auditable record. This system is proposed as a 'living' platform, designed to evolve through consensus among journal editors and editorial staff, authors, and research integrity organizations, functioning similarly to other reporting guidelines hosted by the EQUATOR Network.

This system moves beyond ad hoc, narrative statements to establish a proactive and standardized disclosure process around the use of Gen AI in scholarly publishing. By embedding transparency, human accountability, and traceability directly into the publication workflow, our approach complements existing frameworks for authorship and conflict of interest. Like conflict-of-interest disclosures, AI use statements surface information that allows readers to contextualize potential risks and judge credibility for themselves. Ultimately, this work advances a practical model to strengthen trust between authors, journal editors and editorial staff, and readers, aligning the promise of generative AI with the enduring principles of research integrity.
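To make the proposed workflow concrete, the following minimal Python sketch illustrates one way such a form could assemble a standardized disclosure statement from per-domain declarations and derive a short, citable reference number. All names, the statement format, and the hash-based reference scheme are illustrative assumptions, not the prototype's actual implementation.

```python
import hashlib

# The three reporting domains proposed in the framework.
DOMAINS = ("conceptual contributions", "linguistic assistance", "research assistance")


def build_disclosure(declared):
    """Return (statement, reference_id) for the declared Gen AI uses.

    `declared` maps a domain name to a free-text description of use;
    domains with no declared use are reported as 'none'.
    """
    parts = []
    for domain in DOMAINS:
        parts.append(f"{domain}: {declared.get(domain, 'none')}")
    statement = "Generative AI use: " + "; ".join(parts) + "."
    # A truncated content hash stands in for the persistent, publicly
    # resolvable reference number described in the text (hypothetical
    # 'GENAI-' prefix); identical declarations yield identical IDs.
    digest = hashlib.sha256(statement.encode("utf-8")).hexdigest()
    reference_id = "GENAI-" + digest[:10].upper()
    return statement, reference_id


statement, ref = build_disclosure(
    {"linguistic assistance": "grammar and style editing of the final draft"}
)
```

In this sketch the reference number is derived deterministically from the statement's content, so the same declaration always resolves to the same record; a production platform would instead mint and register persistent identifiers.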