I just published a review of the paper 'Pursuit of Knowledge: A Charter for Academic Renewal' by Paul Rainey and colleagues.

Paper: https://doi.org/10.5281/zenodo.17488783

Review: https://doi.org/10.5281/zenodo.17770964

Interesting paper, but the solution proposed by the authors doesn't fully convince me. The paper and my review may be of interest to @CoARAssessment.

Normally I use @prereview to publish my reviews, but in this case that wasn't possible, because the paper isn't categorized as a preprint in Zenodo.

#PublishYourReviews

Pursuit of Knowledge: A Charter for Academic Renewal

This paper identifies systemic problems in how academic research is structured, evaluated, and supported. It argues for a realignment of incentives and institutional cultures to restore trust, enable creativity, and preserve the public value of scholarship. Through a set of guiding principles and a framework for coordinated reform, it proposes a practical path forward.


I just reviewed the article 'PreprintToPaper dataset: connecting bioRxiv preprints with journal publications' by Fidan Badalova, Julian Sienkiewicz and Philipp Mayr.

Article: https://arxiv.org/abs/2510.01783v1

Review: https://prereview.org/reviews/17625492

@biorxivpreprint
@prereview
#PublishYourReviews

In addition to the interesting findings that the authors present about bibliometric reporting guidelines, the article also does a great job in showing the value of open peer review.

This study illustrates why openness should be the default in peer review!

@prereview #PublishYourReviews

I just reviewed the article 'Analysis of citation dynamics reveals that you do not receive enough recognition for your influential science' by Salsabil Arabi, Chaoqun Ni and B. Ian Hutchins.

Article: https://www.biorxiv.org/content/10.1101/2023.09.07.556750v3

Review: https://prereview.org/reviews/17335296

I like the empirical work done by the authors, but I disagree with their interpretation of the findings.

@prereview

#PublishYourReviews

I just reviewed the paper 'From Research to Impact: Assessing a Decade of CDC’s Public Health Science by Topic Area, 2014-2023' by Joy Ortega and colleagues.

Paper: https://doi.org/10.1101/2025.03.07.25323572
Review: https://prereview.org/reviews/16485942

@prereview

#PublishYourReviews

I just reviewed the paper 'An Agent-based Model of Citation Behavior' by George Chacko and colleagues.

Paper: https://arxiv.org/abs/2503.06579v1
Review: https://prereview.org/reviews/15959414

This is my first time using @prereview to publish a review. It was a very positive experience!

#PublishYourReviews

An Agent-based Model of Citation Behavior

Whether citations can be objectively and reliably used to measure productivity and scientific quality of articles and researchers can, and should, be vigorously questioned. However, citations are widely used to estimate the productivity of researchers and institutions, effectively creating a 'grubby' motivation to be well-cited. We model citation growth, and this grubby interest, using an agent-based model (ABM) of network growth. In this model, each new node (article) in a citation network is an autonomous agent that cites other nodes based on a 'citation personality' consisting of a composite bias for locality, preferential attachment, recency, and fitness. We ask whether strategic citation behavior (reference selection) by the author of a scientific article can boost subsequent citations to it. Our study suggests that fitness and, to a lesser extent, out_degree and locality effects are influential in capturing citations, which raises questions about similar effects in the real world.
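The kind of model the abstract describes can be sketched very loosely as follows. This is not the authors' actual implementation: all names, weights, and scoring choices here are illustrative assumptions, and the locality bias is omitted for brevity.

```python
import random

def run_citation_abm(n_articles=200, refs_per_article=5, seed=0):
    """Toy sketch of an agent-based citation model: each new article
    scores the existing articles by a weighted mix of preferential
    attachment (in-degree), recency, and fitness, then cites the
    top-scoring ones. Weights per article play the role of a
    'citation personality'."""
    rng = random.Random(seed)
    fitness = []      # intrinsic attractiveness of each article
    in_degree = []    # citations received so far
    for t in range(n_articles):
        fitness.append(rng.random())
        in_degree.append(0)
        if t == 0:
            continue  # the first article has nothing to cite
        # each new article draws its own random bias weights
        w_pa, w_rec, w_fit = (rng.random() for _ in range(3))
        scores = []
        for j in range(t):
            pa = in_degree[j] + 1   # preferential attachment term
            rec = 1.0 / (t - j)     # recency: newer articles score higher
            score = w_pa * pa + w_rec * rec + w_fit * fitness[j]
            scores.append((score, j))
        scores.sort(reverse=True)
        for _, j in scores[:refs_per_article]:
            in_degree[j] += 1
    return in_degree, fitness
```

Even this stripped-down version reproduces the qualitative point: articles with high fitness and high in-degree keep attracting a disproportionate share of new citations.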


I just published a review of this paper: https://ludowaltman.pubpub.org/pub/review-transformative-agreements/release/1. My review is rather short, since I don't have much to complain about this excellent paper!

#PublishYourReviews

Review of "Estimating transformative agreement impact on hybrid open access: A comparative large-scale study using Scopus, Web of Science and open metadata"


Fascinating to see the different publishing practices in European countries.

This graph from https://arxiv.org/abs/2411.06282v1 shows the number of MDPI publications of a country as a proportion of the number of publications in 'big five' journals (Elsevier, SN, Wiley, T&F, Sage) and MDPI journals.

I just published a review of the article by Leon Kopitar and colleagues: https://ludowaltman.pubpub.org/pub/review-open-access-europe/release/1

#PublishYourReviews

Two scholarly publishing cultures? Open access drives a divergence in European academic publishing practices

The current system of scholarly publishing is often criticized for being slow, expensive, and not transparent. The rise of open access publishing as part of open science tenets, promoting transparency and collaboration, together with calls for research assessment reforms, are the results of these criticisms. The emergence of new open access publishers presents a unique opportunity to empirically test how universities and countries respond to shifts in the academic publishing landscape. These new actors challenge traditional publishing models, offering faster review times and broader accessibility, which could influence strategic publishing decisions. Our findings reveal a clear division in European publishing practices, with countries clustering into two groups distinguished by the ratio of publications in new open access journals with accelerated review times versus legacy journals. This divide underscores a broader shift in academic culture, highlighting new open access publishing venues as a strategic factor influencing national and institutional publishing practices, with significant implications for research accessibility and collaboration across Europe.


I just published a review of the paper 'Citation proximus: the role of social and semantic ties in citing behaviour' by Diego Kozlowski, @ipoga, @lariviev and others.

Paper: https://arxiv.org/abs/2502.13934v1
Review: https://ludowaltman.pubpub.org/pub/review-citation-proximus/release/1

#PublishYourReviews

Citation proximus: the role of social and semantic ties in citing behaviour

Citations are a key indicator of research impact but are shaped by factors beyond intrinsic research quality, including prestige, social networks, and thematic similarity. While the Matthew Effect explains how prestige accumulates and influences citation distributions, our study contextualizes this by showing that other mechanisms also play a crucial role. Analyzing a large dataset of disambiguated authors (N=43,467) and citation linkages (N=264,436) in U.S. economics, we find that close ties in the collaboration network are the strongest predictor of citation, closely followed by thematic similarity between papers. This reinforces the idea that citations are not only a matter of prestige but mostly of social networks and intellectual proximity. Prestige remains important for understanding highly cited papers, but for the majority of citations, proximity, both social and semantic, plays a more significant role. These findings shift attention from extreme cases of highly cited research toward the broader distribution of citations, which shapes career trajectories and the production of knowledge. Recognizing the diverse factors influencing citations is critical for science policy, as this work highlights inequalities that are not based on preferential attachment, but on the role of self-citations, collaborations, and mainstream versus non-mainstream research subjects.


I enjoyed reading the paper 'Policies on Artificial Intelligence Chatbots Among Academic Publishers: A Cross-Sectional Audit' by Jeremy Y. Ng and colleagues https://doi.org/10.1101/2024.06.19.24309148.

A review of the paper is available at https://doi.org/10.21428/7ccec04a.709cd1a5.

#PublishYourReviews

Policies on Artificial Intelligence Chatbots Among Academic Publishers: A Cross-Sectional Audit

Background: Artificial intelligence (AI) chatbots are novel computer programs that can generate text or content in a natural language format. Academic publishers are adapting to the transformative role of AI chatbots in producing or facilitating scientific research. This study aimed to examine the policies established by scientific, technical, and medical academic publishers for defining and regulating the responsible use of AI chatbots by authors.

Methods: This study performed a cross-sectional audit of the publicly available policies of 163 academic publishers, indexed as members of the International Association of the Scientific, Technical, and Medical Publishers (STM). Data extraction of publicly available policies on the webpages of all STM academic publishers was performed independently in duplicate, with content analysis reviewed by a third contributor (September 2023 - December 2023). Data were categorized into policy elements, such as 'proofreading' and 'image generation'. Counts and percentages of 'yes' (i.e., permitted), 'no', and 'N/A' were established for each policy element.

Results: A total of 56/163 (34.4%) STM academic publishers had a publicly available policy guiding the authors' use of AI chatbots. No policy allowed authorship accreditation for AI chatbots (or other generative technology). Most (49/56, or 87.5%) required specific disclosure of AI chatbot use. Four policies/publishers placed a complete ban on the use of AI tools by authors.

Conclusions: Only a third of STM academic publishers had publicly available policies as of December 2023. A re-examination of all STM members in 12-18 months may uncover evolving approaches toward AI chatbot use, with more academic publishers having a policy.
