Generative AI can and should accelerate research evaluation reform to better recognize ‘distinctly human contributions’ – InfoDoc MicroVeille

A recent Journal of Informetrics study shows there is no universal threshold for "too many authors."

In some fields, 3–6 authors may already be unusual.
In medicine, dozens are common.
In physics, large teams are often the norm.

https://doi.org/10.1016/j.joi.2026.101803

Yes, #hyperauthorship can signal problems (e.g., honorary authorship, metric inflation). But the key question is not "how many authors?" 👉 It is: is this abnormal for this field and time?

#Scientometrics #ResearchEvaluation #Bibliometrics

We evaluate science mostly through papers. But researchers report that up to 75% of project effort is data work — collecting, cleaning, documenting, and preparing datasets. A reminder that research outputs ≠ research work.

New paper in Research Evaluation: https://doi.org/10.1093/reseval/rvag008

#ResponsibleMetrics #OpenScience #DataCitation #ResearchEvaluation

Most research evaluation still rewards papers, not the work that makes them possible. Yet researchers say up to 75% of a project can be data work: collecting, cleaning, curating, documenting.

https://doi.org/10.1093/reseval/rvag008

Maybe it's time to stop pretending that publications alone represent research.

#OpenScience #ResearchEvaluation #DataCitation #ResponsibleMetrics #Scientometrics

New paper in Research Evaluation explores how researchers actually cite data. Key insight: data citations are far more complex than simple indicators of data reuse.

https://doi.org/10.1093/reseval/rvag008

They reflect scientific practice, community norms, attribution, and even reputation-building. A timely reminder: metrics alone cannot capture the real value of data work.

#OpenScience #DataCitation #ResearchEvaluation #ResponsibleMetrics #Scientometrics

Back to the roots: reimagining scientific evaluation of research without peer review – InfoDoc MicroVeille

From ‘research impact’ to ‘research value’: a new approach to support research for societal benefit – InfoDoc MicroVeille

"Many of the loudest Open Science advocates are deeply embedded in the very systems they critique, such as traditional publishing, prestige-driven academia and grant-dependent research cultures. They speak the language of reform while continuing to "play the game" remarkably well. Researchers who sit on advisory boards talk about preprints but then celebrate publishing their latest Nature paper."

https://www.themodernpeer.com/people-the-problem-in-open-science/

#OpenScience #OpenData #ScienceReform #Metascience #ResearchEvaluation #UniversityRankings #PublishOrPerish

People: the problem in Open Science?

If only we fixed the publishing system. If only we fixed the incentives, rewards and recognition. Open Science (OS) likes to position itself as a systems problem. And whilst these things do matter, decades of the same conversations and limited change are unveiling an uncomfortable truth: the biggest barrier to …

The Modern Peer
Toward an ethics of artificial intelligence in research evaluation – InfoDoc MicroVeille

Altmetrics in the evaluation of scholarly impact: a systematic and critical literature review – InfoDoc MicroVeille