We evaluate science mostly through papers. But researchers report that up to 75% of project effort is data work — collecting, cleaning, documenting, and preparing datasets. A reminder that research outputs ≠ research work.

New paper in Research Evaluation: https://doi.org/10.1093/reseval/rvag008

#ResponsibleMetrics #OpenScience #DataCitation #ResearchEvaluation

Most research evaluation still rewards papers, not the work that makes them possible. Yet researchers say up to 75% of a project can be data work: collecting, cleaning, curating, documenting.

https://doi.org/10.1093/reseval/rvag008

Maybe it's time to stop pretending that publications alone represent research.

#OpenScience #ResearchEvaluation #DataCitation #ResponsibleMetrics #Scientometrics

New paper in Research Evaluation explores how researchers actually cite data. Key insight: data citations are far more complex than simple indicators of data reuse.

https://doi.org/10.1093/reseval/rvag008

They reflect scientific practice, community norms, attribution, and even reputation-building. A timely reminder: metrics alone cannot capture the real value of data work.

#OpenScience #DataCitation #ResearchEvaluation #ResponsibleMetrics #Scientometrics

Back to the roots: reimagining scientific evaluation of research without peer review – InfoDoc MicroVeille

From ‘research impact’ to ‘research value’: a new approach to support research for societal benefit – InfoDoc MicroVeille

"Many of the loudest Open Science advocates are deeply embedded in the very systems they critique such as traditional publishing, prestige-driven academia and grant-dependent research cultures. They speak the language of reform while continuing to “play the game” remarkably well. Researchers who sit on advisory boards talk about preprints but then celebrate publishing their latest Nature paper"

https://www.themodernpeer.com/people-the-problem-in-open-science/

#OpenScience #OpenData #ScienceReform #Metascience #ResearchEvaluation #UniversityRankings #PublishOrPerish

For an ethics of artificial intelligence in research evaluation – InfoDoc MicroVeille

Altmetrics in the evaluation of scholarly impact: a systematic and critical literature review – InfoDoc MicroVeille

A new article by Chloe Patton shows how debates about #OpenScience often slip into absurdity – like demanding #replication from the #Humanities. You can’t replicate history, culture, or interpretation the way you replicate a physics experiment. It’s a different kind of knowledge.

https://doi.org/10.1093/reseval/rvaf052

Forcing STEM-style standards onto the humanities doesn’t improve #science – it just adds bureaucracy and limits academic freedom.

#Reproducibility #ResearchEvaluation #Replicability

Today at my alma mater, I spoke about how research evaluation is quietly shifting from citations to ChatGPT-style predictions.

👉 https://doi.org/10.13140/RG.2.2.30585.12642

AI can already “detect quality” from text alone, and sometimes outperforms classic metrics. But it doesn’t evaluate science: it rewards what sounds like good science. We may be heading from “publish or perish” to a new absurdity: “write ChatGPT-friendly or perish.”

#AI #ChatGPT #ResearchEvaluation #Scientometrics #LLM #OpenScience