A new article by Chloe Patton shows how debates about #OpenScience often slip into absurdity – like demanding #replication from the #Humanities. You can’t replicate history, culture, or interpretation the way you replicate a physics experiment. It’s a different kind of knowledge.

 https://doi.org/10.1093/reseval/rvaf052

Forcing STEM-style standards onto the humanities doesn’t improve #science – it just adds bureaucracy and limits academic freedom.

#Reproducibility #ResearchEvaluation #Replicability

Today at my alma mater, I spoke about how research evaluation is quietly shifting from citations to ChatGPT-style predictions.

👉 https://doi.org/10.13140/RG.2.2.30585.12642

AI can already “detect quality” from text alone, and sometimes performs better than classic metrics. But it doesn’t evaluate science: it rewards what sounds like good science. We may be heading from “publish or perish” to the new absurdity: “write ChatGPT-friendly or perish.”

#AI #ChatGPT #ResearchEvaluation #Scientometrics #LLM #OpenScience

The recent debate in JoI highlights a key issue often ignored in research evaluation – the impact of document types on citation indicators:

 https://doi.org/10.1016/j.joi.2025.101738

When all publication types are counted, normalized metrics become inconsistent and misleading. But once we restrict the analysis to articles and reviews, correlations rise sharply, and results become robust and reproducible.
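
A minimal sketch of the mechanism, with made-up numbers and an MNCS-style indicator rather than the paper's actual data or code: editorials and letters are rarely cited, so leaving them in the reference set drags the citation baseline down and inflates the normalized scores of ordinary articles.

```python
from collections import defaultdict

# Hypothetical world reference set: (field, year, doc_type, citations)
world = [
    ("physics", 2020, "article", 12),
    ("physics", 2020, "article", 8),
    ("physics", 2020, "review", 40),
    ("physics", 2020, "editorial", 0),  # rarely cited
    ("physics", 2020, "letter", 1),
    ("physics", 2020, "letter", 0),
]

# Papers of the unit being evaluated (articles only)
unit = [("physics", 2020, "article", 10), ("physics", 2020, "article", 14)]

def baselines(reference, allowed_types=None):
    """Mean citations per (field, year) cell, optionally restricted by document type."""
    if allowed_types is not None:
        reference = [r for r in reference if r[2] in allowed_types]
    cells = defaultdict(list)
    for field, year, _, cites in reference:
        cells[(field, year)].append(cites)
    return {cell: sum(v) / len(v) for cell, v in cells.items()}

def mncs(papers, base):
    """Mean normalized citation score: average of citations / field-year baseline."""
    return sum(c / base[(f, y)] for f, y, _, c in papers) / len(papers)

print(round(mncs(unit, baselines(world)), 2))                         # 1.18: baseline diluted by editorials/letters, score inflated
print(round(mncs(unit, baselines(world, {"article", "review"})), 2))  # 0.6: articles and reviews only, comparable baseline
```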

#ResearchEvaluation #Bibliometrics #SciencePolicy #Ukraine #Metrics

8/8 📚 Read the full open-access study: "The cultural impact of the impact agenda in Australia, UK and USA" in Research Evaluation. Time to rethink how we measure and support meaningful research contributions! 🌍 #OpenScience #ResearchEvaluation
9/9
6. Real impact: In case studies, the h-index ranked a 2-paper author with 31K citations (1000+ co-authors each) the same as a 7-paper author with 446 citations (small teams). SBCI properly distinguished their contributions. #ResearchEvaluation #FairMetrics
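
For context on how such a tie can happen: the h-index is capped by an author's paper count and ignores co-author numbers entirely. A minimal sketch with hypothetical per-paper citation splits (the real case-study data and the SBCI formula are not reproduced here):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Illustrative splits consistent with the totals mentioned above
author_a = [20_000, 11_000]         # 2 papers, 31K citations, 1000+ co-authors each
author_b = [440, 3, 1, 1, 1, 0, 0]  # 7 papers, 446 citations, small teams

print(h_index(author_a))  # 2 (capped by paper count; co-authorship is ignored)
print(h_index(author_b))  # 2 (identical score despite a very different profile)
```
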
7/8

I currently have about a dozen papers under review. Now imagine: a drone hits my window — and who will keep emailing editors and reviewers then? 😅

Stewart Manley published a brilliant idea in #ResearchEvaluation: the “exclusive option”. Authors could submit to multiple journals at once, and interested editors would request the exclusive right to review.

 https://doi.org/10.1093/reseval/rvaf027

No duplicated #peerreview. No endless delays. This could shake up academic publishing!

#AuthorRights #OpenScience

Journal Citation Reports 2025 released

Clarivate has launched the new edition of Journal Citation Reports with updated data for scientific journal evaluation and quartile verification by thematic ...

Honored to receive an Award of Appreciation from the Ministry of Education and Science of Ukraine for my contribution to the evaluation of research projects. Proud to stand with Ukrainian science.
#UkraineScience #ResearchEvaluation #ScienceForUkraine #OpenScience #PeerReview #DistributedPeerReview

📢 New blog post! The Evaluation and Culture focal area at CWTS reflects on two years of work toward fairer research evaluation, inclusive cultures, and better scholarly communication.

Read here 👉 https://www.leidenmadtrics.nl/articles/setting-the-course-our-first-two-years-in-the-focal-area-evaluation-culture

#researchculture #scholarlycommunication #researchevaluation

Science of science — Citation models and research evaluation – InfoDoc MicroVeille