Serhii Nazarovets

559 Followers
274 Following
614 Posts
Ph.D. in Social Communication. My research interests: Bibliometrics, Scientometrics, Scholarly Communication, and Library Science.
Location: Kyiv, Ukraine
Blog: https://panbibliotekar.blogspot.com
ORCID: https://orcid.org/0000-0002-5067-4498
ResearchGate: https://www.researchgate.net/profile/Serhii-Nazarovets

A recent Journal of Informetrics study shows that there is no universal number of “too many authors.”

In some fields, 3–6 authors may already be unusual.
In medicine, dozens are common.
In physics, large teams are often the norm.

https://doi.org/10.1016/j.joi.2026.101803

Yes, #hyperauthorship can signal problems (e.g., honorary authorship, metric inflation). But the key question is not “how many authors?” 👉 it is “Is this number abnormal for this field and time?”

#Scientometrics #ResearchEvaluation #Bibliometrics

New blog post on @lseimpactblog about our project: why global databases are not enough, and why national scholarly infrastructures matter more than we think.

💡 https://blogs.lse.ac.uk/impactofsocialsciences/2026/04/01/by-linking-national-scholarly-infrastructures-we-can-better-understand-the-impact-of-global-research/

The solution is not to replace global systems, but to connect national ones into a network of interoperable, open infrastructures.

#OpenScience #Bibliometrics #OpenInfrastructure #ResearchPolicy

By linking national scholarly infrastructures we can better understand the impact of global research - LSE Impact

Global scholarly information systems provide poor coverage of social science and humanities research taking place outside the anglophone world and in languages other than English. Paul Donner, Stephan Stahlschmidt, Serhii Nazarovets, Igor Cojocaru, Irina Cojocaru, Marina Razmadze and Shushanik Sargsyan highlight a range of national initiatives aimed at improving scholarly data for …


Our new 📄 in Current Alzheimer Research looks at a strange and worrying phenomenon in scientific writing: tortured phrases. Instead of “blood-brain barrier”, some papers use bizarre alternatives like “blood-brain obstruction” or “blood-cerebrum boundary”.

https://doi.org/10.2174/0115672050460224260206052444

These are not just language errors: they can signal deeper issues such as weak #PeerReview or even #PaperMills.

#OpenScience #ResearchIntegrity #Bibliometrics #Neuroethics #AcademicPublishing

Our correspondence in @Nature is out today: “AI used in warfare needs a strong ethical framework”.

📄 https://doi.org/10.1038/d41586-026-01008-7

We argue that the real question is not how effective AI is in war, but:

Who does it actually protect?
Who controls these systems?
Who is accountable for their failures?
And are decisions that risk civilian harm ever acceptable?

#AI #Ethics #WarAndAI #AIethics

A new paper by Ioannidis & Baas highlights an uncomfortable shift: most scientific publications today come from countries that are not full democracies and have limited press freedom. In 2006, about two-thirds of global science was produced in full democracies; in 2024, only 22%.

https://doi.org/10.1186/s41073-026-00190-6

Even more striking: 78% of publications come from countries with problematic press freedom, and there is no link between democracy and scientific productivity.

#SciencePolicy #Democracy

An interesting study on humour in scientific talks (531 presentations, 870 jokes):

67% of jokes failed.
Only ~9% got real laughter.

Men joke slightly more often, and native English speakers are more likely to get a laugh.

🙂 https://doi.org/10.1098/rspb.2025.3000

Oh yes... Joking in a foreign language is hard, and even in your own language, a joke only works if the audience truly gets it.

#AcademicHumor #ConferenceLife #AcademicLife #ResearchCulture #ScholarlyCommunication

We evaluate science mostly through papers. But researchers report that up to 75% of project effort is data work: collecting, cleaning, documenting, and preparing datasets. A reminder that research outputs ≠ research work.

New paper in Research Evaluation: https://doi.org/10.1093/reseval/rvag008

#ResponsibleMetrics #OpenScience #DataCitation #ResearchEvaluation

Most research evaluation still rewards papers, not the work that makes them possible. Yet researchers say up to 75% of a project can be data work: collecting, cleaning, curating, documenting.

https://doi.org/10.1093/reseval/rvag008

Maybe it's time to stop pretending that publications alone represent research.

#OpenScience #ResearchEvaluation #DataCitation #ResponsibleMetrics #Scientometrics

A new paper in Research Evaluation explores how researchers actually cite data. Key insight: data citations are far more complex than simple indicators of data reuse.

https://doi.org/10.1093/reseval/rvag008

They reflect scientific practice, community norms, attribution, and even reputation-building. A timely reminder: metrics alone cannot capture the real value of data work.

#OpenScience #DataCitation #ResearchEvaluation #ResponsibleMetrics #Scientometrics

Where do bibliometricians come from? 🤔 A new international study suggests a simple answer: mostly from academic libraries. Around 60% of people doing bibliometric work at universities are based there.

https://doi.org/10.1177/01655515261417634

The catch? Over 70% say they have never had formal training in bibliometrics. People simply grow into the role while working with databases, indicators, and research analytics.

#bibliometrics #scientometrics #researchmetrics #responsiblemetrics #openscience