6/ 📚 Read the full study:
🔗 “Open Science at the Generative AI Turn”
Published in Quantitative Science Studies (MIT Press):
👉 https://doi.org/10.1162/qss_a_00337
Let’s work together to ensure that #AI & #GenAI align with the values of #OpenScience!
1/ 🚨 NEW PAPER! “Open Science at the Generative AI Turn”
In a new study just published in Quantitative Science Studies, we explore how GenAI can both enable and challenge Open Science, and why GenAI will benefit from adopting the values of Open Science. 🧵
In our new study, based on a qualitative analysis of free-text survey responses from 121 international researchers, we examined researchers' perceptions of the social and political dimensions influencing research assessment processes.
We highlight a significant discrepancy between formal evaluation criteria and their practical application, demonstrating the "performativity of assessment criteria" and raising the question of the extent to which criteria can, or should, be transparently communicated.
Reform of research assessment, especially to avoid issues of over-quantification and empower qualitative assessment, is an increasingly hot topic.
Part of the debate concerns the tension between the rigidity and flexibility of assessment criteria. Should they be set in stone to avoid bias, or remain flexible to allow tailor-made assessments?
New Paper!
“Understanding the social and political dimensions of research(er) assessment: evaluative flexibility and hidden criteria in promotion processes at research institutes”, just published in Research Evaluation by me, Noemie Aubert Bonn and Serge Horbach.
Abstract. Debates about appropriate, fair and effective ways of assessing research and researchers have raged through the scientific community for decades,