196 Followers
92 Following
397 Posts
Expert in peer review research and innovation
Chair of peer review workbench: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4211833
Vice President of EASE: https://ease.org.uk
Advisory board member of Peer Review Congress: https://peerreviewcongress.org/
Peer review research and innovation expert: #IEDA #ScholarlyPublishing #academia #fedi22
Worried for our kids’ future on Earth: #climateemergency #climatejustice #climatechange
Respect human dignity: #Mahsa_Amini #Woman_Life_Freedom #Iran
Hobbies: #tiffanylamps #book #music #gardening #juggling #cats

@bahar and Mario Malički have just published their structured peer review checklist in European Science Editing. It helps researchers ensure they provide thorough and constructive feedback for each section of a manuscript they review. They also suggest how journal editors and publishers can leverage the checklist throughout their peer review process.

https://ese.arphahub.com/article/137675/

#PeerReview #JournalEditing #EuropeanScienceEditing #AcademicPublishing #ScholarlyPublishing #Checklists #Elsevier

Structured peer review: implementation and checklist development

To address the low overlap between reviewer comments and the publication recommendations they make, as well as to suggest guidance on what kind of peer review report would benefit journals and editors the most, we introduced structured peer review to Elsevier journals and analyzed its effect in our June 2024 paper: https://peerj.com/articles/17514/. To further promote the implementation of the structured peer review process and help reviewers prepare thorough review reports, in this paper we present our set of structured peer review questions in a checklist format.

European Science Editing
There are “no links whatsoever between the offshore wind development activity and especially the humpback whale mortalities. None. Zero.” But oil & gas & shipping interests sure want you to think wind energy is what's killing whales https://www.scientificamerican.com/article/whales-are-dying-but-not-from-offshore-wind/
Whales Are Dying but Not from Offshore Wind

Politicians and nonprofit groups have blamed offshore wind turbines for whale deaths, but the science doesn’t support those claims—at all

Scientific American
hot off the press: https://peerj.com/articles/17514/
We analyze the impact of introducing structured peer review on reviewers' performance. This builds on an earlier experiment in which I asked editors and authors to rate the quality of peer review reports; we then independently rated each report's quality using the Review Quality Instrument, only to find little alignment between the "perception of quality" and the actual quality measured by the RQI: https://onlinelibrary.wiley.com/doi/10.1002/leap.1344
#PeerReview
Structured peer review: pilot results from 23 Elsevier journals

Background: Reviewers rarely comment on the same aspects of a manuscript, making it difficult to properly assess manuscripts’ quality and the quality of the peer review process. The goal of this pilot study was to evaluate structured peer review implementation by: 1) exploring whether and how reviewers answered structured peer review questions, 2) analysing reviewer agreement, 3) comparing that agreement to agreement before implementation of structured peer review, and 4) further enhancing the piloted set of structured peer review questions.

Methods: Structured peer review consisting of nine questions was piloted in August 2022 in 220 Elsevier journals. We randomly selected 10% of these journals across all fields and IF quartiles and included manuscripts that received two review reports in the first 2 months of the pilot, leaving us with 107 manuscripts belonging to 23 journals. Eight questions had open-ended fields, while the ninth question (on language editing) had only a yes/no option. The reviewers could also leave Comments-to-Author and Comments-to-Editor. Answers were independently analysed by two raters, using qualitative methods.

Results: Almost all the reviewers (n = 196, 92%) provided answers to all questions even though these questions were not mandatory in the system. The longest answer (Md 27 words, IQR 11 to 68) was for reporting methods with sufficient details for replicability or reproducibility. The reviewers had the highest (partial) agreement (of 72%) for assessing the flow and structure of the manuscript, and the lowest (of 53%) for assessing whether interpretation of the results was supported by data, and for assessing whether the statistical analyses were appropriate and reported in sufficient detail (52%). Two thirds of the reviewers (n = 145, 68%) filled out the Comments-to-Author section, of which 105 (49%) resembled traditional peer review reports. These reports contained a Md of 4 (IQR 3 to 5) topics covered by the structured questions. Absolute agreement regarding final recommendations (exact match of recommendation choice) was 41%, which was higher than what those journals had in the period from 2019 to 2021 (31% agreement, P = 0.0275).

Conclusions: Our preliminary results indicate that reviewers successfully adapted to the new review format, and that they covered more topics than in their traditional reports. Individual question analysis indicated the greatest disagreement regarding the interpretation of the results and the conducting and reporting of statistical analyses. While structured peer review did lead to improvement in reviewer final recommendation agreements, this was not a randomized trial, and further studies should be performed to corroborate this. Further research is also needed to determine whether structured peer review leads to greater knowledge transfer or better improvement of manuscripts.
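The headline comparison above (41% recommendation agreement during the pilot vs. 31% in 2019 to 2021, P = 0.0275) is a comparison of two proportions. A minimal sketch of such a test, using only the standard library; the counts below are hypothetical placeholders, since the abstract reports only the percentages, not the underlying denominators:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test (pooled, normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # P(|Z| > z) for standard normal
    return z, p_value

# Hypothetical counts: 44/107 pilot manuscripts with matching recommendations
# (~41%); the historical denominator (1000) is a placeholder, not the paper's data.
z, p = two_proportion_z(44, 107, 310, 1000)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```

With a real historical sample in the thousands, the same 10-point difference in agreement would yield a small p-value, consistent with the reported significance.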

PeerJ
Reviewer #2 in Pride month: "Your writing style is perhaps at times too colorful" #peerreview #PrideMonth

Update: his death greatly exaggerated? (https://www.msn.com/en-in/news/India/what-happened-to-noam-chomsky-reports-of-american-professors-death-false-says-wife/ar-BB1osWnx)

I may not have agreed with every one of his positions in later life but Noam Chomsky was one of the defining people in opening my eyes to the world as it is versus how I’d taken it to be at face value (as someone who didn’t have to question it while growing up in a relatively privileged environment).

Rest in peace, Noam. The world has lost a great mind and an even greater conscience today.

https://jacobin.com/2024/06/noam-chomsky-obituary-media-theory-elites/

#noamChomsky

MSN

To all scientific editors in my bubble: don’t leave reviewers who agreed to review behind. Don’t make a decision on a manuscript before everyone you invited has completed their work within the due date you agreed with them. If you must, send them an email or at least a templated uninvite message. It hurts to ask why I can’t see the manuscript in my assignments and get the response: “Oh, I decided to go ahead without you!” Think about the consequences. #peerreview #ethics #integrity

Call for Manuscripts for the "Journal of Systematic #Reviews" (JSR), a newly launched #OpenAccess journal

JSR is an international, peer-reviewed, bi-annual, multidisciplinary, #OA journal devoted to all aspects of the design, conduct and reporting of #SystematicReviews, #ScopingReviews, and #MetaAnalyses.

The journal’s mission is to publish original papers which contribute to the advancement of #science, and especially the #methodology of systematic reviews.

https://digitalization.site/index.php/jsr

Journal of Systematic Reviews (JSR)

Journal of Systematic Reviews (JSR) is an international open access journal devoted to theory and practice of systematic reviews and meta-analyses.

Here is the recording of our SAGER round table celebrating the launch of the Sex and Gender Equity in Research guidelines course on Researcher Academy @EASE @elsevierconnect
https://researcheracademy.elsevier.com/research-preparation/sex-gender-equity-research-sager-guidelines/unpacking-sager-guidelines
Unpacking the SAGER Guidelines: A Roundtable Discussion on Achieving Gender Equity in Research

Researcher Academy and GENDRO are excited to invite you to a roundtable discussion on Sex and Gender Equity in Research (SAGER). The SAGER Guidelines are a comprehensive procedure for reporting sex and gender information in study design, data analysis, results, and interpretations of findings. They are designed to guide authors in preparing their manuscripts and to assist editors in integrating assessment of sex and gender in all manuscripts as an integral part of the editorial process. This roundtable discussion is your opportunity to learn more about the SAGER Guidelines from frontline experts who helped produce them, including Dr. Shirin Heidari. There will also be a Q&A session, where attendees will have the chance to ask any questions they may have about the guidelines and how to apply them in their work. Our expert panel will include individuals with extensive experience in the field who have contributed significantly to developing the SAGER Guidelines. By attending this live session, you'll gain valuable insights into the importance of considering sex and gender in research and the best practices for implementing the SAGER Guidelines in your work.

Elsevier Researcher Academy
In 2.5 hours from now I will open the round table with Shirin Heidari, Agnieszka Freda, Vivienne Bachelet, Mikashmi Kohli and Simone Carter. We will talk about the SAGER guidelines and how they came about, what the role of publishers is, and how the community is taking them up further. Join us and tell us how you see them as relevant to your area of work and research.
#PeerReview #SexAndGenderEquityInResearch #SAGER #EDI https://researcheracademy.elsevier.com/research-preparation/sex-gender-equity-research-sager-guidelines/unpacking-sager-guidelines

More and more papers are written with the help of LLMs.

More and more reviews are written with the help of LLMs.

An important problem here is that AI reviewers prefer AI writers, which creates a feedback loop nudging more AI-driven paper writing and reviewing -- meaning more hollow garbage "knowledge" may proliferate in science.

https://arxiv.org/abs/2405.02150

The AI Review Lottery: Widespread AI-Assisted Peer Reviews Boost Paper Scores and Acceptance Rates

Journals and conferences worry that peer reviews assisted by artificial intelligence (AI), in particular, large language models (LLMs), may negatively influence the validity and fairness of the peer-review system, a cornerstone of modern science. In this work, we address this concern with a quasi-experimental study of the prevalence and impact of AI-assisted peer reviews in the context of the 2024 International Conference on Learning Representations (ICLR), a large and prestigious machine-learning conference. Our contributions are threefold. Firstly, we obtain a lower bound for the prevalence of AI-assisted reviews at ICLR 2024 using the GPTZero LLM detector, estimating that at least $15.8\%$ of reviews were written with AI assistance. Secondly, we estimate the impact of AI-assisted reviews on submission scores. Considering pairs of reviews with different scores assigned to the same paper, we find that in $53.4\%$ of pairs the AI-assisted review scores higher than the human review ($p = 0.002$; relative difference in probability of scoring higher: $+14.4\%$ in favor of AI-assisted reviews). Thirdly, we assess the impact of receiving an AI-assisted peer review on submission acceptance. In a matched study, submissions near the acceptance threshold that received an AI-assisted peer review were $4.9$ percentage points ($p = 0.024$) more likely to be accepted than submissions that did not. Overall, we show that AI-assisted reviews are consequential to the peer-review process and offer a discussion on future implications of current trends.
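The paired comparison in the abstract (the AI-assisted review scores higher in 53.4% of discordant pairs, $p = 0.002$) is essentially a sign test on pairs of reviews for the same paper. A minimal stdlib sketch of an exact two-sided sign test; the pair counts are hypothetical, chosen only to match the reported 53.4% share, since the abstract does not give the raw number of pairs:

```python
import math

def sign_test(wins, losses):
    """Exact two-sided binomial sign test under H0: P(win) = 0.5.
    'wins' = discordant pairs where the AI-assisted review scored higher."""
    n = wins + losses
    k = max(wins, losses)
    # One-sided tail P(X >= k) for X ~ Binomial(n, 0.5), doubled for two sides
    tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical: 1000 discordant pairs, 534 won by the AI-assisted review (53.4%)
p = sign_test(wins=534, losses=466)
print(f"two-sided p = {p:.4f}")
```

Note that the p-value depends on the number of pairs as well as the 53.4% share, so this sketch will not reproduce the paper's exact $p = 0.002$ without the true pair count.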

arXiv.org