New study: In #APC-based #OpenAccess journals of orthopedic surgery, APCs "are not proportional to and do not strongly correlate with" journal impact factors.
http://dx.doi.org/10.5435/JAAOSGlobal-D-25-00065
The University of Utrecht is canceling the #WebOfScience and offering training sessions on #OpenAlex.
https://www.uu.nl/en/news/access-to-web-of-science-will-end-on-1-january-2026
"Discontinuing Web of Science is a logical step that fits in with the UU vision on #OpenScience. Closed commercial databases, such as Web of Science, are not in line with our desire to work with open research information as much as possible. The UU signed the #BarcelonaDeclaration in 2024 to this effect. The use of Journal Impact Factors [#JIFs] is also not in line with our vision of Open Science…With the funds freed up by not renewing the licence, we will continue to invest in open source research and infrastructure."
Update with a comment.
Don't throw in the towel. First, reform research #assessment to move away from journal impact factors (#JIFs) and to pay more attention to the quality of research than the number of publications or where they published. Second, move away from #APCs. To make research #OpenAccess, favor no-APC #GreenOA and #DiamondOA over APC-based varieties.
BTW, the Budapest Open Access Initiative 20th anniversary statement makes both these recommendations. (Disclosure: I was a co-author.)
https://www.budapestopenaccessinitiative.org/boai20/
Tired: Gaming journal impact factors (#JIFs).
Wired: Gaming journal quality factors (#JQFs), quality scores assigned by #ChatGPT.
Purpose: Journal Impact Factors and other citation-based indicators are widely used and abused to help select journals to publish in or to estimate the value of a published article. Nevertheless, citation rates primarily reflect scholarly impact rather than other quality dimensions, including societal impact, originality, and rigour. In contrast, Journal Quality Factors (JQFs) are average quality score estimates given to a journal's articles by ChatGPT.
Design: JQFs were compared with Polish, Norwegian and Finnish journal ranks and with journal citation rates for 1,300 large monodisciplinary journals with 130,000 articles from 2021, covering the 25 of the 27 Scopus broad fields of research for which this was possible. Outliers were also examined.
Findings: JQFs correlated positively and mostly strongly (median correlation: 0.641) with journal ranks in 24 of the 25 broad fields examined, indicating a nearly science-wide ability for ChatGPT to estimate journal quality. Journal citation rates had similarly high correlations with national journal ranks, however, so JQFs are not a universally better indicator. An examination of journals whose JQFs did not match their journal ranks suggested that abstract styles may affect the result, such as whether the societal contexts of research are mentioned.
Limitations: Different journal rankings may have given different findings because there is no agreed meaning for journal quality.
Implications: The results suggest that JQFs are plausible as journal quality indicators in all fields and may be useful for the (few) research and evaluation contexts where journal quality is an acceptable proxy for article quality, especially for fields like mathematics in which citations are not strong indicators of quality.
Originality: This is the first attempt to estimate academic journal value with a Large Language Model.
Update. These researchers built an #AI system to predict #REF #assessment scores from a range of data points, including #citation rates. For individual works, the system was not very accurate, but for total institutional scores its accuracy was 99.8%. "Despite this, we are not recommending this solution because in our judgement, its benefits are marginally outweighed by the perverse incentive it would generate for institutions to overvalue journal impact factors."
https://blogs.lse.ac.uk/impactofsocialsciences/2023/01/16/can-artificial-intelligence-assess-the-quality-of-academic-journal-articles-in-the-next-ref/
In this blog post Mike Thelwall, Kayvan Kousha, Paul Wilson, Mahshid Abdoli, Meiko Makita, Emma Stuart and Jonathan Levitt discuss the results of a recent project for UKRI that made recommendations…
#Clarivate has modified journal impact factors (#JIFs) in response to an "increase in both the quantity & sophistication of fraudulent behaviors."
https://clarivate.com/blog/2024-journal-citation-reports-changes-in-journal-impact-factor-category-rankings-to-enhance-transparency-and-inclusivity
It's now cultivating the false & invidious impression that journals w/o JIFs are somehow untrustworthy or fraudulent.
"We have evolved the JIF from an indicator of scholarly impact (the numerical value of the JIF)…to an indicator of both…impact & trustworthiness (having a JIF – regardless of the number)."
The 2024 Journal Citation Reports coming in June will feature new unified rankings for each of our 229 science and social science categories; no Journal Impact Factor (JIF)™ rankings for the arts and humanities categories.
New study: "We find that the number of papers cited at least as well as those appearing in high-impact factor journals vastly exceeds the number of papers published in such venues…We also find that approximately half of researchers never publish in a venue with an impact factor above 15,…raising the possibility that [assessments based on journal impact factors, #JIFs] may recognize as little as 10-20% of the work that warrants recognition."
https://www.biorxiv.org/content/10.1101/2023.09.07.556750v1