It's been cool to be part of the #IJDH issue on #reproducibility and #explainability.

But my reservations regarding the publishing model are being confirmed:

There is a clear relationship between access modality and number of accesses (see bar chart). So far, the #openaccess articles (paid via individual APC or a read-and-publish agreement) have many more accesses per month than the #closed access articles (no author payment required).

Open access advantage 😃
Rich country advantage 😟

@christof good and important point, and thanks for the glimpse into the numbers! I think it would help your case and #OpenAccess arguments in general, if these access statistics were included prominently in the #IJDH journal site.

Springer's own metrics page, e.g. at https://link.springer.com/article/10.1007/s42803-023-00068-9/metrics, does not show much: only the page views of the abstract and its re-use in social media posts, if I understand correctly, but not the number of downloads.

CC @riesthorsten

Reproducibility, verifiability, and computational historical research - International Journal of Digital Humanities

Digital humanities methods have been at the heart of a recent series of high-profile historical research projects. But these approaches raise new questions about reproducibility and verifiability in a field of research where grounding one’s conclusions in a body of historical evidence is crucial. While there have been extensive debates about the nature and methods of historical research since the nineteenth century, the underlying assumption has generally been that documenting one’s sources in a series of footnotes is essential to enable other researchers to test the validity of the research. Even if this approach never amounted to “reproducibility” in the sense of scientific experimentation, it might still be seen as broadly analogous, since the evidence can be reassembled to see the basis for the explanations that were offered and to test their validity. This essay examines how new digital methods like topic modelling, network analysis, knowledge graphs, species models, and various kinds of visualizations are affecting the process of reproducing and verifying historical research. Using examples drawn from recent research projects, it identifies a need for thorough documentation and publication of the different layers of digital research: digital and digitized collections, descriptive metadata, the software used for analysis and visualizations, and the various settings and configurations.

SpringerLink

@dta_cthomas @riesthorsten

The data is all publicly available, I don't have any privileged access: I just copied it from the article landing pages – e.g. here: https://link.springer.com/article/10.1007/s42803-023-00068-9 – where publication date, number of accesses, and number of citations are shown.

@christof @riesthorsten yeah – maybe I still misunderstand, but the "access" numbers there are imo not reliable and not compiled in a transparent way. Springer says about this: "Accesses is an approximate count of unique views and downloads. This number can fluctuate depending on multiple factors." I find that a bit confusing and would not trust it off-hand (plus, I had the chance to compare these figures with the *actual* numbers of chapter downloads and sales for another Springer publication. Very different!)
@christof @riesthorsten obviously no criticism in your direction – the publisher could do a better job there. Thanks again for making and illustrating your initial point!