1/

#OpenAccess book [1] by @deevybee "As we shall see, demonstrating that an intervention has an impact is much harder than it appears at first sight"

https://mastodon.social/@deevybee/110118670777140484

"Much of the attention of methodologists has focused on how to recognize and control for unwanted factors that can affect outcomes of interest. But psychology is also important: it tells us that our own human biases can be just as important in leading us astray"

#Statistics #CognitiveBias #uncertainty #Complexity

2/
"#CitationBias is often unintentional, but is a consequence of the way humans think. Bishop (2020) [2] described a particular cognitive process, confirmation bias, which makes it much easier to attend to and remember things that are aligned with our prior expectations. #ConfirmationBias is a natural tendency that in everyday life often serves a useful purpose in reducing our #CognitiveLoad, but which is incompatible with objective scientific thinking" [1]

#science #research

3/
"It’s not generally possible to avoid all #ConflictOfInterest, but the important thing is to recognize experimenter #bias as the rule rather than the exception, identify possible threats [...] to study validity, take stringent steps to counteract these, and report openly"

"being a good scientist often conflicts with our natural human tendencies" [1]

"Faulty reasoning results in shoddy #science, even when the intentions are good. Researchers need to become more aware of these #pitfalls" [3]

4/

#References

[1] Bishop, D.V.M., Thompson, P.A., 2023. Evaluating what works. Bookdown. https://purl.org/INRMM-MiD/z-DB2FTMIG

[2] Bishop, D.V.M., 2020. The psychology of experimental psychologists: overcoming cognitive constraints to improve research - The 47th Sir Frederic Bartlett Lecture. Quarterly Journal of Experimental Psychology 73, 1–19. https://doi.org/10.1177/1747021819886519

[3] Bishop, D.V.M., 2020. How scientists can stop fooling themselves over statistics. Nature 584, 9. https://doi.org/10.1038/d41586-020-02275-8

#DOI

Evaluating What Works

Introduction to methods for evaluating effectiveness of non-medical interventions

5/

The #CognitiveBias spectrum is vast, spanning from data/methods within #science to society/policy: e.g. [4]

"Converging evidence from the behavioural and brain sciences suggests that the human moral judgement system is not well equipped to identify #ClimateChange — a complex, large-scale and unintentionally caused phenomenon — as an important moral imperative. As climate change fails to generate strong moral intuitions, it does not motivate an urgent need for action in the way that other moral imperatives do"

6/
"Why climate change doesn’t register as a moral imperative

Certain features of #ClimateChange and the ways in which it is communicated to the public interact with the human moral judgement system to decrease individual perceptions of the issue as a moral imperative.
[...] we identify six primary challenges that prevent climate change from activating the human moral alarm system" and "strategies that communicators could use to increase recognition of climate change as a moral imperative" [4]

8/
#Science is not perfect, but its distinctive ability to self-correct is key, and that ability is not automatic. It is a process to foster, not an intrinsic property (since we are all subject to #CognitiveBias, believing that we will fix this in science once and for all is quite an obvious #catch22 paradox - hint: "believing").

Research that was once honestly believed to be good may be revised later. However, sometimes a thesis/paradigm/school fights to survive beyond good faith, "against" #ScienceSelfCorrection

9/
E.g. #CitationBias can be mitigated: when serving as a peer reviewer, it may be a daunting experience to periodically have to note that controversial publications are being cited to corroborate a thesis without any mention that a controversy exists (meaning: the thesis might not be so obvious to defend after all).

At least, not without giving honest context (e.g. also citing the main criticisms, especially when they are not occasional but instead systematically repeated over decades)

10/
Why is this not marginal? Might some very human #CognitiveBias act partly unnoticed?

E.g. @petersuber on a recent work:

https://fediscience.org/@petersuber/110130331112943032

From [5]: "It seems that the respondents assessed cited papers worse when they observed rather low paper impact values in the survey"
I.e. highly cited works seem to be considered more trustworthy

This may be an issue if controversial literature keeps getting cited while the reason why it is controversial passes unnoticed [6]. More scrutiny might help

petersuber (@[email protected])

New study: When authors are asked to assess the #quality of an article they had previously cited, they tend to adjust their assessments in light of the paper's #citation count. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0283893


11/
More awareness needed: e.g. [6] noted how papers corrected/criticised are believed to become "less worthy or trusty to the eyes of the scientific community and thus predestined to have low scientific impact"

Instead, surprisingly these "papers are more likely to be among the most cited papers of a journal"

A 2019 study [7]: "Once misconceptions proliferate wide and long enough, criticizing them not only becomes increasingly difficult, efforts may even contribute to the continued spreading"

12/
#Science is not perfect, but it has proved over centuries that it can self-correct. This is perhaps one of its most impressive (and powerful) processes. However, it seems we cannot count "automatically" on its distinctive self-correction [8].
Awareness of stakes/failures and "community" scrutiny may be key to #ScienceSelfCorrection

#References-1

[5] Bornmann, L., et al., 2023. Anchoring effects in the assessment of papers: an empirical survey of citing authors. PLOS ONE 18, e0283893. https://doi.org/10.1371/journal.pone.0283893

#DOI

Anchoring effects in the assessment of papers: An empirical survey of citing authors

In our study, we have empirically studied the assessment of cited papers within the framework of the anchoring-and-adjustment heuristic. We are interested in the question whether the assessment of a paper can be influenced by numerical information that act as an anchor (e.g. citation impact). We have undertaken a survey of corresponding authors with an available email address in the Web of Science database. The authors were asked to assess the quality of papers that they cited in previous papers. Some authors were assigned to three treatment groups that receive further information alongside the cited paper: citation impact information, information on the publishing journal (journal impact factor) or a numerical access code to enter the survey. The control group did not receive any further numerical information.

We are interested in whether possible adjustments in the assessments can not only be produced by quality-related information (citation impact or journal impact), but also by numbers that are not related to quality, i.e. the access code. Our results show that the quality assessments of papers seem to depend on the citation impact information of single papers. The other information (anchors) such as an arbitrary number (an access code) and journal impact information did not play a (important) role in the assessments of papers.

The results point to a possible anchoring bias caused by insufficient adjustment: it seems that the respondents assessed cited papers in another way when they observed paper impact values in the survey. We conclude that initiatives aiming at reducing the use of journal impact information in research evaluation either were already successful or overestimated the influence of this information.

13/

#References-2

[6] Radicchi, F., 2012. In science “there is no bad publicity”: papers criticized in comments have high scientific impact. Scientific Reports 2, 815+. https://doi.org/10.1038/srep00815

[7] Letrud, K., Hernes, S., 2019. Affirmative citation bias in scientific myth debunking: A three-in-one case study. PLOS ONE 14, e0222213+. https://doi.org/10.1371/journal.pone.0222213

[8] Saltelli, A., Funtowicz, S., 2017. What is science’s crisis really about? Futures 91, 5–11. https://doi.org/10.1016/j.futures.2017.05.010

#DOI

@dderigo
In case it's of interest, see my 2008 essay on the role of #OpenAccess in facilitating scientific #SelfCorrection.
https://dash.harvard.edu/handle/1/4391168
Open access and the self-correction of knowledge

@petersuber
> my 2008 essay on the role of Open Access in facilitating scientific Self Correction.

I went to the page you linked to read this, but the link under "published version" appears to be broken. I've used the linked form to report the problem to the site admin, but I thought you might also want to know.

FYI that page is also trying to load JavaScript from two third-party domains: cloudflare.com and openrev.orv

@dderigo

@strypey @petersuber below a #PURL link to an archived copy (@internetarchive)

I found key this comment on #ScienceSelfCorrection
"it's precisely because individuals find it difficult to correct themselves, or precisely because they benefit from the perspectives of others, that we should employ means of correction that harness public scrutiny and #OpenAccess"

#Reference

Suber, P., 2008. Open access and the self-correction of knowledge. SPARC Open Access Newsletter 122. https://purl.org/INRMM-MiD/z-SCVLRJHP


@strypey Thanks. I knew about the broken link to the published version. The Earlham server is temporarily down. I prefer the link to the copy in the Harvard repository, in part because it's more durable.
https://dash.harvard.edu/handle/1/4391168

BTW, *which* copy tries to load cloudflare and openrev javascript? The Earlham copy or the Harvard copy? If you let me know, I'll follow it up.


@petersuber
> *which* copy tries to load cloudflare and openrev javascript?

This page does:
https://dash.harvard.edu/handle/1/4391168

If you have the NoScript plug-in, you can see them both under Harvard.edu.


@strypey
Thanks. I'm checking it out now.
@dderigo @dan613 Because humanity, for all its impressive accomplishments, remains deeply deeply stupid. The end.