Our paper on the poor replicability of Tolman, Ritchie & Kalish (1946) - the famous "Sunburst maze" experiment - is out!

Last author @rmgrieves made a very nice thread about it here:
https://fediscience.org/@rmgrieves/115844897663215766

The article: Tolman's Sunburst Maze 80 Years on: A Meta-Analysis Reveals Poor Replicability and Little Evidence for Shortcutting

I'll just add that this doesn't call the #CognitiveMap theory into question at all; the theory remains strongly supported by diverse lines of evidence. It does show, however, that one element previously used to support it, the ability to choose a shortcut over unexplored space, is not as clearly demonstrated as commonly assumed!

Importantly, this also shows how crucial it is to consider papers in their context and to make sure results replicate consistently before treating them as scientific facts!

PS: I managed to attract Roddy to Mastodon - it would be great to show him we can get at least as much interaction on here as with his similar thread on Bluesky

#SpatialCognition #Neuroscience #Shortcutting #Tolman #Sunburst #ReplicabilityCrisis

Can humans and animals really use internal maps to navigate and take shortcuts?

Tolman famously argued "yes" - based largely on his Sunburst maze experiment.

However, our new review & meta-analysis suggests the evidence is far weaker than you might think.
🧵👇 https://doi.org/10.1111/ejn.70365
1/

#neuroESC #navigation #neuroscience #neuroethology #SpatialCognition #AnimalBehaviour #shortcutting #cognitivemap #tolman

New Article in R&D Management: »Anticipating Knowledge Applicability in Open Science Through Recycling, Mimicking, and Shortcutting«

Model of anticipatory applicability throughout the R&D process.

During the Covid-19 pandemic, several ventures tried to develop vaccines that would not be protected by patents and could be distributed quickly and easily around the globe. In the course of the research project “Organizing Creativity under Regulatory Uncertainty: Alternative Approaches to Intellectual Property” (funded by the Austrian Science Fund FWF and the German Research Foundation DFG), we collected data on such alternative, more open approaches to pharmaceutical R&D.

I am delighted that a paper comparing five such cases has now been published in the journal R&D Management. Check out the abstract of the article, entitled “Anticipating Knowledge Applicability in Open Science Through Recycling, Mimicking, and Shortcutting” and co-authored with my former PhD student Milena Leybold and long-term collaborators Konstantin Hondros and Sigrid Quack, below:

Open science literature scrutinizes how organizations provide access to knowledge. Yet, much less is known about how organizations pursuing open science for societal impact anticipate knowledge applicability—that shared knowledge is reusable for other organizations and individuals, and enables open social innovation. Mobilizing a practice perspective on open science, we investigate how organizations create knowledge that is applicable for participation and further use. Focusing on vaccine research and development during the COVID-19 pandemic, we zoom in on five organizations that pursue open science with the goal of making vaccines available worldwide. We identify three practices of creating knowledge that the organizations employ when doing open science. They recycle accessible knowledge, mimic vaccine designs, and shortcut parts of the approval processes. Beyond facilitating accessibility, these practices create knowledge to support knowledge applicability: they constitute a relation between knowledge creation and sharing that anticipates multiple contexts for the reuse of knowledge. The paper argues that the resulting ‘anticipatory applicability’ leverages open science for societal impact.

The article is available as an open-access full text over at R&D Management. In addition, I have created the obligatory 1paper1meme below:

#anticipatoryApplicability #mimicking #openSocialInnovation #RDManagement #recycling #shortcutting

The risk of #shortcutting in #deeplearning #algorithms for #medical imaging research -
its danger, how complex it can be, and how hard it is to counter.

The experiment (training models to do two things they should not be able to do: predict which patients avoid consuming refried #beans or #beer purely by examining their #knee X-rays) emphasises the importance of rigorous review and assessment of AI-supported evaluations and diagnoses.
#AI
https://www.nature.com/articles/s41598-024-79838-6

The risk of shortcutting in deep learning algorithms for medical imaging research - Scientific Reports

While deep learning (DL) offers the compelling ability to detect details beyond human vision, its black-box nature makes it prone to misinterpretation. A key problem is algorithmic shortcutting, where DL models inform their predictions with patterns in the data that are easy to detect algorithmically but potentially misleading. Shortcutting makes it trivial to create models with surprisingly accurate predictions that lack all face validity. This case study shows how easily shortcut learning happens, its danger, how complex it can be, and how hard it is to counter. We use simple ResNet18 convolutional neural networks (CNN) to train models to do two things they should not be able to do: predict which patients avoid consuming refried beans or beer purely by examining their knee X-rays (AUC of 0.63 for refried beans and 0.73 for beer). We then show how these models’ abilities are tied to several confounding and latent variables in the image. Moreover, the image features the models use to shortcut cannot merely be removed or adjusted through pre-processing. The end result is that we must raise the threshold for evaluating research using CNNs to proclaim new medical attributes that are present in medical images.
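The shortcut mechanism the abstract describes can be illustrated with a toy sketch (hypothetical, not the paper's actual code or data): imagine the label correlates with a scanner artifact, say a bright corner marker left by one clinic's machine, rather than with anything in the knee itself. A "model" that keys on that marker alone beats chance without ever seeing the relevant anatomy.

```python
import random

random.seed(0)

# Toy illustration of shortcut learning. Each "X-ray" is a list of pixel
# intensities. The true signal for the label ("drinks beer") is absent from
# the image, but a confound is present: patients from clinic A (who happen
# to drink beer more often) were scanned on a machine that leaves a
# brighter corner marker (pixel 0).

def make_patient():
    clinic_a = random.random() < 0.5
    marker = 0.9 if clinic_a else 0.1          # the scanner confound
    image = [marker] + [random.random() for _ in range(15)]
    # Label correlates with clinic, not with anything in the knee itself.
    beer = random.random() < (0.8 if clinic_a else 0.2)
    return image, beer

data = [make_patient() for _ in range(2000)]

# A "model" that only looks at the scanner marker -- the shortcut.
def shortcut_predict(image):
    return image[0] > 0.5

accuracy = sum(shortcut_predict(img) == label for img, label in data) / len(data)
print(f"shortcut accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

As the paper notes, the real difficulty is that such confounds are latent and entangled with genuine image features, so they cannot simply be cropped or normalised away in pre-processing.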

Nature

@franklinlopez
Nice.

#Shortcut the #shortcutting, haha. You might be lucky to find where the shortcuts are in a config file and just copy them that way in future.

(BTW #hashtags don't seem to work in CWs. It would be good if they did, though.)