https://matthewfacciani.substack.com/p/the-social-psychology-of-misinformation
http://www.matthewfacciani.com
Honored to see Misguided getting early support from one of the leading psychologists studying misinformation and prebunking. Huge thanks for the kind words & excited to keep building on this work together!
RE: https://www.threads.com/@profsander.vanderlinden/post/DKjwwJvMGf7
Just got an advance copy of @matthewfacciani's MISGUIDED in the mail. It's such an important topic and great overview of the problem and what we can do to solve it! Out next month https://cup.columbia.edu/book/misguided/9780231555814/
Citations and Trust in LLM Generated Responses
Yifan Ding, Matthew Facciani, Amrit Poudel, Ellen Joyce, Salvador Aguinaga, Balaji Veeramani, Sanmitra Bhattacharya, Tim Weninger
https://arxiv.org/abs/2501.01303 https://arxiv.org/pdf/2501.01303 https://arxiv.org/html/2501.01303
arXiv:2501.01303v1 Announce Type: new
Abstract: Question answering systems are rapidly advancing, but their opaque nature may impact user trust. We explored trust through an anti-monitoring framework, where trust is predicted to be correlated with the presence of citations and inversely related to checking citations. We tested this hypothesis with a live question-answering experiment that presented text responses generated using a commercial chatbot along with varying citations (zero, one, or five), both relevant and random, and recorded whether participants checked the citations and their self-reported trust in the generated responses. We found a significant increase in trust when citations were present, a result that held true even when the citations were random; we also found a significant decrease in trust when participants checked the citations. These results highlight the importance of citations in enhancing trust in AI-generated content.
Previous research has extensively investigated why users spread misinformation online, while less attention has been given to the motivations behind sharing fact-checks. This article reports a four-country survey experiment assessing the influence of confirmation and refutation frames on engagement with online fact-checks. Respondents randomly received semantically identical content, either affirming accurate information ("It is TRUE that p") or refuting misinformation ("It is FALSE that not p"). Despite semantic equivalence, confirmation frames elicit higher engagement rates than refutation frames. Additionally, confirmation frames reduce self-reported negative emotions related to polarization. These findings are crucial for designing policy interventions aiming to amplify fact-check exposure and reduce affective polarization, particularly in critical areas such as health-related misinformation and harmful speech.
Although prebunking games have shown promise in Western and English-speaking contexts, there is a notable lack of research on such interventions in countries of the Global South. In response to this gap, we developed Gali Fakta, a new kind of media literacy game specifically tailored for an Indonesian audience. Our findings indicate that participants who