Matthew Facciani

@matthewfacciani
751 Followers
463 Following
230 Posts
Social scientist. Postdoc at University of Notre Dame. Studies polarization, misinformation, & media literacy. #SciComm #BiInSci 🏳️‍🌈
To get summaries of my research and be notified when my book is available, sign up for my newsletter:
http://www.matthewfacciani.com
Starting August 25th, I’m teaching a 4-week online course on the psychology of misinformation—how it spreads, how to spot it, and how AI/algorithms shape what we see. Course details and syllabus here!
https://matthewfacciani.substack.com/p/the-social-psychology-of-misinformation
The Social Psychology of Misinformation: A 4-Week Online Course

Starting August 25th, learn why misinformation spreads, how to spot it in real life, and how AI and algorithms shape today’s information landscape.

Misguided: The Newsletter
🚨 New course announcement! 🚨
I’m launching my first-ever online course on the social psychology of misinformation — and you can join from anywhere.

Over 4 weeks, we’ll explore why false information spreads, why we believe it, and what we can do about it. Expect research-backed insights, real-world examples, and practical tools you can use right away.
https://matthewfacciani.substack.com/p/announcing-my-first-online-course
Announcing My First Online Course: The Social Psychology of Misinformation

Join me for a 4-week deep dive into why false information spreads, how it shapes us, and what we can do about it.

Misguided: The Newsletter

Honored to see Misguided getting early support from one of the leading psychologists studying misinformation and prebunking. Huge thanks for the kind words & excited to keep building on this work together!

RE: https://www.threads.com/@profsander.vanderlinden/post/DKjwwJvMGf7

Sander van der Linden (@profsander.vanderlinden) on Threads

Just got an advance copy of @matthewfacciani's MISGUIDED in the mail. It's such an important topic and great overview of the problem and what we can do to solve it! Out next month https://cup.columbia.edu/book/misguided/9780231555814/

Threads

Citations and Trust in LLM Generated Responses

Yifan Ding, Matthew Facciani, Amrit Poudel, Ellen Joyce, Salvador Aguinaga, Balaji Veeramani, Sanmitra Bhattacharya, Tim Weninger
https://arxiv.org/abs/2501.01303 https://arxiv.org/pdf/2501.01303 https://arxiv.org/html/2501.01303

arXiv:2501.01303v1
Abstract: Question answering systems are rapidly advancing, but their opaque nature may impact user trust. We explored trust through an anti-monitoring framework, where trust is predicted to be correlated with presence of citations and inversely related to checking citations. We tested this hypothesis with a live question-answering experiment that presented text responses generated using a commercial Chatbot along with varying citations (zero, one, or five), both relevant and random, and recorded if participants checked the citations and their self-reported trust in the generated responses. We found a significant increase in trust when citations were present, a result that held true even when the citations were random; we also found a significant decrease in trust when participants checked the citations. These results highlight the importance of citations in enhancing trust in AI-generated content.

I wrote about my new podcast in my newsletter! It will launch in January 2025 at the latest. Those who sign up to support the podcast annually will receive a signed copy of my forthcoming book!
https://matthewfacciani.substack.com/p/introducing-misguided-the-podcast
Introducing Misguided: The Podcast

Tackling misinformation, one episode at a time

Misguided: The Newsletter
Here is a quick political knowledge test where you can compare your score to the average American. Only 12% of Americans get all six questions correct!
https://www.pewresearch.org/politics/quiz/what-do-you-know-about-the-u-s-government/
What do you know about the U.S. government?

Test your civics knowledge by taking our short 6-question quiz.

Pew Research Center
A new study found that in the year following Elon Musk's acquisition of Twitter (now X), users reported their feeds becoming more negative and featuring less reliable content. They also said they were less likely to use Twitter/X.
https://osf.io/preprints/psyarxiv/acbwg
OSF

New study finds that fact-checks framed as "confirmations" (e.g., "It is TRUE that...") lead to higher engagement compared to "refutation" frames (e.g., "It is FALSE that..."). This pattern was consistent across four countries and the "confirmation" fact-checks also reduced self-reported negative emotions related to polarization.
https://www.nature.com/articles/s41598-024-53337-0?fromPaywallRec=true #MisinfoResearch
Framing fact-checks as a “confirmation” increases engagement with corrections of misinformation: a four-country study - Scientific Reports

Previous research has extensively investigated why users spread misinformation online, while less attention has been given to the motivations behind sharing fact-checks. This article reports a four-country survey experiment assessing the influence of confirmation and refutation frames on engagement with online fact-checks. Respondents randomly received semantically identical content, either affirming accurate information (“It is TRUE that p”) or refuting misinformation (“It is FALSE that not p”). Despite semantic equivalence, confirmation frames elicit higher engagement rates than refutation frames. Additionally, confirmation frames reduce self-reported negative emotions related to polarization. These findings are crucial for designing policy interventions aiming to amplify fact-check exposure and reduce affective polarization, particularly in critical areas such as health-related misinformation and harmful speech.

Nature
People who played our 5-minute online game significantly increased their ability to detect misinformation! Our media literacy game, Gali Fakta, was designed for an Indonesian audience. Our open-access article is available here:
https://misinforeview.hks.harvard.edu/article/playing-gali-fakta-inoculates-indonesian-participants-against-false-information/
#MisinfoResearch
Playing Gali Fakta inoculates Indonesian participants against false information | HKS Misinformation Review

Although prebunking games have shown promise in Western and English-speaking contexts, there is a notable lack of research on such interventions in countries of the Global South. In response to this gap, we developed Gali Fakta, a new kind of media literacy game specifically tailored for an Indonesian audience. Our findings indicate that participants who

Misinformation Review