In hindsight, this study may mark the beginning of the AI-driven loss of trust in online communities, especially those built on social connection and exchange. We documented AI-induced loss of trust in our very first paper on #AIMediatedCommunication 1/ dl.acm.org/doi/abs/10.1...

RE: https://bsky.app/profile/did:plc:fbyvkq5nj5oq35fjetpzhoks/post/3logfictjzs2c

A couple of years ago, we started looking at the impact of AI on online communities -- but since AI content is often hard to detect, we instead looked at how users and moderators respond to suspected AI use. 2/
The first paper from @travislloydphd.bsky.social, aptly titled "There Has To Be a Lot That We're Missing", was based on interviews with subreddit moderators (including moderators of the affected subreddit). Moderators described a range of concerns and the limited techniques they use to address the challenge. 3/

In a follow-up paper, @travislloydphd.bsky.social also showed the rapid uptake of AI rules across subreddits, especially large ones, but -- importantly -- less so in subreddits that focus on social support. I think this is likely to change. 4/ dl.acm.org/doi/10.1145/...

Finally, in one work-in-progress we showed how people admit to using AI and accuse each other of AI use in subreddits dedicated to art. This was smaller in scale than we anticipated -- but could still lead to detrimental effects for these communities. 5/ arxiv.org/abs/2410.07302

Examining the Prevalence and Dynamics of AI-Generated Media in Art Subreddits

Broadly accessible generative AI models like Dall-E have made it possible for anyone to create compelling visual art. In online communities, the introduction of AI-generated content (AIGC) may impact social dynamics, for example causing changes in who is posting content, or shifting the norms or the discussions around the posted content if posts are suspected of being generated by AI. We take steps towards examining the potential impact of AIGC on art-related communities on Reddit. We distinguish between communities that disallow AI content and those without such a direct policy. We look at image-based posts in these communities where the author transparently shares that the image was created by AI, and at comments in these communities that suspect or accuse authors of using generative AI. We find that AI posts (and accusations) have played a surprisingly small part in these communities through the end of 2023, accounting for fewer than 0.5% of the image-based posts. However, even as the absolute number of author-labeled AI posts dwindles over time, accusations of AI use remain more persistent. We show that AI content is more readily used by newcomers and may help increase participation if it aligns with community rules. However, the tone of comments suspecting AI use by others has become more negative over time, especially in communities that do not have explicit rules about AI. Overall, the results show the changing norms and interactions around AIGC in online communities designated for creativity.

AI detection will not work as a long-term solution, as AI can learn to avoid detection. We will need to develop new tools to show our humanness online, or face another retreat from meaningful exchange. /fin