Travis Lloyd

28 Followers
60 Following
10 Posts
PhD student at Cornell Tech + public interest technologist. Studying the health of the information ecosystem, especially the impact of generative AI. Past: engineering at Twitter, Amazon, Upsolve, Paladin PBC. he/him.
website: https://traeve.com
twitter: https://twitter.com/travislloydphd
*NEW PAPER* How are online communities adapting to the presence of AI-generated content (AIGC)? To answer this question we collected the community rules for 300,000 public subreddits and identified rules governing the use of AI. /1
We offer a novel taxonomy of AI rule types and plan to make our datasets public after peer-review. Check out the preprint (w/ co-authors @Mor, Jennah and Tung) for more details about our findings and their implications: https://arxiv.org/abs/2410.11698 /fin
AI Rules? Characterizing Reddit Community Policies Towards AI-Generated Content

How are Reddit communities responding to AI-generated content? We explored this question through a large-scale analysis of subreddit community rules and their change over time. We collected the metadata and community rules for over 300,000 public subreddits and measured the prevalence of rules governing AI. We labeled subreddits and AI rules according to existing taxonomies from the HCI literature and a new taxonomy we developed specific to AI rules. While rules about AI are still relatively uncommon, the number of subreddits with these rules more than doubled over the course of a year. AI rules are more common in larger subreddits and communities focused on art or celebrity topics, and less common in those focused on social support. These rules often focus on AI images and evoke, as justification, concerns about quality and authenticity. Overall, our findings illustrate the emergence of varied concerns about AI, in different community contexts. Platform designers and HCI researchers should heed these concerns if they hope to encourage community self-determination in the age of generative AI. We make our datasets public to enable future large-scale studies of community self-governance.

The number of subreddits with AI rules has more than doubled over the last 12 months. Larger subreddits, as well as those devoted to art and celebrity topics, are the most likely to have such rules. Rules about AI most often focus on regulating the use of AI images and raise, as justification, concerns about quality and authenticity. /2
Under the (Neighbor)hood: Hyperlocal Surveillance on Nextdoor by Marianne Aubin Le Quéré, Madiha Zarah Choksi, Travis Lloyd, Ruojia Tao, James Grimmelmann, Mor Naaman
@marianne, @travislloyd, @jtlg, @Mor
#ica24
Check out the preprint (w/ co-authors @Mor and @reagle) for details of these (and other) findings and a discussion of the implications for the future of online communities: https://arxiv.org/abs/2311.12702 /fin
"There Has To Be a Lot That We're Missing": Moderating AI-Generated Content on Reddit

Generative AI has begun to alter how we work, learn, communicate, and participate in online communities. How might our online communities be changed by generative AI? To start addressing this question, we focused on online community moderators' experiences with AI-generated content (AIGC). We performed fifteen in-depth, semi-structured interviews with moderators of Reddit communities that restrict the use of AIGC. Our study finds that rules about AIGC are motivated by concerns about content quality, social dynamics, and governance challenges. Moderators fear that, without such rules, AIGC threatens to reduce their communities' utility and social value. We find that, despite the absence of foolproof tools for detecting AIGC, moderators were able to somewhat limit the disruption caused by this new phenomenon by working with their communities to clarify norms. However, moderators found enforcing AIGC restrictions challenging, and had to rely on time-intensive and inaccurate detection heuristics in their efforts. Our results highlight the importance of supporting community autonomy and self-determination in the face of this sudden technological change, and suggest potential design solutions that may help.

Our participants perceived both good- and bad-faith motivations behind AIGC use. Some felt that it could increase participation, but others saw it as a serious threat to the health of their community. Without a foolproof way to identify AIGC, our participants rely on heuristics that will likely become less effective as generative AI technology improves. /2
*NEW PAPER* Since the launch of ChatGPT, online communities have been reckoning with an influx of AI-generated content (AIGC). How are they dealing with it? To find out, I interviewed 15 community moderators from top subreddits to discuss how they and their communities are responding. /1
Hello Mastodon world! Let's give this decentralized social media thing a shot :)