Becca Ricks

@beccaricks
285 Followers
187 Following
27 Posts
Researcher, artist, technologist. ☺︎ Design futures w/ tendernet. Previously: open source research @mozilla. @hrw @ITP_NYU. she/her.
Website: beccaricks.space/
Twitter: @baricks

RT @[email protected]

📣📣New release! “Parables of AI in/from the Majority World” is an anthology that brings together original stories about the everyday experiences of living with AI-based systems. https://datasociety.net/library/parables-of-ai-in-from-the-majority-world-an-anthology/ 1/6

🐦🔗: https://twitter.com/datasociety/status/1600539324445081614

Parables of AI in/from the Majority World: An Anthology

This anthology was curated from stories of living with data and AI in/from the majority world, narrated at a storytelling workshop organized by the Data & Society Research Institute in October 2021.

Data & Society

RT @[email protected]

hey, it's my official pub date! and the other bad website is doing its thing, letting me know that I'm #1 in "music appreciation," which is definitely what the book is about

🐦🔗: https://twitter.com/npseaver/status/1600150620622966784

Nick Seaver on Twitter


RT @[email protected]

FB algorithms classify race, gender, and age of people in ad images and make delivery decisions based on the predictions. Images of Black folks are shown more to B users; images of children are delivered more to women; images of young women reach older men.

🐦🔗: https://twitter.com/sapiezynski/status/1584884796718972928

Piotr Sapiezynski on Twitter


RT @[email protected]

Today, technical experts hold the tools to conduct system-scale algorithm audits, so they largely decide what algorithmic harms are surfaced. Our #cscw2022 paper asks: how could *everyday users* explore where a system disagrees with their perspectives? http://hci.st/end-user-audit 🧵

🐦🔗: https://twitter.com/michelle123lam/status/1584585389313904650