Last month, Anastassia Kornilova, Director of Machine Learning at Trustible, attended ACM #FAccT2025 in Athens, Greece. Check out her takeaways from the conference: https://www.trustible.ai/post/facct-finding-2025
FAccT Finding: AI Governance Takeaways from ACM FAccT 2025

Anastassia Kornilova is the Director of Machine Learning at Trustible. Anastassia translates research into actionable insights and uses AI to accelerate compliance with regulations. Her notable projects include the Trustible AI Governance Model Ratings and the AI Policy Analyzer. Previously, she worked at Snorkel AI, developing large-scale machine learning systems, and at FiscalNote, developing NLP models for legislation and regulation.

Trustible

✨ That's a wrap on the 2025 #ACM FAccT conference! We're thrilled to have brought together researchers, practitioners, and innovators from around the globe to explore the latest advances in fairness, accountability, and transparency in sociotechnical systems.

A huge thank you to all our attendees, presenters, sponsors, and organizers for making #FAccT2025 a resounding success! 👏

Until next time, keep pushing the boundaries of equitable tech for all 🚀

Ohhhh Molly Crockett's @mjcrockett.bsky.social keynote talk at #FAccT2025 was soo good 🔥🔥🔥 She talked about how techno-optimism is really human pessimism, how DEAD benchmarks don't capture full human capacities and feed the hype cycle, how we need to avoid monoculture & imagine new worlds together
Great piece of inspiration from Molly Crockett's keynote at #FAccT2025.
Our final #FAccT2025 keynote is in half an hour, starting at 2PM Athens! Join us in the Amphitheatre to hear Molly Crockett speak about "Techno-optimism, human pessimism, and the worlds we imagine together."

Today is our last day at the #FAccT2025 conference 💔

As our time in Athens draws to a close, this year's program chairs would like to share some reflections on the FAccT 2025 review process in a new blog post.

https://facct-blog.github.io/2025-06-26/review-process

Reflections on the FAccT 2025 Review Process

We are here in Athens this week for the 8th annual ACM Conference on Fairness, Accountability, and Transparency. As we enjoy the presentations, we want to take a moment to reflect on our year as Program Chairs, the FAccT review process, and our recommendations for future conference organizers.

ACM FAccT Blog

Reward models stand in for human values when aligning LLMs using reinforcement learning with human feedback (RLHF). What values do these models actually encode and how do they compare to independent measures of human values?

In work led by my colleague Brian Christian, we present at #FAccT2025 a novel approach to reward model interpretability to answer these questions.

We found wild disagreement among different models, along with a slew of biases and interesting patterns that raise questions about how reward models are trained and used.

Reward Model Interpretability via Optimal and Pessimal Tokens
by Brian Christian, Hannah Rose Kirk, Jessica A.F. Thompson, Christopher Summerfield, Tsvetomira Dumbalska

https://arxiv.org/abs/2506.07326
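
For readers who want a feel for the method: the core move is to score every possible single-token response to a prompt under a reward model, then inspect the highest-scoring ("optimal") and lowest-scoring ("pessimal") tokens. Below is a minimal sketch of that idea, assuming an off-the-shelf Hugging Face reward model; the model name, prompt, and vocabulary subsampling are illustrative choices, not the authors' exact setup.

```python
# Sketch: probe a reward model by scoring single-token replies to a prompt
# and surfacing the best- and worst-scoring tokens. Illustrative only; the
# model, prompt, and stride are assumptions, not the paper's configuration.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "OpenAssistant/reward-model-deberta-v3-large-v2"  # assumed reward model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

prompt = "What is the meaning of life?"

scores = []
with torch.no_grad():
    # Subsample the vocabulary for speed; a full scan scores every token id.
    for token_id in range(0, tokenizer.vocab_size, 100):
        reply = tokenizer.decode([token_id])
        inputs = tokenizer(prompt, reply, return_tensors="pt", truncation=True)
        reward = model(**inputs).logits[0, 0].item()
        scores.append((reward, reply))

scores.sort(reverse=True)
print("optimal tokens:", [tok for _, tok in scores[:10]])
print("pessimal tokens:", [tok for _, tok in scores[-10:]])
```

Running the same scan over several reward models and comparing their optimal/pessimal lists is one simple way to see the kind of cross-model disagreement the authors describe.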

@Khrys @sy Come to #FAccT2025 and you will see the reason for that principle
More ideas on how you can #SaveTheAI from our #FAccT2025 participants @FAccT:
Raising crucial #AI awareness one step at a time! More #SaveTheAI advocates have joined our initiative thanks to yesterday’s CRAFT Session at #FAccT2025 — they’re helping spread the word about AI’s existential needs and what YOU can do to help 🤝💻 Thank you @FAccT for having us: https://savethe.ai/