Will Landecker

469 Followers
354 Following
169 Posts

Technical consulting for responsible AI: https://www.accountablealgorithm.com/

algorithmic accountability. interpretable ML and AI ethics. crack open the black box and judge it.

#pdx pacific northwesterner, born and raised in oakland.

#woodworking #fermentation (#miso, #cider, #koji). #synthesizers. #gardening. #rugby (union, league, and touch). #antiracism. #antisexism.

previous lives @ nextdoor, stripe, lyft, twitter.

sometimes in French.

@[email protected]

Website: https://www.accountablealgorithm.com/
LinkedIn: https://www.linkedin.com/in/will-landecker/
Blog: https://www.accountablealgorithm.com/blog
Pronouns: He / him / his

I need a nonprofit messaging + fundraising consultant! Anyone have recommendations?

Ideally this person has familiarity with philanthropists in the tech + social good space, and can help create consistent messaging, a prospectus, pitch deck, website messaging, etc.

“1 or larger”
Remember when two bars of cell service used to mean that your phone would still function?

I made an XOR-tray!

Elm walls, plywood bottom. Simple, single dovetail joints.

For the grooves’ through-holes, I tried to grain-match and plug after assembling. I didn’t like cutting and planing such small plugs. I’ll probably just chisel out stopped grooves instead next time.

Also I am finally going to admit that I prefer transferring the tails to the pins with a pencil, rather than a knife. I understand a knife should be more accurate, but I just can’t get it right.

#woodworking

Who is giving small-to-medium grants (around $10k) for research on topics like AI interpretability, evaluation, safeguards, fairness, and the like?

Last week at the Santa Fe Institute I learned of some great research. Many of these projects would benefit from a small grant. Some might use it to hire a SWE, pay for increased rate limits on LLM APIs, or bridge their theories into more industry-focused examples.

If you know anyone giving grants in this space, I’d love to connect them with some very worthy projects.

This week I had the pleasure of being invited to speak at the Santa Fe Institute about how the fields of Responsible AI and AI Ethics have shaped, and been shaped by, the last ten years of AI development in Silicon Valley. We discussed disparate impact, the role of regulation, the toll of AI on human data workers and the environment, and the technical tools used by tech companies to detect and remediate biased algorithms.

Thanks to Cris Moore and Melanie Mitchell for hosting me!

Accuracy Isn’t Enough

YouTube

OK, I know I was promoting this podcast just last week, but I just listened to the whole thing front-to-back and I really enjoyed it. I think some of you might too.

I was on the Ethical Machines podcast, where I talked about interpretable machine learning, bias feedback loops, the social and interdisciplinary backbone that generative AI needs, and receiving yellow cards in rugby. It was a very fun conversation.

I hope you'll listen to it and let me know what you think. Links below...

I was interviewed on the Ethical Machines podcast! I talked with Reid about AI evaluation, the product-data-algorithm bias feedback loop, interpretable ML, and why I think it’s an important moment for non-STEM folks to be involved in tech.

Video interview here: https://youtu.be/1lyQDSr-Ehg

Self-prescribed: takenoko gohan with fried chicken skin.