Join FAIR Points, GO FAIR US @danielvanstrien on June 26th to learn how @huggingface works to ensure machine learning models can be more easily found, used, and built on by others. Register: https://us06web.zoom.us/meeting/register/tZMscO2uqzgvHNc0kVSLMChu3LrKePLLt73c #RDAPlenary #FAIRML #OpenScience #DataScience #libraries

Speaker: Daniel van Strien, Machine Learning Librarian, Hugging Face

The last ten years have seen machine learning make an increasingly significant impact across all areas of business and society. Machine learning also plays a growing role in producing new knowledge across the sciences and the humanities. The Hugging Face Hub is a repository for sharing machine learning models, datasets, and demos. It currently has over 150,000 models and 25,000 datasets made openly available for others to use and build on. These models cover a range of tasks (e.g. text classification) and modalities (e.g. text, image, and audio). The Hub aims to help democratize access to machine learning.

The open science movement has broadened the scope of which scholarly outputs are considered important, emphasizing the data and software underpinning research findings. This scope will again need to be expanded to include machine learning models. In this webinar, Hugging Face's Machine Learning Librarian, Daniel van Strien, will discuss how Hugging Face works to ensure machine learning models can be more easily found, used, and built on by others; in other words, how they can follow the FAIR principles.

Hosted by FAIRPoints (https://fairpoints.org/). For more information or inquiries, please contact [email protected]


Excerpts from the article:
The majority of algorithms developed to enforce “algorithmic fairness” were built without #policy and societal contexts in mind.

Our motivation for pursuing fairness is to improve the situation of a historically disadvantaged group.

When we build AI systems to make decisions about people's lives, our design decisions encode implicit value judgments about what should be prioritized.

Technical solutions are often only a Band-Aid for a broken system. Improving access to #HealthCare, curating more diverse data sets, and developing tools that specifically target the problems faced by historically disadvantaged communities can help make substantive fairness a reality.

#AI systems make life-changing decisions. Choices about how they should be fair, and to whom, are too important to treat #fairness as a simple mathematical problem to be solved.

#AlgorithmicFairness #MedicalSystem #AIEthics #FairML #ArtificialIntelligence

Article:
Health Care #Bias Is Dangerous. But So Are ‘Fairness’ #Algorithms

https://www-wired-com.cdn.ampproject.org/c/s/www.wired.com/story/bias-statistics-artificial-intelligence-healthcare/amp

Paper:
The Unfairness of Fair #MachineLearning: Levelling down and strict egalitarianism by default

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4331652


Good morning #Fediverse! Hope you've all had a great weekend. In this morning's #ConnectionList #Introduction #Connections post, in which I help connect us more richly, I'd like you to meet:

@ZoeGlatt is a #PhD candidate and #DigitalEthnography #researcher, and founder of @DigitalEthnographyCollective. They (not assuming pronouns) are submitting in March and are interested in jobs researching #SocialMedia culture, #CreativeLabour and #CreativeIndustries

@ndporter is into #SocialScience, #Data and #DataEquity and works with the #Carpentries. The #Carpentries is a volunteer-driven movement to provide #researchers with foundational computational skills in technologies like #Python 🐍, #Git ⌨️ and #HPC #programming 💻

@pip works in #design for #AI at #CSIRO #ProductManagement

@safiyanoble is the author of "Algorithms of Oppression", which if you haven't read it, should be on your reading list, especially if you're into #AIethics or #FairML. It sits in a similar space to @VirginiaEubanks "Automating Inequality" and shows how algorithms serve to reinforce existing structures of oppression, particularly of POC.

@fionatribe is a #CulturalAnthropology person who is interested in #MaterialCulture, #Architecture and #Anthropology

@metasecsol works as an #InfoSec researcher at the #W3 and as a #Lecturer in #InformationSecurity #DevSecOps

Good evening, #Mastodon!

Here's another #connectionlist #connection #introduction to help get everyone in the #Fediverse more deeply and richly connected.

@randomwalker is a researcher in #fairML #ML and his book (with others) on Fair Machine Learning is clear and concise: https://fairmlbook.org/

@aurynn is the #sysadmin of the Cloud Island instance and she works in #devops and #security. Kia ora! 🇳🇿

@cosmicpinot is the #ViceChancellor of ANU and a #Nobel Laureate for his work in #astronomy. Is also a #vintner, and may enjoy #dogsofmastodon 🇦🇺 by way of 🇺🇸

@alex is Director of #research at #DAIR, led by @timnitGebru 🇪🇬 ⚧ You can also see her scholarly work at:
https://scholar.google.com/citations?hl=en&user=PksNWIUAAAAJ

@drrimmer Matthew is a #Professor #academic of #IP and #innovation law at QUT in 🇦🇺 He is an expert on #copyright #law and #patents.

@mirandayaver is a #professor #academic in #polsci at Wheaton. They (not assuming pronouns) are interested in #health #policy 🇺🇸

@penguin_brian is a #dev and #devops person who loves #linux 🐧 They are into #Iot, #elixir and #rust

@lesleyhead is a #geographer #professor #academic at @[email protected] and is President of @[email protected]. Interested in #environment and #climatechange 🇦🇺

That's all for now! Please share your own lists under the hashtag #connectionlist so we can all get better connected in the #Fediverse 😄 


Gonna try my best at this #introduction
Hi everyone! I’m Melissa, a clinical ethicist, #qualitative and #quantitative researcher and Director of #HealthAI integration at The Hospital for Sick Children (SickKids). My work focuses on #FairML #ethics #bioethics #equity #pediatrics #medicine #healthcare #aiethics #justice
Hoping to continue learning from folks, so please connect 😃
Noticed people talking about #burnout in #fairML #ethicalAI work. For those interested in what's happening a little south of San Francisco, Stanford HAI runs seminars. They give me regular reminders that there are lots of ways to do good with models and machines, and help me stay hopeful: https://hai.stanford.edu/events
#introduction. I’m an independent researcher, interested in how we create #fairML / #ethicalAI with the goal of developing scalable and robust solutions. I like #individualFairness (Aristotle > Dwork 2011) as a concept; I think it holds the answer to how we should be thinking about predictive systems and fairness. I spent over a decade in financial risk, reviewing derivatives pricing models. I’m hoping we can do better with #sociotechnical systems. Londoner living in Mountain View.

The #FairML book by Barocas, Hardt, and Narayanan

https://fairmlbook.org


How To Get Your Résumé Past The Artificial Intelligence Gatekeepers

Résumés of highly qualified applicants are getting rejected by the initial automated screening processes many companies have in place. Now job seekers find themselves having to learn résumé submission optimization to please the algorithms and beat the bots.