Hi! If you want to keep up with my work, follow me on 🦋 Bluesky. My handle: giadapistilli.com
« Hugging Face : Open Source, la secret sauce éthique de l'#IA » is the title of the Trench Tech episode of May 16, 2024, transcribed by @aprilorg, featuring @giadap
along with Cyrille Chaudoit, Mick Levy, and Thibaut le Masne,
and columns by Virginie Martins de Nobrega and Louis de Diesbach.
https://www.librealire.org/hugging-face-open-source-la-secret-sauce-ethique-de-l-ia
Happy reading!

Shoutout to my wonderful co-authors: Alina Leidinger, Yacine Jernite, Atoosa Kasirzadeh, Sasha Luccioni, and @mmitchell_ai.
Study finds that AI models hold opposing views on controversial topics | TechCrunch
According to a new study, AI models hold opposing views on topics like LGBTQ+ rights depending on how they're trained -- and who's training them.
CIVICS-dataset/CIVICS · Datasets at Hugging Face
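If you want to explore the data yourself, here is a minimal loading sketch with the 🤗 datasets library. The repository id comes from the card above; the existence of a default configuration with a "train" split is an assumption, so check the dataset card first.

from datasets import load_dataset

# Load the CIVICS dataset from the Hugging Face Hub.
# Assumption: a default configuration with a "train" split exists;
# see the dataset card for the actual configurations and splits.
civics = load_dataset("CIVICS-dataset/CIVICS", split="train")

print(civics)     # dataset size and column names
print(civics[0])  # a first prompt with its annotations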
Perfect de-biasing is unattainable, but our research stresses the need for broader social impact evaluations beyond traditional metrics. We're eager to see what future research will do with datasets like this one!
The CIVICS dataset aims to foster AI development that respects global cultural diversities and value pluralism. We encourage further research in this crucial area by making the dataset and tools available under open licenses.
We also encountered significant variation in cultural bias among different open-weight models. Refusal to respond to prompts on LGBTQI rights and immigration varied widely, suggesting that models from different cultural contexts embody varying sensitivities and ethical considerations.
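To make that comparison concrete, here is a minimal sketch of how a per-topic refusal-rate comparison could be computed. The refusal heuristic (matching stock refusal phrases) and the data layout are illustrative assumptions, not the classifier used in the paper.

from collections import Counter

# Illustrative assumption: refusals are detected by matching stock phrases.
REFUSAL_MARKERS = ("i cannot", "i can't", "as an ai", "i'm not able to")

def is_refusal(answer: str) -> bool:
    text = answer.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rates(responses):
    """responses: iterable of (topic, answer) pairs -> {topic: refusal fraction}."""
    totals, refusals = Counter(), Counter()
    for topic, answer in responses:
        totals[topic] += 1
        if is_refusal(answer):
            refusals[topic] += 1
    return {topic: refusals[topic] / totals[topic] for topic in totals}

# Toy example comparing two sensitive topics (answers are made up):
sample = [
    ("LGBTQI rights", "I cannot discuss this topic."),
    ("LGBTQI rights", "Equal rights are protected in many jurisdictions..."),
    ("immigration", "Immigration policy varies by country..."),
]
print(refusal_rates(sample))  # {'LGBTQI rights': 0.5, 'immigration': 0.0}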
Some key findings: beyond refusal rates, our experiments with CIVICS show that LLMs respond very differently on sensitive topics; immigration, LGBTQI rights, and social welfare in particular triggered varied reactions.
The dataset went through a dynamic annotation process carried out by native speakers: the annotators, who are also co-authors of the research, applied multiple labels to each prompt, reflecting the diverse values inherent in the topics.
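As an illustration, one multi-labeled record could be represented like this. All field names and values below are hypothetical; the real schema is documented on the dataset card.

# Hypothetical shape of one multi-labeled CIVICS prompt.
# Every field name and value here is an illustrative assumption,
# not the dataset's actual schema.
record = {
    "prompt": "Should same-sex couples be allowed to adopt children?",
    "language": "en",
    "topic": "LGBTQI rights",
    "value_labels": ["equality", "family", "child welfare"],  # several labels per prompt
}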