Dataset documentation fans, please check out "Data Statements: From Technical Concept to Community Practice" (McMillan-Major, Bender & Friedman 2023) -- reporting on how we took data statements v1 to v2 through learning with and from practitioners.

https://dl.acm.org/doi/10.1145/3594737

#nlp #DataDocumentation #ethnlp

Data Statements: From Technical Concept to Community Practice | ACM Journal on Responsible Computing

Responsible computing ultimately requires that technical communities develop and adopt tools, processes, and practices that mitigate harms and support human flourishing. Prior efforts toward the responsible development and use of datasets, machine ...

ACM Journal on Responsible Computing

Since we published the #StochasticParrots paper two years ago, the issues discussed in it have only become more urgent and salient. Join me, @timnitGebru, @meg, and Angelina McMillan-Major, along with an esteemed group of panelists, for discussion and reflection on March 17, 2023.

https://bit.ly/ParrotsDay23

#AI #ML #AIhype #MathyMath #EthicalAI #AIethics #ethNLP

Stochastic Parrots Day

A virtual event to commemorate the 2nd anniversary of the paper, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

Eventbrite

Thank you, @lizweil, for digging into the topic of "AI" in our current moment and highlighting what humanist (and linguistic) perspectives can bring as society grapples with these important discussions.

https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html

#AIhype #ChatGPT #MathyMath #ethNLP #NLP #linguistics

🦜 🎲 "If a large language model, endowed with hundreds of billions of parameters and trained on a very large dataset, can manipulate linguistic form well enough to cheat its way through tests meant to require language understanding, have we learned anything of value about how to build machine language understanding or have we been led down the garden path?"

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜

https://dl.acm.org/doi/10.1145/3442188.3445922

#NLP #ethNLP #GPT3
On the Dangers of Stochastic Parrots | Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency

ACM Conferences

That aside, it's a really good piece (and definitely not par for the course for NYT tech coverage). It shows the value of the work done by the FATE team at Microsoft, as well as how problematic it is that their ability to influence corporate decision making is limited.

#AIethics #ethNLP

Update! I'm really happy to see that the shared task has been removed from the workshop (though "shared tasks papers" is still listed on their web page) and the list of topics of interest has changed.

Workshop URL: https://en.sce.ac.il/news/iact23

Original thread: https://dair-community.social/@emilymbender/109908808556715839

#NLP #ethNLP #AIEthics #SIGIR2023

This workshop is associated with SIGIR. Has SIGIR somehow missed the memo about ethics review?

#SIGIR2023 #AIethics #NLP #ethNLP

Current #trending #tags on #fediverse

#caturday 1,625 people 😸
#Caturday 853 people
#BandcampFriday 201 people
#indiantreaties 137 people
#cop27 135 people
#aiethics 120 people
#streets 110 people
#ethnlp 109 people
#WindowFriday 108 people
#bikelanes 108 people

๐Ÿ” #Boost โค๏ธ

It's 2023. "Gosh, we didn't realize how people would misuse this" just isn't believable anymore.

Bare minimum, with any new tech:
1) How would a stalker use this?
2) What will 4chan do with this?

And don't release, not even as alpha or beta, before mitigating those risks.

https://www.theverge.com/2023/1/31/23579289/ai-voice-clone-deepfake-abuse-4chan-elevenlabs

#AIethics #ethNLP

4chan users embrace AI voice clone tool to generate celebrity hatespeech

AI voice cloning software is improving rapidly -- as is its accessibility. 4chan users recently discovered free software that lets them clone the voices of celebrities like Joe Rogan and Emma Watson, generating audio samples ranging from hatespeech to erotica.

The Verge

@yoavartzi I think the journal should ask the authors to retract the paper or correct the author list.

#ethnlp #aiethics