It's 2023. "Gosh, we didn't realize how people would misuse this" just isn't believable anymore.

Bare minimum, with any new tech:
1) How would a stalker use this?
2) What will 4chan do with this?

And don't release, not even as alpha or beta, before mitigating those risks.

https://www.theverge.com/2023/1/31/23579289/ai-voice-clone-deepfake-abuse-4chan-elevenlabs

#AIethics #ethNLP

4chan users embrace AI voice clone tool to generate celebrity hate speech

AI voice cloning software is improving rapidly — as is its accessibility. 4chan users recently discovered free software that lets them clone the voices of celebrities like Joe Rogan and Emma Watson, generating audio samples ranging from hate speech to erotica.

The Verge
@emilymbender What safeguards are possible in these cases? This is essentially a general issue with any voice synthesis. Are detection programs enough? The company's other idea of limiting who can use their tech sounds unfeasible and really counter to why people develop software in the first place.

@joshisanonymous Hmm --- a license at minimum? Not just putting it up for free? Limits on what voices can be used, so that people's voices don't get used without their permission?

And "software" is a very broad category. Your claim that people develop it so everyone can use it seems to come from a place where you can't imagine that software could be harmful and need regulation.

@emilymbender @joshisanonymous This is a really hard problem. To give just one example: The Linux desktop could very much benefit from better accessibility tools. However, for an accessibility tool to be accessible (pun not intended) to end users, it needs to be part of the distribution package repositories, and that generally requires that it be open source. Furthermore, accessibility tools should operate locally on one’s own system for obvious privacy reasons, so moving the real work to the cloud is not an option either.

(Edit: To be clear, I DO NOT condone or support cloning someone’s voice without their freely-given informed consent. This comment is about voice synthesis, which is an incredibly useful accessibility tool.)