It's 2023. "Gosh, we didn't realize how people would misuse this" just isn't believable anymore.

Bare minimum, with any new tech:
1) How would a stalker use this?
2) What will 4chan do with this?

And don't release, not even as alpha or beta, before mitigating those risks.

https://www.theverge.com/2023/1/31/23579289/ai-voice-clone-deepfake-abuse-4chan-elevenlabs

#AIethics #ethNLP

4chan users embrace AI voice clone tool to generate celebrity hate speech

AI voice cloning software is improving rapidly — as is its accessibility. 4chan users recently discovered free software that lets them clone the voices of celebrities like Joe Rogan and Emma Watson, generating audio samples ranging from hate speech to erotica.

The Verge
@emilymbender Sadly we live in a world where corporate harm is just the price everyone pays so VCs can get their returns, CEOs can get bonuses and the very wealthy can create monopolies to hold everyone but the very wealthy hostage.

@emilymbender

AI is a tool and it’s rather difficult to make sure that a digital tool won’t fall into the hands of someone with ill intent

Photoshop has been publicly available for years now, and people initially thought it would destroy the integrity of all photos

Now, in retrospect, we can see that it didn’t; it’s not unreasonable to see AI as a similar case

Dystopia is often more boring than we think it will be; in the case of Photoshop, we got airbrushed Instagram photos

@zamallama @emilymbender Yes, but this misses Emily's point: there is an ethical responsibility to mitigate these harms. It's not sufficient to throw up hands and say "we can't stop people from misusing tools".
@mmisamore @zamallama @emilymbender You actually cannot. It's just a matter of time: sources will leak, the tool will be pirated, or someone will simply replicate it using basic principles from published papers/patents. And the more power a tool gives you, the faster it will reach the black market. Welcome to cyberpunk. The most hilarious thing about this thread: deepfake-like tools have been available for ages, used many times for defamation, theft, unauthorized access, etc., and the thread starter is starting a moral panic only now, and seems completely uneducated about how corporate/societal power imbalance works.

@mmisamore @zamallama @emilymbender

You wouldn't download a car...

Yes, yes I absolutely would...

Mitigating harms with technical solutions works how well and how often?

You mitigate these harms with social and economic solutions not technical ones that people will engineer their way around.

@zamallama @emilymbender This is a common but inexact comparison, I think, for a few reasons (e.g., Photoshop requiring more skill to manipulate images than it takes to type a prompt into an AI tool). A more apt comparison might be another Adobe tool, Voco, which was demoed but never released to the public: https://en.wikipedia.org/wiki/Adobe_Voco
Adobe Voco - Wikipedia

@scotian @emilymbender

I watched a 13 minute long face swap tutorial in the 8th grade and used my 🏴‍☠️ copy of Photoshop

I was able to do a convincing face swap of my math teacher for a meme in less than two hours

With all the updates since then it’s even easier now; there are free apps that do what Photoshop was doing back then

I can doctor iMessage screenshots in my default photos app now lol

@zamallama @emilymbender Right, that's where we ended up after ~30 years since the initial release of Photoshop. And we are all 30 years wiser in terms of spotting this stuff.

My point is it's a slightly strained comparison because these AI tools are, relatively, much newer, easier to use, and collectively we have not developed our spidey senses to know what is and isn't legitimate. So trying to build a smart, consistent ethical framework makes sense (as a first step, not an afterthought).

@scotian @emilymbender

Yeah I agree that would be good, I’m just saying typically new technology doesn’t go the scifi dystopia/utopia route like people expect it to

The internet could’ve made most colleges obsolete, instead we pay to watch professors hit play on YouTube videos

Social media could’ve created a united global community, instead we got a loneliness epidemic

Widespread surveillance ends up being used to serve you tailored ads, etc

AI will probably follow suit

@zamallama @emilymbender Absolutely. We don't have to stop the march of technological progress, so long as we give due consideration to possible harms that could be introduced. In this sense, Photoshop is a good exemplar of society (eventually) getting it right, e.g., newsrooms introduced policies about sourcing and verifying images before publishing that account for possible digital manipulation.
@zamallama @emilymbender
It's relatively easy to spot a photoshopped image, but probably less so for AI output. For many devs, the focus of the technology seems to be to make it undetectable, not to make it useful.

@pshanks @emilymbender

You’re oversimplifying things; in various use cases, being undetectable makes AI more useful, including:

- Customer service chatbots
- Language translation
- Personal assistants
- Sales/marketing representatives
- Video game AI
- Virtual news anchors
- Virtual interviewers/recruitment bots
- Virtual therapists
- Voice-controlled devices
- Content creation for social media

(This list was written by ChatGPT)

@emilymbender I absolutely agree about the ethics, but I hadn’t heard of this, so had to try it - and holy s**t it’s good. I work with dyslexic teens, and I can’t count the number of times I’ve had to convince them to use immersive reader even though it can sound “weird”... this is a million times better, even with fiction and poetry. It opens up a new world for people with reading difficulties or visual impairments.
@emilymbender What safeguards are possible in these cases? This is essentially a general issue with any voice synthesis. Are detection programs enough? The company's other idea of limiting who can use their tech sounds unfeasible and really counter to why people develop software in the first place.

@joshisanonymous Hmm --- a license at minimum? Not just putting it up for free? Limits on what voices can be used, so that people's voices don't get used without their permission?

And "software" is a very broad category. Your claim that people develop it so everyone can use it seems to come from a place where you can't imagine that software is harmful and would need regulation.

@emilymbender I was thinking licensing, too, since that helps track who is using it in harmful ways.

I was trying to compare it to regulating guns. It seems much harder with software since others will develop the same tech, there's open source, the difficulty in categorizing new tech so that laws can regulate those categories, etc.

I don't doubt that software can cause harm; it just honestly feels insurmountable. (Sorry, I'm sure you've published on this a million times.)

@emilymbender @joshisanonymous a license limiting how people can use the software would be a step backward. we’ve been fighting for free/libre/open-source for a long time, and for good reasons. the alternative is arbitrary corporate control over access.

in the absence of FLOSS, independent research gets choked off, since normal people can’t dig into how AI works. this also gives big tech a monopoly over the latest, most powerful AI tools, which is obviously bad.

@tech_himbo @joshisanonymous the question isn't who gets to use which software but who gets to use whose voice. Obviously.

@emilymbender if source code and training data are openly available, someone can reproduce the product with the safety restrictions turned off, train it on any new voice of their choice, and publish their “unlocked” version and allow others to use it.

ultimately there’s an unavoidable choice between FLOSS and limits on use. to have one, you give up the other. this includes legal limits on use, since if the license disallows certain uses, it’s no longer a FLOSS license.

@tech_himbo I see I've run into a missionary from the fundamentalist church of FLOSS.

Rigidity will solve no problems. It is meaningful to have some software open source even if there are limits on others.

Good day.

@emilymbender you’re right, but the broader point isn’t about meaning — it’s about what happens in the cases when access is closed

if ai becomes as foundational to future tech as web protocols are to current tech, and there’s a de facto cartel controlling who can tinker, or even peek under the hood, that is a dystopian outcome

at minimum, it entails all the problems of monopoly capitalism

pointing this out is not “fundamentalism”; it’s a statement of fact

@tech_himbo That's a whole lot of "ifs" to justify doing nothing about something that we *know* causes harm.

The freedom to use software doesn't trump all other freedoms and rights.

@johnpettigrew we also *know* big tech is deliberately integrating ai into every level of their products already. it’s not a question of whether ai will become foundational; it’s a question of when, and on whose terms. in many ways, ai already *is* foundational.

the position “we should close off what was previously open access” entails conceding the foundations of emerging tech to monopolistic corporations. that is straightforwardly bad, and much worse than internet trolling.

@tech_himbo You're not really talking about the issues the OP raised. If AI has problems, which you seem to concede, it's surely sensible to try to work out how to minimise them, rather than just throw up our hands in resignation.

@johnpettigrew i’m criticizing a proposed solution (licensing) by pointing out the likely effect on the overall course of ai development

any conversation about misuse of ai has to consider root causes. spam is caused by the profit motive. harassment, largely by sexism/racism. deepfake porn, by patriarchal entitlement to others’ bodies

addressing these social issues with a license agreement is like trying to cure cancer with liquor; you may mask symptoms, but the disease remains

and, to round out the analogy, the damage from getting drunk may outweigh the temporary relief from your symptoms
@tech_himbo But your criticism is based on the assumption that licensing is a Bad Thing in all contexts. And, most particularly, that the potential bad effects of licensing will without doubt be worse than the known bad effects of AI itself.

@johnpettigrew if you’d like evidence of closed licenses being bad, the #RightToRepair movement has catalogued the damage, ranging from millions of tons of e-waste to small farms being driven out of business by tractor manufacturers. mastodon itself is an example of open working where closed fails.

openness is part of everything from environmentalism and left-economics to arts and culture. giving it up would be more destructive than any amount of voice impersonation, especially under capitalism

@tech_himbo Once again, you're generalising from particular problems to a universal wrong. Can you at least entertain the possibility that other things could be bad, too? Or that one might need to put up with a lesser evil to deal with a greater one (like AI)?

@johnpettigrew nobody’s disputing that ai misuse is bad. my claim, to be as clear as possible, is that in the choice between closed licensing and ai misuse, ai misuse is the lesser evil.

furthermore, licensing is not the best way to address ai misuse, because the root causes of ai misuse are broader social systems. our efforts would be better directed towards maintaining openness while tackling the social problems that cause ai misuse, rather than slapping licenses on as a band-aid.

@tech_himbo And that's the problem. Licenses have been around for centuries without creating unmanageable evils. But AI is causing real, direct harm to people now. It may not be you or your friends, but it's great harm nonetheless.

But it sounds like your concern is to avoid licenses at all costs.

@johnpettigrew restrictive licenses, especially for technologies, do in fact create immense harms, as i’ve already illustrated. to reiterate: licensing is the weapon microsoft used to snuff out pc competition and obtain a monopoly. likewise for many other companies. it also causes waste, and slows innovation

if you care less about corporate power than ai, defend that position instead of pretending that closed licensing isn’t a tremendous economic, environmental, and social burden

@tech_himbo You miss the point. Licenses can cause harm, we've agreed (but also serve a useful social purpose, which you don't seem to admit). But you seem unwilling to accept that AI can cause more, and more direct, harms, and to more people.

Licenses harm companies and developers. AI has the potential to harm *everyone*, but particularly those who are already disadvantaged. That's the difference.

But social media isn't a venue for such debates, realistically.

@johnpettigrew licenses harm end users (i.e. almost everyone), not just companies and developers. the #RightToRepair discourse is a great case in point. by limiting the ways tech can be used, corporations impose pointless costs on ordinary people. analogously, a license closing off sources to ai (which would be required in order to stop misuse at a meaningful scale) grants a monopoly to the company writing that license. this is a broad social harm, not just an inconvenience for devs.
@emilymbender @tech_himbo You don't completely get how tech works. Your sweet "LICENSES!!!" won't prevent a bunch of sociopathic tech-savvy 4chan users from reproducing *such scary tech* just by looking at patents and scientific papers. That's how science works. And it's already happening: people are re-training leaked/pirated GANs to make porn images as we speak, even though the initial versions were censored. You would need to classify science itself to safeguard every sharp corner. Corps have more computational power, but crowdsourcing works well enough (see "LOIC").

@emilymbender @joshisanonymous This is a really hard problem. To give just one example: The Linux desktop could very much benefit from better accessibility tools. However, for an accessibility tool to be accessible (pun not intended) to end users, it needs to be part of the distribution package repositories, and that generally requires that it be open source. Furthermore, accessibility tools should operate locally on one’s own system for obvious privacy reasons, so moving the real work to the cloud is not an option either.

(Edit: To be clear, I DO NOT condone or support cloning someone’s voice without their freely-given informed consent. This comment is about voice synthesis, which is an incredibly useful accessibility tool.)

@joshisanonymous
And why is it that people develop software in the first place? Are you going to say "because they can"? Or "for people to use", or something that means as much? Because there are a lot of things that we can make, but that doesn't mean we should make them.

Recognizing the use cases and the potential harms, and that the harms often far outweigh the potential legitimate use cases, is part of design.

If programmers want to make software, maybe they should learn how to design shit first, rather than just going to town mindlessly on some code.
@emilymbender
@kichae @emilymbender Weighing harms against benefits is a good idea before deciding to develop. In the case of voice cloning, the only particularly good use case I can think of is giving people their voices back when they've lost them. (For voice synthesis in general, there are tons of use cases.) Maybe that's not enough to outweigh potential harms? In that case, how do we prevent ANYONE from developing that tech? Because someone will even if most can be convinced that it's unethical.

@emilymbender In our interviews for The Secret Life of Data (coming out from MIT Press next year), @jesse and I asked all of our expert sources "what would you do with your knowledge & expertise if you were a supervillain?"

Shockingly few of them had an answer. Most said something along the lines of "gee, I never thought about it, really."

@aramsinn @emilymbender @jesse And do supervillains actually exist?

There is this narrative going on that all problems are caused by some people that need to be identified and eliminated to solve the problem.

But more and more evidence points in the direction that there is not any supervillain - there is just a culture that pits people against each other, inciting violence. Then you get people who find creative ways to abuse anything for that violence.

@emilymbender

8< -- snip --The company claims it can “trace back any generated audio back to the user,” -- snip --

I do like the notion of traceability/auditing. Providing a general lookup feature would be nice. Add company responsibility for what's generated as well.
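In its simplest form, the lookup feature suggested above amounts to a server-side audit index: fingerprint every clip at generation time and let anyone query who generated it. A minimal sketch of that idea (all names here are hypothetical; this is not ElevenLabs' actual system, and an exact-hash fingerprint like this would not survive re-encoding — a real system would need a perceptual hash):

```python
import hashlib
from typing import Optional

class GenerationAuditLog:
    """Hypothetical server-side audit index: maps a fingerprint of each
    generated audio clip to the account that generated it."""

    def __init__(self) -> None:
        self._index: dict[str, str] = {}

    @staticmethod
    def fingerprint(audio_bytes: bytes) -> str:
        # Naive exact-match fingerprint. Robust traceability would need a
        # perceptual hash that survives re-encoding, trimming, etc.
        return hashlib.sha256(audio_bytes).hexdigest()

    def record(self, audio_bytes: bytes, user_id: str) -> None:
        # Called by the service at generation time, before the clip leaves.
        self._index[self.fingerprint(audio_bytes)] = user_id

    def lookup(self, audio_bytes: bytes) -> Optional[str]:
        # Public lookup: which account generated this clip, if any?
        return self._index.get(self.fingerprint(audio_bytes))

log = GenerationAuditLog()
log.record(b"fake-audio-sample", "user-123")
print(log.lookup(b"fake-audio-sample"))  # -> user-123
print(log.lookup(b"untracked-clip"))     # -> None
```

The design choice worth noting: the index must be populated server-side at generation time, which is exactly why this kind of traceability is incompatible with fully open, self-hosted versions of the same model.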

@emilymbender How come trade marks enjoy better protection than this?
@emilymbender On a slightly related note - can my grandmother figure out how to use this?
@queerscifi ah, some casual ageism and misogyny. Just what I needed this morning! *sigh*

@emilymbender Sorry - not my intent at all. It actually came from my aunt, who is in her seventies, and who suggested that all software be beta tested in nursing homes. Her point being that software engineers often make software needlessly complicated, and should be considering all users.

My grandmother was a very capable person - she was on Facebook and active there in her late eighties, and had her own blog.

@queerscifi Right -- so there are ways to express that idea that don't involve "your grandmother".

@emilymbender That's fair. I was just trying to personalize it, and having a fond memory of her at the same time.

My aunt said it should be "beta tested in nursing homes."

I still miss my Grandma Joyce after ten years, and so that's how it came out.

Sorry for causing you distress.

@emilymbender

Thrilling + concurrently frightening

"In The Verge’s own tests...use #ElevenLabs platform to clone targets’ voices in a matter of seconds and generate audio samples containing everything from threats of violence 2 expressions of racism + transphobia...we created a voice clone of President Joe Biden +... generate audio that sounded like the president announcing an invasion of Russia... illustrating how the technology could B used 2 spread misinformation."

https://www.theverge.com/2023/1/31/23579289/ai-voice-clone-deepfake-abuse-4chan-elevenlabs


@HistoPol @emilymbender Except AI voice generation is not needed to spread misinformation. Disinformation peddlers have had deepfakes available for a long time, but using them is a needless hassle.

Take the litter-boxes-in-schools controversy: there is no actual evidence, only conservatives authoritatively claiming it's happening.

Using good, intelligible voice generation could improve the lives of people who are visually impaired.

Also train stations would not have to rely on prerecorded audio.

@emilymbender I doubt this is going to be contained to celebs. I'd wager that some students somewhere will wield this against a prof, a crank in an HOA, an angry neighbor in city politics, etc. Basically, any AI-aware person with an axe to grind can fire up their generative "weapon of mass disinformation" and smite their foes. Bad things are 100% coming, and I fully agree that this is a design ethics issue, as there is absolutely no excuse for not thinking of it.

@emilymbender I like how their first idea was to try starting WW3.

Journalists eh...

@emilymbender
That’s it. I’m going back to bed. 
@emilymbender I don't even recommend having pictures on the 'net. And I have considered using my audio tools to slightly change my voice for any communication.

@emilymbender

3) How many ways to Sunday can a deranged toxic male billionaire screw up millions of users after being forced into buying it following a rubbish 420 joke?

@emilymbender I believe it is William Gibson who said, "The street always finds its own uses for technology."

Apparently, so does the sewer. 🤦

@emilymbender they know the risks. they don't care and want to sell out.
@emilymbender "we didn't realize that OUR tech would be misused like ALL OTHER TECH"
@emilymbender
Let's call it "the 4Chan test".
@emilymbender "if only we could have foreseen that the Torment Nexus we created would be used for TORMENT"