It's 2023. "Gosh, we didn't realize how people would misuse this" just isn't believable anymore.

Bare minimum, with any new tech:
1) How would a stalker use this?
2) What will 4chan do with this?

And don't release, not even as alpha or beta, before mitigating those risks.

https://www.theverge.com/2023/1/31/23579289/ai-voice-clone-deepfake-abuse-4chan-elevenlabs

#AIethics #ethNLP

4chan users embrace AI voice clone tool to generate celebrity hatespeech

AI voice cloning software is improving rapidly — as is its accessibility. 4chan users recently discovered free software that lets them clone the voices of celebrities like Joe Rogan and Emma Watson, generating audio samples ranging from hatespeech to erotica.

The Verge

@emilymbender What safeguards are possible in these cases? This is essentially a general issue with any voice synthesis. Are detection programs enough? The company's other idea of limiting who can use their tech sounds unfeasible and really counter to why people develop software in the first place.

@joshisanonymous Hmm --- a license at minimum? Not just putting it up for free? Limits on what voices can be used, so that people's voices don't get used without their permission?

And "software" is a very broad category. Your claim that people develop it so everyone can use it seems to come from a place where you can't imagine that software is harmful and would need regulation.

@emilymbender @joshisanonymous a license limiting how people can use the software would be a step backward. we’ve been fighting for free/libre/open-source for a long time, and for good reasons. the alternative is arbitrary corporate control over access.

in the absence of FLOSS, independent research gets choked off, since normal people can’t dig into how AI works. this also gives big tech a monopoly over the latest, most powerful AI tools, which is obviously bad.

@tech_himbo @joshisanonymous the question isn't who gets to use which software but who gets to use whose voice. Obviously.

@emilymbender if source code and training data are openly available, someone can reproduce the product with the safety restrictions turned off, train it on any new voice of their choice, and publish their “unlocked” version for others to use.

ultimately there’s an unavoidable choice between FLOSS and limits on use. to have one, you give up the other. this includes legal limits on use, since if the license disallows certain uses, it’s no longer a FLOSS license.

@tech_himbo I see I've run into a missionary from the fundamentalist church of FLOSS.

Rigidity will solve no problems. It is meaningful to have some software be open source even if there are limits on other software.

Good day.

@emilymbender you’re right, but the broader point isn’t about meaning; it’s about what happens when access is closed

if ai becomes as foundational to future tech as web protocols are to current tech, and there’s a de facto cartel controlling who can tinker, or even peek under the hood, that is a dystopian outcome

at minimum, it entails all the problems of monopoly capitalism

pointing this out is not “fundamentalism”; it’s a statement of fact

@tech_himbo That's a whole lot of "ifs" to justify doing nothing about something that we *know* causes harm.

The freedom to use software doesn't trump all other freedoms and rights.

@johnpettigrew we also *know* big tech is deliberately integrating ai into every level of their products already. it’s not a question of whether ai will become foundational; it’s a question of when, and on whose terms. in many ways, ai already *is* foundational.

the position “we should close off what was previously open access” entails conceding the foundations of emerging tech to monopolistic corporations. that is straightforwardly bad, and much worse than internet trolling.

@tech_himbo You're not really talking about the issues the OP raised. If AI has problems, which you seem to concede, it's surely sensible to try and work out how to minimise them, rather than just throw up our hands in resignation.

@johnpettigrew i’m criticizing a proposed solution (licensing) by pointing out the likely effect on the overall course of ai development

any conversation about misuse of ai has to consider root causes. spam is caused by the profit motive. harassment, largely by sexism/racism. deepfake porn, by patriarchal entitlement to others’ bodies

addressing these social issues with a license agreement is like trying to cure cancer with liquor; you may mask symptoms, but the disease remains

and, to round out the analogy, the damage from getting drunk may outweigh the temporary relief from your symptoms

@tech_himbo But your criticism is based on the assumption that licensing is a Bad Thing in all contexts. And, most particularly, that the potential bad effects of licensing will without doubt be worse than the known bad effects of AI itself.

@johnpettigrew if you’d like evidence of closed licenses being bad, the #RightToRepair movement has catalogued the damage, from millions of tons of e-waste to small farms driven out of business by tractor manufacturers. mastodon itself is an example of openness working where closed systems fail.

openness is part of everything from environmentalism and left-economics to arts and culture. giving it up would be more destructive than any amount of voice impersonation, especially under capitalism

@tech_himbo Once again, you're generalising from particular problems to a universal wrong. Can you at least entertain the possibility that other things could be bad, too? Or that one might need to put up with a lesser evil to deal with a greater one (like AI)?

@johnpettigrew nobody’s disputing that ai misuse is bad. my claim, to be as clear as possible, is that in the choice between closed licensing and ai misuse, ai misuse is the lesser evil.

furthermore, licensing is not the best way to address ai misuse, because the root causes of ai misuse are broader social systems. our efforts would be better directed towards maintaining openness while tackling the social problems that cause ai misuse, rather than slapping licenses on as a band-aid.

@tech_himbo And that's the problem. Licenses have been around for centuries without creating unmanageable evils. But AI is causing real, direct harm to people now. It may not be harming you or your friends, but it's great harm nonetheless.

But it sounds like your concern is to avoid licenses at all costs.

@johnpettigrew restrictive licenses, especially for technologies, do in fact create immense harms, as i’ve already illustrated. to reiterate: licensing is the weapon microsoft used to snuff out pc competition and obtain a monopoly. likewise for many other companies. it also causes waste, and slows innovation

if you care less about corporate power than ai, defend that position instead of pretending that closed licensing isn’t a tremendous economic, environmental, and social burden

@tech_himbo You miss the point. Licenses can cause harm, we've agreed (but also serve a useful social purpose, which you don't seem to admit). But you seem unwilling to accept that AI can cause more harm, more directly, and to more people.

Licenses harm companies and developers. AI has the potential to harm *everyone*, but particularly those who are already disadvantaged. That's the difference.

But social media isn't a venue for such debates, realistically.

@johnpettigrew licenses harm end users (i.e. almost everyone), not just companies and developers. the #RightToRepair discourse is a great case in point: by limiting the ways tech can be used, corporations impose pointless costs on ordinary people. analogously, a license closing off AI source code (which would be required to stop misuse at any meaningful scale) grants a monopoly to the company writing that license. this is a broad social harm, not just an inconvenience for devs.