OpenAI holds back wide release of voice-cloning tech due to misuse concerns
Voice Engine can clone voices with 15 seconds of audio, but OpenAI is warning of potential misuse.
@arstechnica
I remember OpenAI holding back GPT from public release in a very similar fashion: it’s too powerful, concerns over misuse, etc. Then they went ahead and released it to the public anyway, with all the potential abuses barely mitigated and now materializing exactly as predicted.
At this point, I can’t view this sort of “It’s too powerful to be public!!” statement as anything but prerelease marketing hype.
To be clear, I’m very much in favor of ethical non-release of dangerous tech.
If you’re going to do that, the way to do it is not to send out a press release. It’s to keep your dangerous discovery somewhere between low-key and confidential, and get your research community working to develop mitigations and countermeasures •before• your creation wanders into the view of investors and militaries.
@inthehands @arstechnica lol someone released a pretty damn good voice cloner today with source available
Saw it on hacker news
Worked with 2-3 seconds of reference audio; the output is immediately recognizable, although a bit staticky.
Static doesn’t matter though, I bet it’s good enough for phishing
@arstechnica I'm sure interested parties can gain access to this super dangerous technology by paying OpenAI a lot of money.
What a joke OpenAI has turned out to be. A cliché. Mad scientists building Frankenstein's monster over and over again, and then going "Whoops! Oh, well."