OpenAI holds back wide release of voice-cloning tech due to misuse concerns

Voice Engine can clone voices with 15 seconds of audio, but OpenAI is warning of potential misuse.

https://arstechnica.com/information-technology/2024/03/openai-holds-back-wide-release-of-voice-cloning-tech-due-to-misuse-concerns/

Ars Technica

@arstechnica
I remember OpenAI holding back GPT from public release in a very similar fashion: it’s too powerful, concerns over misuse, etc. Then they went ahead and released it to the public anyway, with all the potential abuses barely mitigated and now materializing exactly as predicted.

At this point, I can’t view this sort of “It’s too powerful to be public!!” statement as anything but prerelease marketing hype.

@inthehands @arstechnica They are weirdos. Voice cloning already worked pretty darn well with open-source tools about a year ago (Tortoise TTS)... I didn't try the newer models, but I don't know what OpenAI could do to be even more dangerous.
The only thing missing was a foolproof UI.