
OpenAI Voice Cloning Too Good To Release




OpenAI Voice Cloning Too Good To Release – Notwithstanding the company’s tendency to suggest its products are more revolutionary than the French, OpenAI’s announcement that Voice Engine is too good at voice cloning to release is a signpost for security people.

In the past, a known person’s voice over any comms path might have been considered legitimate, but that’s going to change – and soon. OpenAI said the other day that its testing shows Voice Engine is so good at deepfake voice cloning it will certainly be misused, so the company will hold back release until stronger rules and guidelines can be established for deployment.

Voice Engine is an update to the model behind OpenAI’s text-to-speech API and the conversation mode of ChatGPT. It uses text input and a single 15-second audio sample to generate natural-sounding speech that closely resembles the original speaker. “It is notable that a small model with a single 15-second sample can create emotive and realistic voices,” according to OpenAI.
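For context, the nearest publicly released analogue is OpenAI’s existing text-to-speech endpoint, which only offers preset voices. The minimal sketch below assumes the current openai Python SDK; Voice Engine itself has not been released, so the 15-second reference-audio input it would presumably take is noted only as a hypothetical comment, not a real parameter.

```python
# Sketch using OpenAI's released text-to-speech API (preset voices only).
# Voice Engine is NOT publicly available; any reference-audio/cloning
# parameter is hypothetical and does not exist in this endpoint.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.audio.speech.create(
    model="tts-1",        # released TTS model, not Voice Engine
    voice="alloy",        # preset voice; a cloned voice is not an option here
    input="This is natural-sounding speech generated from plain text.",
    # hypothetically, Voice Engine would also take ~15 seconds of reference audio
)

# Write the returned audio bytes to disk.
with open("speech.mp3", "wb") as audio_file:
    audio_file.write(response.content)
```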

The company, which said it trained Voice Engine using freely available data, said it was “taking a cautious and informed approach to a broader release due to the potential for synthetic voice misuse”.

“We hope to start a dialogue on the responsible deployment of synthetic voices, and how society can adapt to these new capabilities,” OpenAI said.

“Based on these conversations and the results of these small-scale tests, we will make a more informed decision about whether and how to deploy this technology at scale.”

Current terms and conditions for companies testing Voice Engine prohibit the impersonation of an individual or organization “without consent or legal right.”

They mandate clear disclosure of the use of AI to clone voices, and informed consent from anyone whose voice is being cloned. Plus, OpenAI uses watermarks to make it easier to identify audio produced using Voice Engine.

The development of freely available voice cloning is going to open an entirely new realm of threat for security teams – not just operationally, but within their own command structures.

You can find out more about Voice Engine here or read more SEN news here.  

