Voice Cloning: Where WOW Meets OMG


Don’t Become a Victim of Voice Cloning

Have you had this experience? You hear about a remarkable innovation, but before you can finish the phrase “That’s amaz . . .” you’ve already jumped ahead to the questions and concerns it raises.

That’s how many people respond to voice cloning – emerging technologies that let users make near-perfect reproductions of a person’s voice.

Think of the benefits of voice cloning for people who have lost the ability to speak.

But now consider the danger if scammers exploit the technologies by using recognizable voices to perpetrate family emergency scams (“I’m in the hospital, Grandpa, and need money ASAP”), business imposter cons (“Wire a payment to our vendor immediately”), or other forms of fraud.

It’s also the subject of You Don’t Say: An FTC Workshop on Voice Cloning Technologies, scheduled for January 28, 2020. You’ll want to check out the just-announced agenda.

What Is Voice Cloning?

Voice cloning is the process of creating a synthetic voice that sounds like a specific person or target voice. This is done by using machine learning algorithms to analyze and replicate the unique characteristics of a person’s voice, such as their tone, cadence, and accent.

One common approach to voice cloning involves training a neural network on a large dataset of audio recordings of the target voice. The network then uses this data to generate new audio that mimics the target voice.

Another method involves using a text-to-speech (TTS) engine to generate synthetic audio from a transcript of the target voice’s speech. In both cases, the resulting audio can be used to create virtual assistants, voice-overs, or even deepfake audio for nefarious purposes.
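To make “analyzing the unique characteristics of a voice” concrete, here is a toy sketch of one such characteristic: estimating pitch (fundamental frequency) from a waveform by counting zero crossings. This is a deliberately simplified stand-in for the spectral features real cloning models extract; the signal here is a synthetic sine tone, not actual speech.

```python
import math

def estimate_pitch_hz(samples, sample_rate):
    """Crudely estimate a signal's fundamental frequency by counting
    zero crossings. Real systems use far richer features (spectrograms,
    speaker embeddings); this only illustrates the idea of measuring a
    voice characteristic from raw audio."""
    crossings = 0
    for prev, cur in zip(samples, samples[1:]):
        if (prev < 0) != (cur < 0):
            crossings += 1
    duration_s = len(samples) / sample_rate
    # A periodic waveform crosses zero twice per cycle.
    return crossings / (2 * duration_s)

# Synthesize one second of a 220 Hz tone (roughly a low human pitch).
sample_rate = 16_000
samples = [math.sin(2 * math.pi * 220 * n / sample_rate)
           for n in range(sample_rate)]

print(estimate_pitch_hz(samples, sample_rate))  # close to 220
```

A cloning pipeline extracts many such measurements (pitch contour, timbre, pacing) and trains a model to reproduce them when generating new audio.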

Voice cloning technology has many potential applications, including helping people with speech disabilities to communicate using synthetic voices that sound like their own.

However, it also raises ethical concerns: cloned audio can be used to spread misinformation or to impersonate individuals.

Dangers of Voice Cloning

Voice cloning technology has advanced rapidly in recent years, and while it offers many potential benefits, it also poses serious risks. One of the greatest dangers is the potential for misuse, particularly in the form of deepfake audio.

With the ability to create synthetic voices that sound like real people, it is now possible to create convincing fake audio that can be used to spread disinformation, impersonate individuals, or commit fraud.

This poses a serious threat to individuals and organizations, particularly those in positions of power or influence, as they may be targeted by malicious actors looking to exploit this technology for their own gain. As such, it is essential to consider the risks of voice cloning and take steps to mitigate these dangers.

Can Voice Cloning Be Prevented?

Preventing voice cloning completely is challenging as it is difficult to control the distribution of voice data in today’s digital age. However, there are steps that individuals and organizations can take to mitigate the risks of voice cloning.

One approach is to limit the amount of publicly available voice data by being careful about what you share online. For instance, avoid posting long audio recordings of your voice on social media platforms.

Additionally, using two-factor authentication and other security measures for important accounts like email and banking can reduce the risk of impersonation through voice cloning.

Technology-based solutions such as watermarking, steganography, and digital signatures can also be used to verify the authenticity of audio recordings and detect deepfakes.

These techniques add hidden markers to the original audio data, which can be used to identify if the audio has been tampered with.
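As a toy illustration of such hidden markers, the sketch below embeds a watermark in the least significant bit of 16-bit audio samples. This is not any production watermarking scheme (real ones must survive compression and re-recording; LSB marks do not); it only shows how a hidden pattern can reveal that samples were altered.

```python
def embed_watermark(samples, bits):
    """Hide one watermark bit in the least significant bit of each
    16-bit sample -- a fragile, illustrative form of audio watermarking."""
    marked = list(samples)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit
    return marked

def extract_watermark(samples, n_bits):
    """Read back the hidden bits from the first n_bits samples."""
    return [s & 1 for s in samples[:n_bits]]

audio = [1000, -2000, 3000, -4000, 5000, -6000, 7000, -8000]
mark = [1, 0, 1, 1, 0, 1, 0, 0]
marked = embed_watermark(audio, mark)

print(extract_watermark(marked, len(mark)) == mark)   # True
# Editing any watermarked sample flips its hidden bit,
# so the extracted pattern no longer matches.
tampered = marked[:]
tampered[2] += 1
print(extract_watermark(tampered, len(mark)) == mark)  # False
```

Digital signatures work differently: instead of hiding data in the audio, they attach a cryptographic signature computed over the recording, so any change invalidates the signature.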

In addition to technical measures, public awareness campaigns and educational programs can help people recognize the dangers of voice cloning and learn to identify deepfakes.

These efforts can encourage critical thinking and promote a healthy skepticism towards unverified audio recordings.

While it may not be possible to prevent voice cloning entirely, taking proactive steps can help to minimize the risks associated with this technology.

FTC on Voice Cloning

You Don’t Say will convene at 12:30 PM ET with remarks from FTC Commissioner Chopra.

Next on the agenda: a presentation by Dr. Patrick Traynor, the John and Mary Lou Dasburg Preeminence Chair in Engineering at the University of Florida, on the state of voice cloning technologies.

The first panel – which will feature a demonstration of voice cloning – will focus on Good and Bad Use Cases.

On the second panel, academics and others will discuss the Ethics of Voice Cloning.

The third panel will explore Authentication, Detection, and Mitigation. Lois Greisman, Associate Director of the FTC’s Division of Marketing Practices, will present closing remarks at 4:45 PM.

You Don’t Say is free and open to the public. Planning to attend in person?

The event will convene at 12:30 PM ET on Tuesday, January 28th, at the FTC’s Constitution Center conference facility, located at 400 7th Street, S.W., Washington, DC.

Or you can watch the live webcast from a link we’ll post minutes before the start time.

Check out tweets from @FTC using the hashtag #voicecloningFTC.

