Advantages and risks of artificial intelligence voice mimicry

Artificial intelligence now powers a wide range of products and services. Using these techniques to imitate voices can be entertaining and interesting, but it also carries serious risks.

These systems can now reproduce and generate voices with remarkable accuracy and realism. All a user needs to do is supply audio recordings of a person to an AI model; from those samples, the system learns to simulate that person's voice and manner of speaking.

Once training is complete, using the model is straightforward: it can read any text aloud in a strikingly realistic, convincing voice, realistic enough to fool almost anyone.

These techniques have advanced rapidly in recent years under the umbrella term "speech synthesis." Rather than splitting words into smaller units and stitching pre-recorded fragments back together, neural networks can now model a voice in full.
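To make the contrast concrete, here is a minimal toy sketch of the older concatenative approach the paragraph describes: speech is assembled by joining small pre-recorded units, one per symbol. The unit inventory here is purely illustrative (sine tones standing in for recorded fragments); a real concatenative system would store thousands of recorded diphones, and modern neural systems do not work this way at all.

```python
import math
import struct
import wave

SAMPLE_RATE = 16000

def tone(freq_hz, duration_s=0.08):
    """Generate a short sine snippet standing in for one recorded speech unit."""
    n = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

# Toy "unit inventory": one synthetic snippet per letter.
# A real concatenative synthesizer stores recorded fragments instead.
UNITS = {chr(ord("a") + i): tone(220 + 20 * i) for i in range(26)}

def synthesize(text, path="out.wav"):
    """Concatenate per-letter units into one waveform and write it as a WAV file.

    This is the 'divide and reconnect' strategy the article contrasts
    with end-to-end neural voice modeling.
    """
    samples = []
    for ch in text.lower():
        samples.extend(UNITS.get(ch, [0.0] * 800))  # silence for unknown chars
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)  # 16-bit PCM
        f.setframerate(SAMPLE_RATE)
        f.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples))
    return path

synthesize("hello world")
```

The audible seams between units are exactly why this approach sounds robotic; neural synthesis instead learns a single model of the target voice, which is what makes modern imitations so convincing.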

What makes text-to-speech technology both remarkable and alarming is that it is not confined to any particular company or platform; it is a technique anyone can use. A Google search for terms like "text-to-speech AI" or "AI voice deepfakes" returns many websites offering the service, such as Resemble AI and Respeecher.

Imitating voices with artificial intelligence

Although these technologies are still in their infancy, they have attracted the attention, and the distrust, of millions of users around the world, especially after being used in real projects.

A clear early example came from a documentary. A film about chef Anthony Bourdain was released last July, and after its release the filmmakers revealed that they had relied on artificial intelligence to imitate Bourdain's voice.

Bourdain had written text to the director, and instead of presenting it in the traditional way, the filmmakers used artificial intelligence to read it in the man's own voice. Other useful applications have emerged as well: a startup announced it had successfully recreated the voice of actor Val Kilmer, whose vocal cords had been damaged by medical treatment.

The previous examples show beneficial, ethical uses of these techniques, but there are undoubtedly more harmful ones, such as mimicking someone's voice to deceive others, or even using it for outright fraud.

These technologies are also entering another, more commercial phase: in the future, celebrities and actors will be able to rent out their voices through paid online services. Veritone, for example, has launched such a service, granting access to and use of celebrity voices.

These technologies have even reached software and apps for casual users, such as the podcast app Descript, which lets content creators clone their own voices and use the copies to produce content.
