Creating an artificial intelligence system that converts thoughts into audible, understandable words


Researchers at the University of California, San Francisco have developed an advanced artificial intelligence system that records brain signals during speech and converts them into understandable words. Electrodes attached to the cerebral cortex were used to convert brain activity into words spoken by a computer. It is a remarkable development that could, in the near future, help people who cannot speak.

When we talk, the brain sends signals from the motor cortex to the muscles of the jaw, lips, and throat to coordinate their movements and produce sounds.

"The brain translates what you intend to say into movements of the vocal tract muscles, and that is what we are trying to decode," said Edward Chang of the University of California, San Francisco.

Decoding thoughts with electrodes

He and his colleagues developed a two-step process to decode these signals: an array of surgically placed electrodes records activity from a part of the brain that controls movement, and a computer then simulates the function of the human vocal tract to reproduce audible speech.
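To make that two-step data flow concrete, here is a minimal sketch in Python. It is not the team's model: the channel counts, feature dimensions, and the toy linear mappings are assumptions chosen purely for illustration, whereas the real system learns these mappings from recorded data using artificial neural networks, as the article describes below.

```python
# A minimal sketch of the two-step idea: brain activity -> vocal-tract
# movements -> audio features. NOT the team's actual model; all sizes and
# the toy linear maps below are assumptions made only to show the data flow.
import numpy as np

rng = np.random.default_rng(0)

N_ELECTRODES = 256    # assumed number of cortical recording channels
N_ARTICULATORS = 33   # assumed number of vocal-tract movement features
N_AUDIO_FEATS = 32    # assumed number of spectral features per audio frame

# Step 1: cortical activity -> estimated articulator movements (toy linear map).
step1 = rng.normal(size=(N_ELECTRODES, N_ARTICULATORS))
# Step 2: articulator movements -> audio features a vocoder could render.
step2 = rng.normal(size=(N_ARTICULATORS, N_AUDIO_FEATS))

def decode_articulation(ecog_frames: np.ndarray) -> np.ndarray:
    """Map each frame of recorded activity to estimated articulator positions."""
    return ecog_frames @ step1

def synthesize_audio_features(articulation: np.ndarray) -> np.ndarray:
    """Map articulator trajectories to frame-by-frame audio features."""
    return articulation @ step2

# One second of simulated neural data at 200 frames per second.
ecog = rng.normal(size=(200, N_ELECTRODES))
audio = synthesize_audio_features(decode_articulation(ecog))
print(audio.shape)  # (200, 32): one audio-feature vector per frame
```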

In their study, they worked with five participants who already had electrodes placed on the surface of the motor cortex as part of their epilepsy treatment. These participants were asked to read 101 sentences containing words and phrases covering all the sounds of English, while the team recorded the signals from the motor cortex as they spoke.

Speech production involves more than 100 muscles

More than 100 muscles are used to produce speech, and they are controlled by several groups of neurons operating simultaneously in very complex patterns. It is therefore not as simple as mapping the signal from one electrode to one muscle in order to interpret the brain's commands to the mouth. Building on previous work, the team developed an algorithm that reproduces the sound of spoken words from the sequence of signals sent to the lips, jaw, and tongue.
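The point about not pairing one electrode with one muscle can be illustrated with a small, hypothetical sketch: a decoder typically looks at a short window of activity across all electrodes at once, rather than at a single channel. The window length and array sizes below are assumptions, not values from the study.

```python
# Toy illustration: instead of one electrode -> one muscle, combine a short
# window of activity across ALL channels into one feature vector per time step.
import numpy as np

def sliding_windows(ecog_frames: np.ndarray, window: int = 10) -> np.ndarray:
    """Stack `window` consecutive frames of every channel into a single
    feature vector per time step, so a decoder sees joint, time-extended
    activity rather than isolated electrode readings."""
    n_frames, n_channels = ecog_frames.shape
    # Pad the start so every time step has a full window of history.
    padded = np.vstack([np.zeros((window - 1, n_channels)), ecog_frames])
    return np.stack([padded[t:t + window].ravel() for t in range(n_frames)])

ecog = np.random.default_rng(1).normal(size=(200, 256))  # assumed 256 channels
features = sliding_windows(ecog)
print(features.shape)  # (200, 2560): each row mixes 10 frames x 256 electrodes
```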

The team said that "good performance" could be achieved with as little as 25 minutes of recorded training speech, and that the decoder improved further with more data.

Creating audio from the signals

After generating audio from the signals, the team asked hundreds of English speakers to listen to the computer-generated sentences and report the words they understood.

Listeners transcribed 43% of the trials perfectly when given 25 word options to choose from, and 21% when given 50 options. With more training data and further refinement of the artificial neural networks, these results should gradually improve.
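As a rough illustration of how such a closed-vocabulary test might be scored, the hypothetical snippet below counts a trial as successful only when the listener's transcription matches the spoken sentence exactly; the example sentences are invented and the word-option lists are omitted.

```python
# Hypothetical scoring sketch: a trial counts only on an exact sentence match.
def exact_transcription_rate(references: list[str], transcripts: list[str]) -> float:
    """Fraction of trials whose transcript matches the reference exactly."""
    matches = sum(ref.strip().lower() == hyp.strip().lower()
                  for ref, hyp in zip(references, transcripts))
    return matches / len(references)

# Invented example sentences, purely for illustration.
references = ["the cat sat on the mat", "she sells sea shells"]
transcripts = ["the cat sat on the mat", "she sells sea bells"]
print(f"{exact_transcription_rate(references, transcripts):.0%}")  # 50%
```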

In the future, algorithms trained in this way might be able to decode another patient's words without extensive additional training.

When the team asked a participant to mime the words, moving their mouth without making any sound, the system did not work, he says.

Based on control signals only

The main advantage of this system over previous ones is that it relies only on control signals from the motor region of the brain, which continue to be sent even when a person is paralyzed. The device could therefore help people who were once able to speak but have lost that ability because of surgery or movement disorders that leave them without control of the muscles needed for speech.




