A team at UC Davis has created a brain-computer interface (BCI) that converts neural signals into near-instantaneous speech, offering hope to people who have lost the ability to speak because of neurodegenerative diseases such as ALS. The system, a brain-to-voice neuroprosthesis, uses 256 microelectrodes implanted in brain regions that control the speech muscles, particularly the ventral precentral gyrus. By recording neural activity while the patient attempted to speak sentences aloud, the researchers trained a deep-learning model to synthesize his intended speech.
To restore his original voice, the team also trained a voice-cloning model on recordings made before his condition progressed. The BCI reproduced not only words but also vocal intonation, allowing the patient to ask questions, emphasize specific words, and even sing short melodies. Speech was generated with a delay of only about 25 milliseconds, enabling near-instant communication. In tests, listeners understood about 56% of the BCI-generated speech, compared with 3% of the patient's unaided speech.
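The low-latency behavior described above can be sketched as a streaming loop that decodes each short frame of neural features the moment it arrives, rather than waiting for a whole sentence. The frame length, feature format, and decoder below are hypothetical stand-ins for illustration, not the study's actual model:

```python
# Toy sketch of a streaming brain-to-voice decoding loop (illustrative only:
# the real system uses a trained deep-learning model on 256-channel
# intracortical recordings; the frame size, features, and "decoder" here
# are hypothetical stand-ins).

FRAME_MS = 10  # hypothetical neural-feature frame length in milliseconds

def decode_frame(neural_frame):
    """Stand-in for the deep-learning decoder: maps one frame of neural
    features to one chunk of synthesized-audio parameters."""
    # A real decoder would be a neural network; here we just average channels.
    return sum(neural_frame) / len(neural_frame)

def stream_decode(neural_frames):
    """Decode frames one at a time, as a low-latency system must: each audio
    chunk is emitted as soon as its frame arrives, so the output lags the
    input by roughly one frame rather than by a full utterance."""
    audio = []
    for frame in neural_frames:
        audio.append(decode_frame(frame))  # emit immediately, don't buffer
    return audio

# 5 frames of fake 4-channel neural features (50 ms of "signal")
frames = [[0.1, 0.2, 0.3, 0.4]] * 5
print(stream_decode(frames))
```

The key design point is that latency is bounded by one frame of processing, which is how a system can speak back within tens of milliseconds of the attempted speech.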
While the technology remains experimental and is not yet ready for everyday use, it demonstrates the potential to restore speech to people with paralysis or other speech impairments. Future work may include higher electrode counts, more advanced AI models, and testing in fully locked-in patients or those with language disorders.
