
AI brain implant enables bilingual communication in stroke survivors

In a breakthrough, scientists have succeeded in enabling stroke survivors to communicate in Spanish and English using neuroprosthetic implants. As reported in the NVIDIA Technology Blog, this development, detailed by the lab of Dr. Edward Chang at the University of California, San Francisco, represents a significant leap forward in medical technology.

Research Highlights

The research, published in Nature Biomedical Engineering, builds on Dr. Chang’s initial 2021 work, which demonstrated that brain activity could be translated into words for individuals with severe paralysis. The recent study focused on a patient named Pancho, who lost the ability to speak after a stroke. The neuroprosthesis uses a bilingual AI model to decode Pancho’s brain activity, translate it into Spanish or English words, and display them on a computer screen.
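To make the idea concrete, here is a minimal sketch of such a decode-and-display pipeline. Everything in it (the toy vocabularies, the linear scorer, and the feature size) is an illustrative assumption; the study’s actual decoder is far more sophisticated and is not described in the article at this level of detail:

```python
import numpy as np

# Toy vocabularies; the study's real vocabularies are larger and different.
ENGLISH_VOCAB = ["hello", "water", "family"]
SPANISH_VOCAB = ["hola", "agua", "familia"]

def decode_window(neural_features, weights):
    """Map one window of neural features to the most likely word.

    Scores every candidate word in both languages and returns the single
    best-scoring one, so the output language is chosen by the decoder
    itself rather than by a manual language switch.
    """
    scores = {}
    for lang, vocab in (("en", ENGLISH_VOCAB), ("es", SPANISH_VOCAB)):
        logits = weights[lang] @ neural_features  # linear scorer per language
        for word, logit in zip(vocab, logits):
            scores[(lang, word)] = logit
    lang, word = max(scores, key=scores.get)
    return f"[{lang}] {word}"

# Toy usage: random weights and one random 128-dimensional feature window.
rng = np.random.default_rng(0)
weights = {"en": rng.normal(size=(3, 128)), "es": rng.normal(size=(3, 128))}
print(decode_window(rng.normal(size=128), weights))
```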

Technical Implementation

To achieve this, the researchers trained a neural network model on Pancho’s brain activity using the NVIDIA cuDNN-accelerated PyTorch framework and NVIDIA V100 GPUs. A neuroprosthesis implanted on the surface of Pancho’s brain records activity that the model differentiates between Spanish and English communication. This differentiation is essential for accurately translating his thoughts into the language he intends to use.
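The article does not spell out the model architecture, so the following is only a generic sketch of what cuDNN-accelerated PyTorch training on a GPU looks like. The layer sizes, class count, and synthetic data are placeholder assumptions, not the lab’s actual setup:

```python
import torch
import torch.nn as nn

torch.backends.cudnn.benchmark = True  # let cuDNN auto-tune fast kernels
device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder sizes: 128-channel neural features per window, 50 word classes.
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 50),
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in for recorded brain activity and word labels.
features = torch.randn(512, 128, device=device)
labels = torch.randint(0, 50, (512,), device=device)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)  # predict word class per window
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")
```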

During the study, Pancho attempted to read and say words in both languages. The scientists recorded his brain activity and trained an AI model to translate this activity into the corresponding words. Remarkably, the AI model achieved 75% accuracy in deciphering Pancho’s sentences.
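As a simple illustration of how such an accuracy figure can be computed, the snippet below scores decoded sentences against their targets. The sentences here are invented stand-ins, not the study’s actual stimuli:

```python
# Illustrative only: sentence-level accuracy as the fraction of sentences
# decoded exactly right. The example sentences are invented.
decoded = ["hola como estas", "i am thirsty", "hola familia", "i am good"]
target  = ["hola como estas", "i am thirsty", "hola familia", "i am tired"]

correct = sum(d == t for d, t in zip(decoded, target))
print(f"sentence accuracy: {correct / len(target):.0%}")  # 75% here
```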

Implications and Future Prospects

This research holds the potential to significantly improve communication methods for individuals who cannot speak or rely on alternative communication devices. The longevity of Pancho’s neuroprosthesis, implanted four years ago, highlights the potential long-term impact of this technology.

One of the study’s key findings concerns how the brain manages verbal communication. Contrary to previous neuroscience research showing that different languages are processed in separate brain regions, this study indicates that speech production in different languages may originate from the same brain region. These insights could pave the way for more advanced neuroprosthetic devices that support bilingual individuals.

Additionally, the study highlights the adaptability of generative AI models, which can learn and improve over time and play a critical role in translating brain activity into speech. Alexander Silva, the study’s lead author, expressed optimism about the future of this technology, noting the profound impact it could have on patients like Pancho.

For those interested in digging deeper into the study, the full research paper is available in Nature Biomedical Engineering. Additional information about Dr. Chang’s previous research on converting brain waves into words can be found on the NVIDIA Technology Blog.
