Now AI Gives Paralysed Man Ability To Communicate In This Groundbreaking Research

Researchers at the University of California, San Francisco (UCSF) have leveraged artificial intelligence to give a paralysed man the ability to communicate by translating his brain signals into computer-generated writing. The study was published in The New England Journal of Medicine.

People with speech impairments use touchscreens, keyboards or speech-generating devices to communicate. But such assistive technologies are of limited help to people who are paralysed.

In the US alone, 2.8 million people sustain a traumatic brain injury each year. Such injuries may result in hearing loss and vestibular and central auditory problems.

A ray of hope 

Senior author Edward Chang, the Joan and Sanford Weill Chair of Neurological Surgery, and his team worked with a man who lost the ability to speak after a brainstem stroke that followed a severe car accident in 2003.

The researchers were not sure whether his brain retained neural activity linked to speech. To track his brain signals, a neuroprosthetic device consisting of electrodes was positioned on the left side of his brain, across several regions known to be involved in speech processing.

A question was displayed to the participant, and the device recorded his brain activity while he attempted to speak a reply. Meanwhile, a computer algorithm translated the brain activity patterns into words and sentences in real time. “To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralysed and cannot speak,” said Chang.
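The article does not describe the decoding pipeline in detail, but the passage above implies a detect-then-classify loop: flag windows of neural data that contain a speech attempt, then classify the attempted word. The sketch below is a toy illustration of that loop in Python with NumPy; every function, threshold, and the five-word vocabulary are hypothetical stand-ins, not the study's published code.

```python
# Toy sketch of a detect-then-classify decoding loop (hypothetical; the
# study's actual models and signal processing are not published here).
import numpy as np

VOCAB = ["hello", "thirsty", "yes", "no", "family"]  # illustrative words only

def detect_attempt(window: np.ndarray) -> bool:
    """Toy speech-attempt detector: flag windows whose signal energy
    crosses an arbitrary threshold."""
    return float(np.mean(window ** 2)) > 1.0

def classify_word(window: np.ndarray) -> str:
    """Toy word classifier: map simple per-channel statistics to a
    vocabulary entry. A real system would use a trained neural network."""
    idx = int(np.argmax(window.sum(axis=0))) % len(VOCAB)
    return VOCAB[idx]

def decode_stream(windows) -> list:
    """Run the real-time loop: one window of neural data per attempt."""
    sentence = []
    for window in windows:
        if detect_attempt(window):
            sentence.append(classify_word(window))
    return sentence

# Random arrays stand in for recorded brain activity (200 samples x 128 channels).
rng = np.random.default_rng(0)
windows = [rng.normal(0.0, 1.2, size=(200, 128)) for _ in range(3)]
print(decode_stream(windows))
```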

GPUs to the rescue 

David Moses, one of the lead authors of the study and a postdoctoral engineer in the Chang lab, said in a press release, “Our models needed to learn the mapping between complex brain activity patterns and intended speech.”

To decode the responses from the participant’s brain activity, the team built speech-detection and word-classification models. The researchers trained, fine-tuned, and evaluated the models using the cuDNN-accelerated TensorFlow framework and 32 NVIDIA V100 Tensor Core GPUs.
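The article names the framework but not the architecture. As a hedged illustration, a minimal word classifier over fixed-length windows of multi-channel neural signals could be assembled in TensorFlow/Keras as below; the electrode count, window length, vocabulary size, and every layer choice are assumptions, not the published model.

```python
# Minimal, hypothetical word-classification model in TensorFlow/Keras.
# All shapes and layer choices are illustrative assumptions; the study's
# actual architecture is not described in this article.
import tensorflow as tf

NUM_CHANNELS = 128   # assumed electrode count
WINDOW_STEPS = 200   # assumed samples per attempted-word window
VOCAB_SIZE = 50      # assumed size of the decoded vocabulary

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW_STEPS, NUM_CHANNELS)),
    # Temporal convolution extracts local features from the raw signals.
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    # A recurrent layer models how activity evolves across the window.
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(128)),
    tf.keras.layers.Dropout(0.5),
    # One probability per candidate word.
    tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

On multi-GPU hardware like the V100s mentioned above, training a model of this kind could be parallelised by building it inside a tf.distribute.MirroredStrategy scope.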

Study co-lead Sean Metzger said utilising neural networks was essential to achieving the classification and detection performance, and that the final product was the result of a lot of experimentation. “The GPUs helped us make changes, monitor progress, and understand our dataset,” he added.

A twist in the tale  

The UCSF team ran 50 training sessions, decoding over 1,000 words at a rate of 18 words per minute, with up to 93 percent accuracy and a median accuracy of 75 percent. The latest study builds on previous work by Chang and his team, in which they developed a deep learning method for decoding and converting brain signals. Unlike the participant in the latest work, the participants in that previous study were able to speak.

Earlier, Facebook pulled the plug on its invasive brain-computer interface (BCI) research. Facebook Reality Labs (FRL) was established in 2017, and its BCI project harboured an ambitious long-term goal: developing a silent, non-invasive speech interface activated by thinking. Facebook’s Project Steno, kick-started in 2019, was carried out in the Chang Lab at UCSF.

Facebook clarified that it has no interest in developing products that require implanted electrodes. Instead, the company is looking to focus on wrist-based wearable technology to read neural signals. “We have decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market: wrist-based devices powered by electromyography,” said Facebook.

Wrapping up

The new approach holds the potential to advance existing methods of assisted communication, improving both the ability to communicate and the quality of life of paralysed patients with speech disorders.

Stanford University professor Maneesh Agrawala is working on a device to give people their voices back after laryngectomy surgery. “We plan to record a patient’s voice before the surgery and then use that pre-surgery recording to convert their electrolarynx voice back into their pre-surgery voice,” said Agrawala.
