
5 Ways In Which AI Is Improving Accessibility For The Hearing Impaired

Image source: DeepMind

With AI permeating all aspects of our lives, the technology's scope for helping people with hearing disabilities has grown. Multiple wearable devices with artificial intelligence (AI), machine learning (ML) and natural language processing (NLP) embedded in them are available in the market, making life easier for people with hearing disabilities.

In this article, we look at some of the top use cases of AI technology helping the hearing impaired:

Language translation and captioning: Tech giants are already working in this field as part of their larger corporate social responsibility programmes. Microsoft, as part of its inclusion mission, has developed headsets embedded with its AI-powered communication technology, Microsoft Translator, for the hearing impaired. The system uses automatic speech recognition to convert raw spoken language – ums, stutters and all – into fluent, punctuated text, and the service is available in more than 60 languages. To promote inclusiveness, Microsoft has also partnered with educational institutes to improve deaf students' access to spoken and sign language. The company is also believed to have committed $25 million to its AI for Accessibility programme.
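
The captioning workflow described above – microphone audio in, punctuated text out – can be approximated with off-the-shelf tools. The sketch below is a minimal illustration of the speech-to-caption loop using the open-source SpeechRecognition Python library, not Microsoft Translator's actual API.

```python
# Minimal live-captioning sketch using the open-source SpeechRecognition
# library (an illustration of the ASR-to-caption loop, not Microsoft
# Translator's actual API).
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    # Calibrate for background noise, then caption short utterances in a loop.
    recognizer.adjust_for_ambient_noise(source, duration=1)
    print("Listening... press Ctrl+C to stop.")
    while True:
        audio = recognizer.listen(source, phrase_time_limit=5)
        try:
            # Google's free web recogniser; any ASR backend could be swapped in.
            caption = recognizer.recognize_google(audio)
            print(f"[CAPTION] {caption}")
        except sr.UnknownValueError:
            # Speech was unintelligible; skip and keep listening.
            continue
        except sr.RequestError as err:
            print(f"ASR service unavailable: {err}")
            break
```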


Voice assistant for the deaf: Researchers have slightly tweaked popular voice assistants such as Amazon Echo and Apple's Siri to further development in this field.

To provide a more nuanced hearing experience, several companies have developed auditory assistants powered by AI and NLP. Cochlear, one of the leading hearing implant providers, patented its AI-based assistant, FOX, in 2017. The device uses speech perception and other patient outcome tests as inputs to its fitting optimisation algorithm, in order to maximise outcomes for patients.


In addition, outcome testing for the device is conducted using the Auditory Speech Sounds Evaluation (ASSE) test suite, which links the clinician's computer directly to the Cochlear speech processors through a proprietary connection.
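
FOX's internals are proprietary, but the general idea of driving a fitting algorithm from patient outcome tests can be illustrated with a simple hill-climbing loop: tweak a processor setting, re-run a (simulated) speech-perception test, and keep the change only if the score improves. Every name and the scoring function in the sketch below are invented for illustration.

```python
# Hypothetical illustration of outcome-driven fitting optimisation:
# tweak one processor setting at a time and keep changes that improve a
# (simulated) speech-perception score. FOX's real algorithm is proprietary;
# run_speech_perception_test() is a stand-in for clinical measurements.
import random

def run_speech_perception_test(settings):
    """Stand-in for a clinical outcome test (score 0-100).
    In practice this would come from the patient's measured results."""
    ideal = {"gain": 42.0, "compression": 3.0}
    error = sum(abs(settings[k] - ideal[k]) for k in ideal)
    return max(0.0, 100.0 - 10.0 * error + random.uniform(-2, 2))

def optimise_fitting(settings, steps=50):
    best_score = run_speech_perception_test(settings)
    for _ in range(steps):
        param = random.choice(list(settings))
        candidate = dict(settings)
        candidate[param] += random.uniform(-1, 1)   # propose a small tweak
        score = run_speech_perception_test(candidate)
        if score > best_score:                      # keep only improvements
            settings, best_score = candidate, score
    return settings, best_score

settings, score = optimise_fitting({"gain": 38.0, "compression": 4.5})
print(f"Recommended settings: {settings}, predicted score: {score:.1f}")
```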

Closed captioning personalization: Several companies have used AI to translate audio into text instantaneously. Recently, a Netherlands-based startup introduced GnoSys, an app that can translate sign language into text and speech. Described as a Google Translator for the deaf and mute, the app uses computer vision to recognise sign language in video and smart algorithms to render it as speech or text. According to the company, the app can also be used in B2B setups that aim to employ deaf and mute employees.
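
GnoSys's models are not public, but the general pipeline described above – computer vision on video frames, a gesture classifier, then text or speech output – can be sketched as follows. The frame classifier here is a hypothetical placeholder; OpenCV handles capture and pyttsx3 handles speech synthesis.

```python
# Skeleton of a sign-to-speech pipeline in the spirit of the GnoSys app:
# capture video frames, classify each into a sign/word, and speak the result.
# classify_sign() is a hypothetical placeholder for a trained vision model.
import cv2          # pip install opencv-python
import pyttsx3      # pip install pyttsx3

def classify_sign(frame):
    """Placeholder: a real system would run a model trained on sign-language
    video here and return the recognised word (or None)."""
    return None

def run_pipeline(camera_index=0):
    capture = cv2.VideoCapture(camera_index)
    tts = pyttsx3.init()
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            word = classify_sign(cv2.resize(frame, (224, 224)))
            if word:
                print("Recognised:", word)
                tts.say(word)          # speak the recognised sign aloud
                tts.runAndWait()
    finally:
        capture.release()

if __name__ == "__main__":
    run_pipeline()
```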

Enhanced language prediction: Applying AI to brain imaging to better understand health conditions has become a growing trend in medical technology, and researchers and medical practitioners have been steadily broadening its applications.

One such development is the use of AI to better understand language prediction in deaf children. Researchers from the Chinese University of Hong Kong and the Ann & Robert H Lurie Children's Hospital of Chicago applied ML to predict how well deaf children can master language after cochlear implant surgery. The researchers used MRI scans to capture abnormal patterns before surgery and developed an ML algorithm to predict subsequent language development.
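
The study's exact features and model are not described here, but the general approach – train a classifier on pre-surgery brain-imaging features to predict post-implant language improvement – can be illustrated with scikit-learn on synthetic data. The features and labels below are invented purely for the example.

```python
# Illustrative sketch (not the study's actual method): predict post-implant
# language improvement from pre-surgery MRI-derived features using a
# scikit-learn classifier trained on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for imaging features (e.g. regional brain measurements).
n_children, n_features = 200, 12
X = rng.normal(size=(n_children, n_features))
# Synthetic label: 1 = good language improvement, 0 = limited improvement,
# loosely tied to the first two features so there is something to learn.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_children) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print(f"Cross-validated accuracy on synthetic data: {scores.mean():.2f}")
```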

Improve lip reading: One of the challenges people with disabilities face is the lack of readily available, disability-friendly content on the internet. By developing lip-reading algorithms, Google's DeepMind has built an AI system that can generate closed captions for deaf users. To train the system, DeepMind's algorithms watched more than 5,000 hours of television and identified as many as 17,500 unique words. As a result of this intensive training, the system could outdo professional lip-readers, transcribing 46.8 per cent of words without error. The researchers believe the technology has great potential to improve hearing aids, enable silent dictation in public spaces and support speech recognition in noisy environments. A toy sketch of what such a model can look like follows below.

Such technology can vastly help the deaf community interpret readily available visual content and improve the accessibility of that content for the community.
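
DeepMind's lip-reading system is far larger than anything shown here, but the basic shape of a visual speech-recognition model – convolutions over a sequence of mouth-region frames, a recurrent layer, and a word classifier – can be sketched in Keras. The input shape and vocabulary size below are arbitrary choices for illustration only.

```python
# Toy visual speech recognition (lip-reading) model skeleton in Keras.
# Vastly smaller than DeepMind's system; shapes and vocabulary size are
# arbitrary and chosen only to show the overall architecture.
import tensorflow as tf
from tensorflow.keras import layers

NUM_FRAMES, HEIGHT, WIDTH = 75, 50, 100   # a short clip of mouth-region crops
VOCAB_SIZE = 17500                        # roughly the vocabulary size reported

model = tf.keras.Sequential([
    layers.Input(shape=(NUM_FRAMES, HEIGHT, WIDTH, 1)),
    layers.Conv3D(32, kernel_size=(3, 5, 5), activation="relu", padding="same"),
    layers.MaxPool3D(pool_size=(1, 2, 2)),
    layers.Conv3D(64, kernel_size=(3, 5, 5), activation="relu", padding="same"),
    layers.MaxPool3D(pool_size=(1, 2, 2)),
    # Collapse spatial dimensions per frame, keeping the time dimension.
    layers.TimeDistributed(layers.Flatten()),
    layers.Bidirectional(layers.GRU(128)),
    layers.Dense(VOCAB_SIZE, activation="softmax"),  # predict one word per clip
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```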
