AI model from Maastricht University Claims to Detect COVID-19 in People’s Voices

The researchers used a voice analysis technique called Mel-spectrogram analysis, which identifies different voice features such as loudness, variation, and power over time.
Artificial intelligence (AI) can detect COVID-19 infection from people's voices recorded through a mobile phone app, claim researchers from Maastricht University in the Netherlands. The research will be presented on September 5, 2022 at the European Respiratory Society International Congress in Barcelona, Spain.

The AI model used in this research is claimed to be more accurate than rapid antigen tests, as well as quicker and easier to use. It is aimed at detecting infection in low-income countries, where PCR tests are often expensive and difficult to distribute.

Researcher Wafaa Aljbawi from the Institute of Data Science at Maastricht University said that the AI model was accurate 89% of the time, whereas the accuracy of lateral flow tests varies widely depending on the brand.

Aljbawi added, "These promising results suggest that simple voice recordings and fine-tuned AI algorithms can potentially achieve high precision in determining which patients have COVID-19 infection. Such tests can be provided at no cost and are simple to interpret. They could be used, for example, at the entry points for large gatherings, enabling rapid screening of the population."

The team obtained data from the crowd-sourced COVID-19 Sounds App developed by the University of Cambridge. It contains 893 audio samples from 4,352 healthy and unhealthy participants, 308 of whom had tested positive for COVID-19.

The app is installed on the participant's mobile phone, where they report demographic information and medical history and record respiratory sounds. Participants are asked to cough three times, breathe deeply through their mouth three to five times, and read a short sentence on the screen three times.

To extract features from these recordings, the researchers used a voice analysis technique called mel-spectrogram analysis, which identifies different voice features such as loudness, variation, and power over time.
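
For readers curious what such an analysis involves, below is a minimal sketch of how a mel-spectrogram can be computed from a voice recording using the open-source librosa library. The library choice, file name, and parameter values are illustrative assumptions and are not drawn from the published study.

```python
# Illustrative sketch (not the researchers' actual pipeline) of computing a
# mel-spectrogram from a voice recording with librosa. File name and
# parameters are hypothetical.
import librosa
import numpy as np

# Load a voice recording (hypothetical file), resampled to 22.05 kHz mono.
signal, sr = librosa.load("voice_sample.wav", sr=22050, mono=True)

# Compute the mel-spectrogram: a short-time Fourier transform projected onto
# a mel-scaled filter bank, capturing how energy in perceptually spaced
# frequency bands evolves over time.
mel = librosa.feature.melspectrogram(
    y=signal, sr=sr, n_fft=2048, hop_length=512, n_mels=128
)

# Convert power to decibels, a common input representation for audio
# classifiers analysing loudness, variation, and power over time.
mel_db = librosa.power_to_db(mel, ref=np.max)

print(mel_db.shape)  # (n_mels, n_frames)
```

The resulting two-dimensional array can then be fed to a machine learning model as an image-like representation of the voice sample.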

Bhuvana Kamath
I am fascinated by technology and AI’s implementation in today’s dynamic world. Being a technophile, I am keen on exploring the ever-evolving trends around applied science and innovation.
