
This new AI model can tell where the sound came from

MIT neuroscientists have developed a computer model that can localize sounds. The model is built from several convolutional neural networks and performs the task as well as humans do.

The human brain is tuned to recognize particular sounds and determine the direction they come from. It estimates a sound’s location by comparing the differences between the signals that reach the right and left ears. “We now have a model that can actually localize sounds in the real world,” said Josh McDermott, an associate professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “And when we treated the model like a human experimental participant and simulated this large set of experiments that people had tested humans on in the past, what we found over and over again is that the model recapitulates the results that you see in humans.”
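The cue described here is the tiny difference in arrival time (and level) between the two ears. As a rough illustration only, and not code from the study, the interaural time difference can be estimated by cross-correlating the left- and right-ear signals:

```python
# Minimal sketch (not from the paper): estimate the interaural time
# difference (ITD), one of the binaural cues the brain relies on, by
# cross-correlating the left- and right-ear signals.
import numpy as np

def estimate_itd(left, right, sample_rate):
    """Return the lag, in seconds, of the right-ear signal relative to the
    left-ear signal; positive means the sound reached the left ear first."""
    corr = np.correlate(right, left, mode="full")
    lag_samples = np.argmax(corr) - (len(left) - 1)
    return lag_samples / sample_rate

# Toy check: a noise burst that reaches the left ear 13 samples (~0.3 ms) earlier.
rng = np.random.default_rng(0)
source = rng.standard_normal(2048)
delay = 13
left = source
right = np.concatenate([np.zeros(delay), source[:-delay]])
print(estimate_itd(left, right, 44100))  # ~0.0003 s
```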

McDermott is the senior author of the paper, which appeared in Nature Human Behaviour. The paper’s lead author is MIT graduate student Andrew Francl. “The study also found that humans’ ability to perceive location is adapted to the specific challenges of the environment,” added McDermott.

Convolutional neural networks are also used extensively to model the human visual system.

Since convolutional neural networks can be designed with many different architectures, the MIT team first used a supercomputer to train and test about 1,500 different models to find the ones that would work best for localization. The researchers narrowed these down to 10 models, which they trained further and used for the subsequent studies.
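The article does not spell out the search procedure, but the general idea of sampling many candidate architectures and keeping the best performers can be sketched as follows; the hyperparameter ranges and the scoring function are hypothetical placeholders, not the study’s actual setup:

```python
# Hedged sketch of a random architecture search in the spirit described above.
# Ranges and scoring are illustrative only.
import random

def sample_architecture(rng):
    """Draw one candidate CNN configuration at random."""
    n_layers = rng.randint(3, 8)
    return {
        "conv_layers": n_layers,
        "channels": [rng.choice([32, 64, 128, 256]) for _ in range(n_layers)],
        "kernel_size": rng.choice([3, 5, 7]),
        "pooling": rng.choice(["max", "average"]),
    }

def train_and_evaluate(arch):
    """Hypothetical stand-in: in the study, each candidate was trained on
    simulated binaural audio and scored on how well it localized sounds."""
    return random.random()  # placeholder for a validation localization error

rng = random.Random(0)
candidates = [sample_architecture(rng) for _ in range(1500)]
best_ten = sorted(candidates, key=train_and_evaluate)[:10]  # kept for further training
```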

To train the models, the researchers created a virtual world in which they controlled the size of the room and the reflection properties of its walls. They used over 400 training sounds, including human voices, animal sounds, machine sounds and natural sounds. The researchers also ensured that the models started with the same information human ears provide, including the way sound is reflected and altered by the folds of the outer ear. They simulated this effect by running each sound through a specialized mathematical function.
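One plausible reading of that preprocessing step, offered here as an assumption rather than the authors’ pipeline, is that each mono training sound is filtered with an ear-specific impulse response for each simulated direction. A minimal sketch, using random placeholder filters in place of measured head-related impulse responses:

```python
# Sketch of the kind of ear-filtering described above (an assumption, not the
# authors' code): convolve a mono training sound with left- and right-ear
# impulse responses so the model receives the ear-altered signals a listener
# would hear.
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a binaural (left, right) pair by convolving the mono source
    with each ear's impulse response."""
    return np.convolve(mono, hrir_left), np.convolve(mono, hrir_right)

rng = np.random.default_rng(1)
source = rng.standard_normal(44100)           # 1 s of a training sound at 44.1 kHz
hrir_left = rng.standard_normal(256) * 0.01   # placeholder left-ear filter
hrir_right = rng.standard_normal(256) * 0.01  # placeholder right-ear filter
left_ear, right_ear = spatialize(source, hrir_left, hrir_right)  # fed to the model
```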

To test this, the researchers placed a mannequin with microphones in its ears in an actual room, played sounds from different directions, and then fed those recordings into the models. The models performed very similarly to humans when asked to localize these sounds. “Although the model was trained in a virtual world, when we evaluated it, it could localize sounds in the real world,” Francl said.

The researchers are now applying the model to other aspects of audition, such as pitch perception and speech recognition, and the approach could also be used to study other cognitive phenomena, such as the limits on what a person can pay attention to or remember. The research was funded by the National Science Foundation and the National Institute on Deafness and Other Communication Disorders.
