MIT Releases New Framework For Machines To Work As Radiologists

To improve machine learning algorithms' interpretive abilities, scientists are tapping an underused resource: the radiology reports that accompany medical images.


Researchers at the MIT Computer Science & Artificial Intelligence Lab (CSAIL), United States, are employing an underused resource to help machine learning algorithms better analyse medical images: the radiology reports that accompany those images.

According to MIT News, accurately evaluating an X-ray or other medical image is critical to a patient’s health and may even save a life. Because such an evaluation depends on the availability of a trained radiologist, however, a speedy response is not always possible.


Ruizhi “Ray” Liao, a postdoctoral researcher at MIT’s CSAIL, said, “Our goal is to teach machines that are capable of recreating what radiologists do on a daily basis.”

While the concept of using computers to interpret images is not new, the MIT-led team is enhancing the interpretive capabilities of machine learning algorithms with a previously underutilised resource: the vast body of radiology reports that radiologists write alongside medical images in routine clinical practice. The team also leverages a notion from information theory called mutual information, a statistical measure of the interdependence of two variables, to bolster the approach.
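For context, mutual information is a standard quantity from information theory; the definition below is textbook background rather than a formula taken from the MIT paper. It measures how much knowing one variable reduces uncertainty about the other, and it is symmetric in the two variables:

    I(X; Y) = \sum_{x, y} p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)} = H(X) - H(X \mid Y) = I(Y; X)

Here X and Y can be read as the numeric representations of an image and of its report; the larger I(X; Y), the more one representation tells you about the other.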

Here is how the approach works (a rough code sketch follows the list):

  • First, a neural network is trained to assess the severity of a disease, such as pulmonary oedema, by being shown a large number of X-ray images of patients’ lungs together with a doctor’s severity rating for each case. 
  • The network encodes that information as a series of numbers. A separate neural network does the same for the text of the radiology report, representing its information with a different set of numbers. 
  • A third neural network then integrates the information from images and text in a coordinated way, maximising the mutual information between the two datasets.
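To make the pipeline concrete, here is a minimal, hypothetical sketch in PyTorch. The encoder architectures, dimensions, and the use of an InfoNCE-style contrastive loss (a common lower bound on mutual information) are illustrative assumptions, not the authors' actual models or estimator; the role of the third, integrating network is played here by the contrastive comparison of the two sets of numbers.

    # Hypothetical sketch: maximise mutual information between X-ray and report
    # representations with an InfoNCE-style lower bound (an illustrative choice;
    # the MIT work may use a different estimator). Requires PyTorch.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ImageEncoder(nn.Module):
        """Maps an X-ray image to a vector of numbers (its representation)."""
        def __init__(self, dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, dim),
            )

        def forward(self, x):
            return self.net(x)

    class TextEncoder(nn.Module):
        """Maps a tokenised radiology report to a vector of numbers."""
        def __init__(self, vocab_size=10000, dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.proj = nn.Linear(dim, dim)

        def forward(self, tokens):
            return self.proj(self.embed(tokens).mean(dim=1))  # average over words

    def info_nce(img_vecs, txt_vecs, temperature=0.1):
        """InfoNCE loss: a lower bound on the mutual information between the two
        representations. Matched image/report pairs should score higher than the
        mismatched pairs formed within the same batch."""
        img_vecs = F.normalize(img_vecs, dim=-1)
        txt_vecs = F.normalize(txt_vecs, dim=-1)
        logits = img_vecs @ txt_vecs.t() / temperature   # pairwise similarities
        targets = torch.arange(len(img_vecs))            # i-th image matches i-th report
        return F.cross_entropy(logits, targets)

    # Toy training step on random data, just to show the wiring.
    image_enc, text_enc = ImageEncoder(), TextEncoder()
    params = list(image_enc.parameters()) + list(text_enc.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-4)

    images = torch.randn(8, 1, 224, 224)        # batch of 8 chest X-rays (fake data)
    reports = torch.randint(0, 10000, (8, 40))  # batch of 8 tokenised reports (fake data)

    optimizer.zero_grad()
    loss = info_nce(image_enc(images), text_enc(reports))
    loss.backward()
    optimizer.step()

The intent of training of this kind is that the reports guide learning only while the model is being built; once trained, the image pathway alone could be applied to a new X-ray for which no report exists.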

Polina Golland, a principal investigator at CSAIL, stated, “When the mutual information between images and text is high, images are highly predictive of the text, and the text is highly predictive of the images.”

The work was supported by the National Institutes of Health’s National Institute of Biomedical Imaging and Bioengineering, Wistron, the MIT-IBM Watson AI Lab, the MIT Deshpande Center for Technological Innovation, the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (J-Clinic), and the MIT Lincoln Lab.

Dr. Nivash Jeevanandam
Nivash holds a doctorate in information technology and has been a research associate at a university and a development engineer in the IT industry. Data science and machine learning excite him.
