What Happened When Google Tested Its AI In The Real World

Back in 2016, Google’s AI team took on one of the fastest-growing illnesses of our time — diabetic eye disease. Diabetic retinopathy (DR), an eye condition that affects people with diabetes, is the fastest-growing cause of blindness, with nearly 415 million diabetic patients at risk worldwide.

Google’s researchers developed a deep learning algorithm that can interpret signs of DR in retinal photographs, potentially helping doctors screen more patients, especially in communities where resources are limited.

This deep learning algorithm showed great promise, with results on par with those of ophthalmologists.

After three years of thorough testing and tweaking, the researchers decided to put their model into practice. For this, they chose Thailand, where there are only about 1,400 eye doctors for approximately five million diabetics.

How Did It Go

Google AI, in partnership with Thailand’s Ministry of Public Health, conducted field research in clinics across the provinces of Pathum Thani and Chiang Mai over a period of eight months.

During this period, the researchers made regular visits to 11 clinics, observed how the nurses there handled eye screenings, and interviewed them to gain a deeper understanding of the process. In the course of their trials, they found significant fundamental issues in the way the deep learning system was deployed: though the model was improved regularly, the challenges came from factors external to it.

For instance, some images captured during screening may have issues such as blurring or dark areas. An AI system might conservatively label some of these images “ungradable” because such issues can obscure critical anatomical features required to provide a definitive result. For clinicians, by contrast, the gradability of an image may vary depending on their own clinical set-up or experience.
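The quality gating described above can be sketched as a simple heuristic. The following is a minimal, hypothetical Python sketch of such a gate — the checks (brightness, edge variance) and thresholds are illustrative assumptions, not Google’s actual criteria:

```python
import numpy as np

def grade_image_quality(image, dark_thresh=0.15, blur_thresh=50.0):
    """Toy quality gate: flag a retinal image as 'ungradable' if it is
    too dark or too blurry. Thresholds are illustrative only."""
    # Collapse colour images to a single grayscale channel.
    gray = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
    # Darkness check: mean intensity, normalised to [0, 1].
    brightness = gray.mean() / 255.0
    # Blur check: variance of a Laplacian response
    # (low variance means few edges, i.e. a blurry image).
    lap_kernel = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    windows = np.lib.stride_tricks.sliding_window_view(gray, (3, 3))
    lap = (windows * lap_kernel).sum(axis=(-1, -2))
    sharpness = lap.var()
    if brightness < dark_thresh:
        return "ungradable: too dark"
    if sharpness < blur_thresh:
        return "ungradable: too blurry"
    return "gradable"
```

A production system would learn such a gate from data rather than hand-tune thresholds, which is exactly why a model trained on high-quality images can be stricter than the conditions of a busy clinic allow.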

Two images of the same eye, with varied lighting

The system’s high standards for image quality were at odds with the consistency and quality of the images the nurses were routinely capturing under the constraints of the clinic.

When an image was ungradable, the nurses took a second image of the same eye. However, according to the report, this caused discomfort for patients and added to the nurses’ frustration. The team therefore explored solutions such as darkening the screening room to improve lighting conditions and capture higher-quality images.

Beyond image quality, even the speed of the internet connection played a major role in the time taken per patient.

This shows that no matter how good a model is, challenges surface once it is deployed in the real world — more so in a setting like healthcare.

Key Findings & Recommendations

In a recent report, the researchers elaborated on their research. The findings can be summarised as follows:

  • In the case of user-centred applications, product design should involve people who would interact with the technology. 
  • In the case of AI systems in healthcare, we must also factor in environmental differences like lighting, which vary among clinics and can impact image quality. Just as an experienced clinician might know how to account for these variables when assessing an image, AI systems also need to be trained to handle such variation.
  • Building an AI tool is a challenge, as any disagreements between the system and the clinician can lead to frustration. 
  • This study found that the AI system could empower nurses to confidently and immediately identify a positive screening, resulting in quicker referrals to ophthalmologists.
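As a rough illustration of the last finding — a positive screening translating into an immediate referral — here is a hypothetical mapping from a five-point DR severity scale (a common clinical convention in which “moderate” or worse is considered referable) to a decision. The function and grade names are assumptions for illustration, not the study’s actual logic:

```python
# Common five-point DR severity scale, mildest to most severe.
DR_GRADES = ["none", "mild", "moderate", "severe", "proliferative"]

def referral_decision(grade: str) -> str:
    """Map a model's DR grade to an action: grades of 'moderate'
    or worse are treated as referable, per common convention."""
    if grade not in DR_GRADES:
        raise ValueError(f"unknown grade: {grade}")
    if DR_GRADES.index(grade) >= DR_GRADES.index("moderate"):
        return "refer to ophthalmologist"
    return "routine rescreening"
```

The value for nurses in the study was that such a decision arrives in minutes at the point of care, rather than weeks later after manual grading.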

Future Direction

Although the researchers evaluated a deep learning system in the wild, they note that the study focused on nurses and camera technicians. More research is needed to understand how the system affects patients’ experience, their trust in the results, and their likelihood of acting on them.

Google’s AI team also suggests that additional research is needed to understand how the system may alter the practices of ophthalmologists who evaluate patients based on the deep learning system’s output.

Lastly, as more systems are evaluated in clinical environments, an important area of future work includes the design of study protocols for conducting human-centered prospective studies and studies on end-to-end service design of AI-based clinical products. 

Ram Sagar
I have a master's degree in Robotics and I write about machine learning advancements.
