What Happened When Google Tested Its AI In Real World

Ram Sagar

Back in 2016, Google’s AI team took on one of the fastest-growing illnesses of our time: diabetic eye disease. Diabetic retinopathy (DR) is an eye condition that affects people with diabetes and is the fastest-growing cause of blindness, with nearly 415 million diabetic patients at risk worldwide.

Google’s researchers developed a deep learning algorithm that can interpret signs of DR in retinal photographs, potentially helping doctors screen more patients, especially in communities where resources are limited.

This deep learning algorithm showed great promise, with results on par with those of ophthalmologists.



After three years of thorough testing and tweaking, the team decided to put the model into practice. They chose Thailand, where there are only about 1,400 eye doctors for approximately five million diabetics.

How Did It Go

Google AI, in partnership with the Ministry of Public Health in Thailand, conducted field research in clinics across the provinces of Pathum Thani and Chiang Mai over a period of eight months.

During this period, the researchers made regular visits to 11 clinics, observed how the nurses there handled eye screenings, and interviewed them to gain a deeper understanding of the process. In the course of their trials, they found significant fundamental issues in the way the deep learning system was deployed. Though the model was improved regularly, the challenges came from factors external to it.

For instance, some images captured during screening might have issues like blurring or dark areas. An AI system might conservatively call some of these images “ungradable” because such issues can obscure critical anatomical features that are required to provide a definitive result. For clinicians, the gradability of an image may vary depending on their own clinical set-up or experience.

Two images of the same eye, with varied lighting

The system’s high standards for image quality are at odds with the consistency and quality of images that the nurses were routinely capturing under the constraints of the clinic.
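The idea of such a quality gate can be sketched with a toy check. The metrics and thresholds below are illustrative assumptions, not Google’s actual criteria: an image is flagged ungradable when too many of its pixels are dark or when it lacks sharp detail.

```python
import numpy as np

def grade_image_quality(image, dark_threshold=0.15, dark_fraction_limit=0.4,
                        sharpness_limit=0.01):
    """Return 'gradable' or 'ungradable' for a grayscale fundus image.

    image: 2-D float array with pixel values in [0, 1].
    All thresholds are illustrative, not clinically validated.
    """
    # Fraction of pixels too dark to show anatomical detail.
    dark_fraction = np.mean(image < dark_threshold)

    # Crude sharpness proxy: variance of pixel-to-pixel differences.
    sharpness = np.var(np.diff(image, axis=0)) + np.var(np.diff(image, axis=1))

    if dark_fraction > dark_fraction_limit or sharpness < sharpness_limit:
        return "ungradable"
    return "gradable"

# An all-dark capture fails the gate; a bright, textured one passes.
print(grade_image_quality(np.zeros((32, 32))))                        # ungradable
rng = np.random.default_rng(0)
print(grade_image_quality(0.3 + 0.7 * rng.random((32, 32))))          # gradable
```

A real deployment would use stronger signals (focus measures, field-of-view checks, a learned quality model), but the shape of the problem is the same: a fixed gate tuned on clean data will reject many images that a clinician could still read.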

When an image was ungradable, the nurses at the clinic took two images of the same eye. However, according to the report, this caused discomfort to patients and added to the frustration of the nurses. So, the team explored solutions such as darkening the room to improve lighting conditions and capture higher-quality images.

Beyond this, even the speed of the internet connection plays a major role in the time taken to screen each patient.
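As a rough, back-of-the-envelope illustration (the figures below are assumptions, not numbers from the study), the time to send one image to a cloud-hosted model grows linearly with its size and inversely with the link bandwidth:

```python
def upload_seconds(image_mb, bandwidth_mbps):
    """Seconds to transfer an image of `image_mb` megabytes over a link
    rated at `bandwidth_mbps` megabits per second (8 bits per byte)."""
    return image_mb * 8 / bandwidth_mbps

# A hypothetical 3 MB fundus photo takes about a second on a fast
# clinic link but nearly half a minute on a slow one.
print(upload_seconds(3, 20))  # 1.2
print(upload_seconds(3, 1))   # 24.0
```

Multiplied across two eyes per patient and dozens of patients per day, a slow connection alone can dominate the time each screening takes.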


This shows that no matter how good a model is, challenges surface once it is deployed in the real world, more so in a setting like healthcare.

Key Findings & Recommendations

In a recent report, the researchers elaborated on their research. The findings can be summarised as follows:

  • In the case of user-centred applications, product design should involve people who would interact with the technology. 
  • In the case of AI systems in healthcare, we must also factor in environmental differences like lighting, which vary among clinics and can impact the quality of images. Just as an experienced clinician might know how to account for these variables when assessing an image, AI systems also need to be trained to handle these situations.
  • Building an AI tool is a challenge, as any disagreements between the system and the clinician can lead to frustration. 
  • This study found that the AI system could empower nurses to confidently and immediately identify a positive screening, resulting in quicker referrals to ophthalmologists.

Future Direction

Although the researchers evaluated a deep learning system in the wild, they note that the study was focused on nurses and camera technicians. More research needs to be done to understand how the system affects patients’ experience, their trust in the results, and their likelihood to act on them.

Google’s AI team also suggests that additional research is needed to understand how the system may alter the practices of ophthalmologists who evaluate patients based on the deep learning system. 

Lastly, as more systems are evaluated in clinical environments, an important area of future work includes the design of study protocols for conducting human-centered prospective studies and studies on end-to-end service design of AI-based clinical products. 


Copyright Analytics India Magazine Pvt Ltd
