Google AI Helps Doctors Decide When to Trust AI Diagnoses

In a joint effort, Google DeepMind and Google Research have published a paper that promises safer medical-imaging interpretation, reducing false positives by 25%.
AI has changed the face of many sectors, but in healthcare its adoption is moving at a snail's pace. The sector faces challenges such as data privacy, security, lack of interoperability, and the absence of regulation, all of which have restricted uptake. AI models are prone to mistakes, and, as the saying goes, to err is human. Google Research asked: what would the error rates look like if you combined the expertise of predictive AI models with that of clinicians?

In July this year, Google DeepMind joined hands with Google Research to introduce Complementarity-driven Deferral-to-Clinical Workflow (CoDoC), a system that maximises accuracy by combining human expertise with predictive AI. For each case, the system decides whether the AI model is likely to be more accurate than a hypothetical clinician's diagnostic workflow, using the predictive model's confidence score as one of its inputs. Comprehensive tests of CoDoC on multiple real-world datasets have shown promising results, including the 25% reduction in false positives highlighted in the paper.
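The core idea of deferral can be sketched in a few lines. The snippet below is a deliberately simplified, hypothetical illustration, not DeepMind's actual method: where CoDoC learns from data for which confidence scores the clinician tends to outperform the model, this sketch just defers in a fixed low-confidence band. The function name and thresholds are assumptions for illustration only.

```python
def defer_decision(ai_confidence: float,
                   low: float = 0.35,
                   high: float = 0.65) -> str:
    """Decide whether to accept the AI prediction or defer to a clinician.

    Hypothetical simplification of CoDoC-style deferral: the real system
    learns the deferral rule from data; here we simply defer whenever the
    model's confidence score falls inside an uncertain middle band.
    """
    if low <= ai_confidence <= high:
        return "defer_to_clinician"
    return "use_ai_prediction"


# A confident score (near 0 or 1 in a binary task) keeps the AI's answer;
# an ambiguous score hands the case to the clinician.
print(defer_decision(0.92))  # use_ai_prediction
print(defer_decision(0.50))  # defer_to_clinician
```

In practice, the deferral rule would be tuned on held-out data so that cases are sent to clinicians only where their judgment measurably improves accuracy.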
K L Krithika
K L Krithika is a tech journalist at AIM. Apart from writing tech news, she enjoys reading sci-fi and pondering impossible technologies, trying not to confuse them with reality.