Shortly after a new experiment showed that OpenAI's ChatGPT could clear all three parts of the USMLE in a single attempt, San Francisco-based medical knowledge management platform Glass Health launched Glass AI, an LLM-based tool that generates a differential diagnosis or clinical plan from a description of symptoms.
The tool is intended to help clinicians develop better diagnoses and clinical plans.
Glass Health was founded in 2021 by Dereck Paul and Graham Ramsey. The platform helps physicians learn medicine faster and leverage their knowledge to provide better patient care.
Inside Glass AI
At present, Glass AI is an experimental feature that aims to help clinicians generate a differential diagnosis (DDx) and draft a clinical plan. The tool is being developed for a clinical audience; it is not a general-audience search tool, as some Twitter users assumed.
Within two days of its beta launch, over 14,000 people used Glass AI to submit 25,700 queries. Based on user feedback, co-founder Paul said that users rated close to 84 percent of differential diagnosis (DDx) outputs and 78 percent of clinical plan outputs as helpful. Accuracy ratings were lower: users rated 71 percent of DDx outputs and 68 percent of clinical plans as accurate.
He explains that even without perfect accuracy, some outputs are still useful, for instance by suggesting a diagnosis the clinician had not considered or by drafting a plan the clinician can easily edit.
The tool sometimes returns outputs that are neither accurate nor helpful. According to Paul, however, such limitations are expected this early in the exploration, especially with more complex entries.
Aware of AI's potential to perpetuate harmful biases or stigma arising from either user inputs or training data, such as outputs that echo biased human decisions or reflect historical and social inequities tied to sensitive variables like gender, race, or sexual orientation, Glass Health has deployed additional safeguards in Glass AI to protect against this.