If you follow the world of tech, you probably know that the hottest thing right now is generative AI. For better or worse, everyone is talking about it. It is therefore important to examine whether the real-world applications of generative AI in industries such as healthcare live up to the hype, especially given concerns about data safety and the accuracy of the models’ results.
Nitin Aggarwal, who heads Cloud AI services at Google, recently took to LinkedIn to opine that while it is easy to create a “wow” factor with generative AI, it is much harder to integrate it into an end-to-end business solution. In consumer settings, using these models to get questions answered or generate an image is straightforward, but the enterprise is a completely different ball game.
One common question, Aggarwal notes, is how mature enterprise GenAI really is – questions hang in the air around “where my prompts are stored, who owns the IP or the adapter model built on my data, whether a foundational model will be tuned on my data, and so on.” As AI gets democratised, data will become an increasingly important asset. “The variety and quality of data you own is the biggest IP and competitive advantage you have,” said Aggarwal. If enterprises do not attend to data governance and accountability, that differentiator will be easy to lose.
In this regard, OpenAI has also announced plans to launch ChatGPT Business in the coming months. The offering is aimed at enterprises looking to manage their end users, and the company stresses that, by default, users’ data will not be used to train its models. “ChatGPT Business will follow our API’s data usage policies,” said the company.
A cautious approach
One example of a healthcare enterprise using GenAI is India’s largest hospital chain, Apollo. At the end of last year, Apollo created an AI application called Clinical Intelligence Engine (CIE), which uses probabilistic algorithms to determine clinical diagnoses and related information – a technology touted to be much like ChatGPT in this regard. Trained on numerous medical histories and case studies extracted from Apollo’s proprietary clinical knowledge base, as well as millions of anonymised, real-world clinical records from Apollo, CIE is an expert knowledge system with reasoning power and deep, highly specialised domain knowledge in the clinical area.
Generally, the approach has been fairly cautious, with deployments limited to areas where the stakes are lower because they have less direct impact on patients. For instance, Syntegra, an AI healthcare startup, used generative AI to generate synthetic data. Janssen Pharmaceutical Cos’ data scientists validated the synthetic data against real data, making it especially valuable for researching less common diseases, where acquiring sufficient patient data is challenging.
A hard push
Nevertheless, LLM providers are pushing for solutions. Microsoft Azure OpenAI Service’s integration with Epic’s EHR (Electronic Health Record) platform aims to automatically fill in missing information, suggest potential diagnoses, and predict future health outcomes based on historical data. Likewise, Google is exploring applications of Med-PaLM 2 in ultrasound, radiotherapy, and other diagnostic and treatment-planning processes.
Nvidia, along with Segmed and RadImageNet, is working to develop models that can create high-quality synthetic images to expand the availability of training data. This will aid in the refinement of medical AI algorithms and improve the accuracy and consistency of medical diagnoses. Additionally, during GTC, the company announced that it will integrate edge AI capabilities into Medtronic’s GI Genius, an AI-assisted colonoscopy tool that helps physicians detect polyps that can lead to colorectal cancer.
A recent study also showed that novel technologies like natural language processing (NLP) and AI tools such as ChatGPT have the potential to produce high-quality clinical letters that are easily understood by patients, while improving efficiency, accuracy, and patient satisfaction, and saving costs for a healthcare system.
Ghosts of IBM Watson
The crux is that hopes are high all around. But, given the challenges, the promises of any AI-focused healthcare startup should be taken with a grain of salt.
“IBM once boasted that Watson could one day find a cure for cancer. No published research has shown that Watson improved patient outcomes, and IBM has since abandoned all applications of Watson for healthcare,” reads a WSJ article.
The generative AI hype train might meet the same fate. University of Pittsburgh Medical Center’s Dr Robert Bart told WSJ that future uses of generative AI, such as disease diagnosis, are still a long way off. What it can do right now, however, is improve operational processes such as patient scheduling and flow.
“There are AI algorithms already certified by the US FDA that can be safely used in medicine, but in the case of generative AI, it will be several years before they can be trusted. But then we are in for a truly massive revolution in healthcare,” said Artur Olesch, founder of aboutDigitalHealth.com.