
Pitfalls Of AI In Healthcare — The Holy Grail Of Personalised Medicine

Richa Bhatia
Image: GE Healthcare

If AI is the engine of growth, the healthcare AI market is certainly gearing up for a boom. As AI technology paves the way for smarter healthcare systems and workflow improvements in India, it also brings a host of issues such as data integrity, algorithmic accountability and bias in data. The most prominent revenue generators among AI applications are virtual assistants, optimisation systems for administrative workflows and robot-assisted surgery.

An Accenture report calls AI the new OS in health. But behind the promise and potential of AI in healthcare lies a risk to patient safety. Much has already been written about the growing volumes of healthcare data, which has in part spurred the AI revolution in medicine. For example, an IDG report claims the healthcare industry is facing a "data tsunami", with data growing from 153 exabytes in 2013 to a projected 2,314 exabytes by 2020. Behind this stupendous growth lie the threats of shadow IT and compliance issues.

Exploring the medical risks posed by AI should be as much a part of the conversation as its applications in healthcare. One study estimates that 50 percent of the more than 3.4 billion smartphone and tablet users have downloaded mobile health apps. To counter the risks, policymakers, hospitals and universities will have to come together to formulate guidelines and policies on best practices for data collection and data usage, how to integrate new technologies, and the potential risks associated with clinical and non-clinical data. Here's why: in a clinical setting, patient data demands the highest level of accuracy, reliability, security and privacy before healthcare can take full advantage of AI.



Risks Of AI In Healthcare

Algorithmic Accountability In Medical Data: Dubbed the "black box of medicine", big data techniques are only as potent as the data fed into the models. Given the opaque nature of AI technology, a doctor or a patient may not be able to understand how a prediction was made or whether the algorithmic conclusion is correct. Even if the algorithms deployed were validated beforehand, an incorrect output can affect patient outcomes. Today, AI is leveraged for everything from the choice of drugs to personalised treatments, given scarce hospital resources and a low doctor-patient ratio. One study hints that while algorithms trained on reams of data points can churn out predictive correlations, there is no way to assess whether the outcome was biased, accurate or inaccurate. That is why the medical community and AI technology providers need to validate algorithms on diverse datasets in order to deliver fair outcomes, for instance by checking a model's performance separately on each patient subgroup, as sketched below.
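A minimal sketch of what such a subgroup validation check might look like, assuming a diagnostic model whose predictions and patient subgroup labels are already available; the data and group names here are purely hypothetical:

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compute prediction accuracy separately for each patient subgroup.

    A large gap between subgroups is a warning sign that the model may be
    biased towards the kinds of patients it saw most often during training.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical validation data: ground-truth diagnoses, model predictions
# and a demographic attribute for each patient.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["urban", "urban", "rural", "rural", "urban", "rural", "rural", "urban"]

print(accuracy_by_group(y_true, y_pred, groups))
# {'urban': 1.0, 'rural': 0.5} -- a gap this large would flag the model for review.
```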

Risks Of Curated Datasets: Given the complexity of medical data and its disparate sources, putting together curated datasets is an expensive and time-consuming task. Researchers require datasets that have the breadth and depth needed for training, and assembling them can spark privacy concerns.

Data Privacy And Black Box Medicine: How can one ensure algorithmic accountability without sacrificing data privacy? This feverish debate has drawn opinions from across the board: the healthcare community, tech experts and even ethicists who worry about the risks posed by leakages of clinical data and clinical trial data. Accountability, transparency and privacy do not go hand in hand. Ethicists want to limit the use of the clinical and non-clinical data that researchers and companies rely on to validate models, while researchers argue that ensuring transparency requires diverse datasets to drive fair patient outcomes. That is why researchers have proposed robust data governance policies to keep out unintended users and ensure patient data is used ethically, for example by sharing only pseudonymised records with model validators, as in the sketch below.
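A minimal sketch of one such governance control, assuming direct identifiers are replaced with keyed hashes before records leave the hospital; the field names and secret key here are hypothetical, not taken from any specific policy:

```python
import hmac
import hashlib

# Hypothetical secret held by the hospital's data governance team and never
# shared with the researchers who receive the pseudonymised records.
PSEUDONYM_KEY = b"replace-with-a-securely-stored-secret"

def pseudonymise(record: dict) -> dict:
    """Replace the direct identifier with a keyed hash and drop fields
    that are not needed for model validation."""
    token = hmac.new(PSEUDONYM_KEY, record["patient_id"].encode(), hashlib.sha256).hexdigest()
    return {
        "patient_token": token,            # stable pseudonym, not reversible without the key
        "age": record["age"],              # clinical attributes needed for validation remain
        "diagnosis": record["diagnosis"],
    }

record = {"patient_id": "MRN-002318", "name": "A. Patel", "age": 54, "diagnosis": "type-2 diabetes"}
print(pseudonymise(record))  # the name and raw identifier never leave the hospital
```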


CIOs Grapple With Data Governance Policies In Healthcare: As AI and cloud technology permeate the healthcare industry, CIOs are increasingly investing in off-the-shelf cloud-based solutions, which can pose data security threats. Besides cybersecurity governance, CIOs are also grappling with compliance and data ownership. Then there is shadow IT, a growing security threat to healthcare organisations that needs to be addressed. The problem is all the more acute because a data breach can prove very costly for organisations, which face financial penalties as well as a loss of patients' trust. A report suggests that every breached health record can cost healthcare organisations $355.7, on top of the loss of user trust.
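A rough back-of-the-envelope illustration of that per-record figure, assuming a hypothetical breach size for the sake of scale:

```python
# Reported average cost per breached health record (from the cited report).
COST_PER_RECORD = 355.7

# Hypothetical breach affecting 50,000 patient records.
breached_records = 50_000

estimated_cost = breached_records * COST_PER_RECORD
print(f"Estimated direct cost: ${estimated_cost:,.0f}")  # roughly $17,785,000
```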

Outlook

It is true there won't be any precision medicine without AI, and the role of technology is only set to grow. According to the Accenture report, consumers are more likely to see AI as having a positive impact. The technology will also deliver more benefits in terms of efficiency and greater interoperability. It will also lower the cost of care, help combat the physician shortage and reduce the burden of clinical demand.
