
What Regulatory Challenges Are Holding Back The Adoption Of AI In Healthcare In 2020


In today’s world, where AI impacts almost every aspect of our lives, how can healthcare be left behind? Every other day, we come across a media report of a novel application of AI in healthcare and the life sciences. Some of the critical areas are drug research and discovery, gene editing, disease management and treatment, pattern recognition and early diagnosis. Early detection of life-threatening diseases such as cancer, cardiovascular diseases, and neurological disorders is one of the most prominent applications of AI in healthcare.

AI is also being used for efficient hospital management by removing bottlenecks and reducing wait times for patients, as well as streamlining the workflow of healthcare professionals and thereby reducing their stress. However, even though the impact of AI in healthcare is prominent and widespread, it is still not impactful enough. Several factors are holding back AI in healthcare, one of the most prominent being regulation.

Regulatory Shackles

One of the key regulatory issues hampering the acceptance of AI in healthcare is the archaic regulatory infrastructure. Although technology in healthcare has advanced by leaps and bounds, the regulatory infrastructure has failed to keep up.

According to Dr Latha Poonamallee, Co-Founder & Chairperson of In-Med Prognostics, “So far, the regulations covering software as a medical device also cover AI-based software. But AI poses different and peculiar challenges.”

“For example, AI-based software learns from being used more and becomes more intelligent. Most regulatory approval is based on repeatability, but when software learns on its own, its outputs may and will vary. While that is the strength of an AI system, regulation has to change with it,” she told Analytics India Magazine.

Traditional healthcare regulations cannot be applied to AI-based treatment, which is fundamentally different from a drug or a vaccine. We are dealing with machine learning, and because of its “learning” capabilities, the algorithm keeps evolving. Consider this: by the time regulatory approval for an algorithm is granted, the algorithm has “learned” from newly added data, evolved, and become a different algorithm altogether.
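To see why repeatability-based approval struggles here, consider a minimal, purely illustrative Python sketch (synthetic data, not any real medical device software): an online-learning classifier keeps updating on data that arrives after the “approved” snapshot, so the same input can receive a different output later.

```python
# Minimal illustrative sketch (synthetic data, not any real medical software):
# a continuously learning model can give a different answer for the same input
# after it updates on post-approval data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Toy "patient features" and labels available at approval time.
X_initial = rng.normal(size=(200, 5))
y_initial = (X_initial[:, 0] + X_initial[:, 1] > 0).astype(int)

model = SGDClassifier(random_state=0)
model.partial_fit(X_initial, y_initial, classes=[0, 1])

test_case = rng.normal(size=(1, 5))
output_at_approval = model.predict(test_case)

# New data with a slightly shifted distribution arrives after deployment,
# and the model keeps learning from it.
X_new = rng.normal(loc=0.5, size=(200, 5))
y_new = (X_new[:, 0] - X_new[:, 2] > 0).astype(int)
model.partial_fit(X_new, y_new)

output_after_updates = model.predict(test_case)
print(output_at_approval, output_after_updates)  # may no longer agree
```

In regulatory terms, the model that was validated and the model that is actually serving patients are no longer the same artefact.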

“Another aspect is that AI, especially neural nets, is a black box; while we can program it, we don’t really know how it works inside. That poses the problem of explicability. More than the regulatory challenge holding back AI, regulatory authorities are trying to keep pace,” added Poonamallee.

Dr Manjiri Bakre, Founder and CEO of OncoStem Diagnostics, also agrees that the “black box” nature of AI poses a unique set of regulatory issues. She said, “Though some artificial intelligence tools operate with transparency and easily understandable methods, others are uninterpretable, and thereby black boxes. With no explanation or understanding of how an output has been reached, it becomes difficult for regulatory agencies to examine applications involving a black-box AI/ML solution.”

She believes that the ability of continuous learning systems to change their output over time in response to new data is another regulatory challenge. “Black boxes, as well as AI algorithms that constantly self-update, present safety concerns that have yet to be addressed by any regulatory framework,” she added.

Another area of complication when it comes to healthcare regulatory compliance and AI is managing health data and privacy. According to Dr Anurag Agrawal, “The biggest regulatory challenge is in managing data availability in a way that is lean, balanced for benefits between the data-sources and data-users, and assures privacy and prevention of misuse.” 

Dr Agrawal is a principal scientist at the CSIR Institute of Genomics & Integrative Biology (IGIB), and a member of the Artificial Intelligence Task Force (AITF) set up by the Ministry of Commerce and Industry to kick-start the use of AI for India’s economic transformation.

U.S. Senator Amy Klobuchar (D-MN) once said in a statement, “New technologies have made it easier for people to monitor their health, but health tracking apps, wearable technology devices like Fitbits, and home DNA testing kits have also given companies access to your private health data with very few rules of the road in place regulating how it is collected and used.”

How To Regulate AI in Healthcare

On the steps that regulatory authorities and AI-centric health-tech companies can take to broaden the regulatory environment and smooth the adoption of AI in healthcare, Dr Poonamallee said, “Standardisation needs to keep up with changing technological trends. The US FDA is aligning with ISO standards to be globally aligned. Because in AI the output cannot be predicted, the quality of input data may have to be more closely regulated.”

Dr Bakre also believes that the standardisation of AI/ML in healthcare applications is imperative to ensure its successful and regulated use in the future. “Not only are algorithm-based digital health tools growing exponentially, but they also require vastly different regulatory tests and analyses. As a result, regulatory agencies will need to equip employees with the expertise required to assess machine learning and other advanced technologies, while developers will need to be informed of any new or evolving regulations,” she said.

She further added, “Demonstration of conformity of AI/ML solutions with regulatory frameworks requires new standards to be developed that align with and support existing regulatory frameworks while keeping pace with evolving technologies.”

Another key issue is to treat AI in healthcare differently and more seriously. Dr Poonamallee strongly believes that healthcare cannot be left unregulated like a consumer market. Unlike e-commerce portals, where AI merely suggests similar products based on consumer interest and carries no serious consequences, the application of AI in healthcare is serious business.


She said, “AI also has the power to put actionable data in the hands of the patient. So along with regulation for clinical decision support, there needs to be consensus on patient decision support as well.”

Talking about the steps health-tech companies and healthcare regulatory authorities should take to ensure that private health data stays private and is not misused, Dr Geetha Manjunath, Co-founder and CEO of Niramai, told Analytics India Magazine, “It is important for health tech companies to maintain the integrity of patient data and protect the confidentiality of health information. Patient data should also not be misused against the patient.”

“For example, if the data is used to gain insights about the patients, and then those insights are used against them, such as to increase insurance premiums. One way of protecting the user from such threats is to completely anonymise the data and only allow the health tech companies to use abstract information to gain insights about the community, or use it for research and build better predictive models which will further benefit all patients,” added Manjunath.
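As a purely illustrative sketch of the kind of anonymisation Dr Manjunath describes (the field names and salting scheme below are hypothetical, not Niramai’s actual pipeline), direct identifiers can be dropped or replaced with salted hashes, and quasi-identifiers coarsened, before records leave the clinical system for analytics.

```python
# Hypothetical illustration of pseudonymisation before analytics:
# drop direct identifiers, hash the patient ID with a secret salt,
# and coarsen quasi-identifiers such as exact age.
import hashlib

def anonymise(record: dict, salt: str) -> dict:
    """Return an analytics-safe copy of a patient record."""
    pseudo_id = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()
    return {
        "pseudo_id": pseudo_id,                      # stable, but not reversible without the salt
        "age_band": (record["age"] // 10) * 10,      # 47 -> 40, reduces re-identification risk
        "diagnosis_code": record["diagnosis_code"],  # abstract clinical field kept for research
        # name, address, phone and other direct identifiers are simply dropped
    }

record = {"patient_id": "P-1001", "name": "Jane Doe", "age": 47, "diagnosis_code": "C50.9"}
print(anonymise(record, salt="per-deployment-secret"))
```

Salted hashing alone is pseudonymisation rather than full anonymisation, which is why coarsening or dropping quasi-identifiers matters as well.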

Manjunath further said, “I think regulatory authorities should also insist that companies document their data operating procedures in their Quality Management Systems and ensure implementation of the same through regular inspections and audits. CE mark, ISO 13485 and GDPR requirements provide such broad guidelines, which I think all health tech companies need to follow strictly.”

Dr Agrawal believes that, for regulatory guidelines on the application of AI in healthcare, we need to wait for the report of the Lancet and Financial Times Commission on Governing Health Futures 2030. Governing health futures 2030: Growing up in a digital world is a joint commission formed through a partnership between The Lancet and the Financial Times.

According to the release, this commission will explore the convergence of digital health, artificial intelligence, and other frontier technologies with universal health coverage, focusing on the health of children and young people.

“Equitable opportunity for accessing and using health data, while respecting individual privacy, is central to providing effective and universal health coverage at a global scale. The guiding principle of such data democratisation should be – of the people, by the people, for the people,” said Dr Agrawal in a statement. He is also one of the co-chairs of this commission.

Regulating the use of AI in healthcare is a tricky affair in which half-baked approaches can have serious repercussions. We need more independent, non-partisan think tanks to come together and refurbish the governance model to keep up with the digitalisation and automation of healthcare.
