WHO Lays Down 6 Principles To Use AI In Healthcare

  • WHO has issued its first global report on the design and use of AI in healthcare.

The World Health Organization (WHO) recently issued its first global report on artificial intelligence in healthcare, along with six guiding principles for its design and use. The report, titled ‘Ethics and governance of artificial intelligence for health’, was published after two years of consultations held by a panel of international experts appointed by WHO.

According to the new WHO guidelines, AI promises to improve healthcare and medicine delivery, but only when ethics and human rights are put at the heart of its design, deployment and use. 


Tedros Adhanom Ghebreyesus, Director-General at WHO, said, “Like all new technology, AI holds enormous potential for improving the health of millions of people around the world, but like all technology, it can also be misused and cause harm…This important new report provides a valuable guide for countries on how to maximise the benefits of AI while minimising its risks and avoiding its pitfalls.”

Role of AI in Healthcare

AI is already used in some developed countries to increase the speed and accuracy of medical diagnosis and disease screening, and to assist with clinical care. It can also enhance health research and drug development, and support public health interventions such as disease surveillance, outbreak response, and health system management.

AI can also enable patients to take greater control of their own health care and better monitor their evolving needs. In regions with limited access to healthcare services, it can help bridge the gap between patients and healthcare resources.

However, WHO cautions against overstating the benefits of AI for health, especially when this comes at the expense of other critical priorities, including achieving universal health care.

The six guiding principles

WHO has issued the following guidelines to maximise the opportunities and mitigate the hazards of using AI in healthcare:


  • Protecting human autonomy: Humans should remain in control of healthcare systems and medical decisions; the privacy and confidentiality of patients must be protected; and patients should give valid informed consent through appropriate legal frameworks for data protection.
  • Promoting human well-being and safety and the public interest: Designers of AI systems should ensure that the technologies satisfy regulatory standards for safety, accuracy, and efficacy for clearly defined and predetermined purposes. Measures for quality control and quality improvement in the use of AI must also be in place.
  • Ensuring transparency, explainability and intelligibility: Sufficient information about the design and deployment of an AI technology should be disclosed or documented before it is used. Such information should be publicly accessible to facilitate consultation and debate on how the technology is designed and how it should be used.
  • Fostering responsibility and accountability: While AI technologies perform specific tasks, stakeholders are obligated to ensure they are used under appropriate conditions and by appropriately qualified individuals. Mechanisms should be available for individuals and groups adversely affected by algorithm-based decisions to question them and seek redress.
  • Ensuring inclusiveness and equity: AI for health should promote inclusiveness and encourage fair use and access across the board, regardless of age, sex, gender, income, race, ethnicity, sexual orientation, ability, or other characteristics protected under human rights codes.
  • Promoting AI that is responsive and sustainable: Everyone — designers, developers, and end-users — should monitor how AI performs during actual use to ensure it fulfils expectations and requirements. Additionally, AI systems should be built to minimise environmental impact as much as possible while increasing energy efficiency. Workplace disruptions, including training for healthcare employees to adjust to the use of AI systems and potential job losses due to the usage of automated systems, require attention from governments and companies alike.         

Risks of AI in Healthcare

The report also offers guidance on capitalising on AI while avoiding its associated risks and pitfalls.

It points out that these opportunities are linked to challenges and risks, including:

  • Unethical collection and use of health data
  • Biases encoded in algorithms
  • Risks of AI to patient safety, cybersecurity, and the environment

The report also stresses that AI systems trained primarily on data from individuals in high-income countries may perform poorly for those in low- and middle-income countries. AI systems should therefore be designed to reflect the diversity of socio-economic and healthcare settings, and should be accompanied by training in digital skills, community participation, and awareness-raising to help healthcare employees retain their jobs.
