How To Make Machine Learning More Human

Last week, the World Economic Forum (WEF) released a white paper on how to prevent discrimination against humans in machine learning. The paper provides a framework for developers to understand the potential risks associated with machine learning applications, combat marginalisation and discrimination, and ensure human dignity. It also focuses on how companies designing and implementing machine learning technology can maximize its potential benefits, and offers a set of transferable, guiding principles for the field of machine learning.

Tech giants like Google, Microsoft and DeepMind (Alphabet) have begun to explore the ideas of fairness, inclusion, accountability and transparency in machine learning. However, with AI influencing more and more people in employment, education, healthcare and other areas, often in the absence of adequate government regulation, whether because technology outpaces regulatory mechanisms, governments lack capacity, or there is political turmoil, the WEF paper highlights the need for more active self-governance by private companies.

Why AI Discriminates

While algorithmic decision-making aids have been used for decades, machine learning poses new challenges and amplifies discrimination and marginalisation because of its complexity, opaqueness, ubiquity and exclusiveness.

Some of these challenges relate to the data used by machine learning systems. The large datasets needed to train ML systems are expensive to collect or purchase, which excludes many companies, public bodies and civil society organisations from the machine learning market. Data may also be biased or error-ridden for whole classes of individuals, such as those living in rural areas of low-income countries or those who have opted out of sharing their data.

The paper highlights that even if machine learning algorithms are trained on good datasets, their design or deployment can still encode discrimination: by choosing the wrong model, building a model with inadvertently discriminatory features, lacking human oversight and involvement, producing unpredictable and inscrutable systems, or through unchecked and intentional discrimination.
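
One practical way to surface this kind of encoded discrimination is to compare decision rates across demographic groups. The sketch below is a minimal, illustrative check, not something prescribed by the WEF paper; the decision log, the `group` column and the 0.8 rule-of-thumb threshold are all assumptions.

```python
import pandas as pd

# Hypothetical decision log: one row per applicant, with the model's
# binary decision and a demographic group label (illustrative data only).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   0,   0,   1,   0],
})

# Approval rate per group.
rates = decisions.groupby("group")["approved"].mean()

# Disparate-impact ratio: lowest group rate divided by highest.
# A common rule of thumb flags ratios below 0.8 for closer review.
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: outcomes are skewed across groups; review the model and its features.")
```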

Hiring algorithms, for instance, have inadvertently reinforced discrimination in the hiring process by preventing people with disabilities from getting a job. The Google Photos app mistakenly categorised black people as gorillas, Google's Arts & Culture app lacks a variety of Asian art, and predictive policing in the US has reportedly amplified racial bias.

Concerns Around Machine Learning

There are cases where bias is intentionally built into AI and machine learning algorithms, and these systems will fundamentally affect people's lives. If precautions are not taken now, the consequences will be long-lasting. For example, employers who want to avoid hiring women who are likely to become pregnant might employ machine learning systems to identify and filter out this subset of women.

In China, the authorities are building a model to score citizens by analyzing a wide range of data, from banking, tax, professional and performance records to smartphones, e-commerce and social media, and are hard at work devising an e-database to rate each and every citizen by 2020. This leaves an open question: what does it mean if governments act on scores computed from data that is incomplete and historically biased, using models not built for fairness?

On the other hand, China is also using facial recognition technology for surveillance and public safety efforts. This raises a critical question: does China's big-data surveillance protect people from being falsely convicted on the basis of facial recognition technology?

Two multinational insurance companies operating in Mexico are using machine learning to maximize their efficiency and profitability, with potential implications for the human right to fair access to adequate healthcare. Imagine a scenario in which insurance companies use machine learning to mine data such as shopping history to recognise patterns associated with high-risk customers and charge them more; the poorest and sickest people would then be unable to afford access to health services.

Principles To Combat Bias

Though governments and international organizations have a major role to play, given the complex nature of machine learning and the rapid pace of technical development, most governments are not able to develop legal and regulatory frameworks that protect human rights in the deployment of new technologies. Some regulators are, however, getting ahead of widespread deployment; Germany, for instance, has introduced laws on self-driving vehicles.

This is where companies need to come into play. The white paper argues that companies need to integrate principles of non-discrimination and empathy into their human rights due diligence, a process by which companies take ongoing, proactive and reactive steps to ensure that they do not cause or contribute to human rights abuses.

In its white paper, the World Economic Forum proposes four central principles to combat bias in machine learning and uphold human rights and dignity:

Active Inclusion

Machine learning applications must involve a diversity of input, especially on the norms and values of the specific populations affected by the output of AI systems. Individuals must give consent before an AI system can use protected or sensitive variables, or their personal data, to make decisions.

Fairness

Fairness and the dignity of the people affected should be prioritized in the architecture of the machine learning system and its evaluation metrics.
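
In practice, prioritising fairness in evaluation metrics can be as simple as reporting a fairness measure next to accuracy whenever a model is assessed. The sketch below is a minimal illustration on synthetic data using scikit-learn; the proxy feature, the outcome rule and the demographic-parity measure are assumptions for the example, not recommendations from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000

# Synthetic data (illustrative only): a binary group attribute, a neutral
# feature, and a second feature that acts as a proxy for group membership.
group = rng.integers(0, 2, size=n)
x_neutral = rng.normal(size=n)
x_proxy = group + rng.normal(scale=0.3, size=n)
X = np.column_stack([x_neutral, x_proxy])

# Historical outcomes that are themselves skewed by group.
y = (x_neutral + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Accuracy alone says nothing about how outcomes are distributed across groups.
print("Accuracy:", round(accuracy_score(y, pred), 3))

# Demographic-parity difference: gap in positive-prediction rates between groups.
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print("Positive rate, group 0:", round(rate_0, 3))
print("Positive rate, group 1:", round(rate_1, 3))
print("Demographic-parity difference:", round(abs(rate_0 - rate_1), 3))
```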

Right of Understanding

If machine learning systems are involved in a decision-making process that affects individual rights, the company or developers must disclose this. The systems must be able to provide an explanation of their decision-making that is understandable to end users and reviewable by a competent human authority.
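
As a rough illustration of what such an explanation could look like, the sketch below turns the per-feature contributions of a simple logistic regression into a plain-language summary for a single decision. The feature names, data and wording are hypothetical; real systems would typically need more rigorous explanation methods and a documented review path.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical loan-style features and synthetic training data.
feature_names = ["income", "debt_ratio", "years_employed"]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(applicant: np.ndarray) -> str:
    """Summarise in plain language what drove one decision."""
    contributions = model.coef_[0] * applicant
    decision = "approved" if model.predict(applicant.reshape(1, -1))[0] == 1 else "declined"
    ranked = sorted(zip(feature_names, contributions), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Decision: {decision}"]
    for name, c in ranked:
        direction = "raised" if c > 0 else "lowered"
        lines.append(f"- {name} {direction} the score by {abs(c):.2f}")
    return "\n".join(lines)

print(explain(X[0]))
```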

Access to Redress

The developers of machine learning systems are responsible for the use and actions of their systems. They must make visible avenues for redress for those affected by disparate impacts, and establish processes for the timely remedy of any discriminatory outputs.
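
One way to make such redress practical is to keep an auditable record of every automated decision so that an affected person can locate and contest it. The sketch below is a minimal, in-memory illustration; the function names, fields and workflow are hypothetical, not taken from the paper.

```python
import json
import uuid
from datetime import datetime, timezone

# Minimal audit trail: every automated decision is stored so that it can later
# be located, contested by the affected person and re-examined by a human.
DECISION_LOG: dict[str, dict] = {}

def record_decision(inputs: dict, outcome: str, model_version: str) -> str:
    decision_id = str(uuid.uuid4())
    DECISION_LOG[decision_id] = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,
        "outcome": outcome,
        "model_version": model_version,
        "contested": False,
    }
    return decision_id

def contest_decision(decision_id: str, reason: str) -> dict:
    """Mark a decision as contested so a human reviewer re-examines it."""
    entry = DECISION_LOG[decision_id]
    entry["contested"] = True
    entry["contest_reason"] = reason
    return entry

decision_id = record_decision({"income": 42000, "debt_ratio": 0.6}, "declined", "v1.3")
print(json.dumps(contest_decision(decision_id, "Debt figure is out of date"), indent=2))
```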

How Companies Can Make Machines More Human

As artificial intelligence and machine learning become more advanced by the day, they involve less human supervision and less transparency. It is therefore important that humans are kept in the loop to catch factors that are being unexpectedly overlooked.

The paper cites an example: the University of Pittsburgh Medical Center used machine learning to predict which pneumonia patients were at low risk of developing complications and could be sent home. The model recommended that doctors send home patients with asthma, having seen in the data that very few of them developed complications. Doctors, however, knew this was only because they routinely placed such patients in intensive care as a precaution.

It is impossible to define in advance when discrimination may happen in any given context, so companies should keep humans in the loop to identify and amend any bias in the system. They should also include fairness criteria and participate in open-source data and algorithm sharing. With the help of internal codes and incentive models, companies must strengthen governance for adherence to human rights guidelines.
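
A simple way to keep humans in the loop is to route uncertain or high-impact cases to a reviewer instead of deciding automatically. The sketch below illustrates one possible policy; the confidence thresholds and the `affects_protected_right` flag are illustrative assumptions, not recommendations from the paper.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    confidence: float
    needs_human_review: bool

def decide(score: float, affects_protected_right: bool,
           low: float = 0.35, high: float = 0.65) -> Decision:
    """Route uncertain or high-impact cases to a person instead of auto-deciding.

    `score` is the model's probability of a positive outcome; the thresholds
    are illustrative choices, not recommended values.
    """
    if affects_protected_right or low < score < high:
        return Decision(outcome="pending", confidence=score, needs_human_review=True)
    outcome = "approved" if score >= high else "declined"
    return Decision(outcome=outcome, confidence=score, needs_human_review=False)

print(decide(0.90, affects_protected_right=False))  # confident, auto-approved
print(decide(0.50, affects_protected_right=False))  # uncertain, routed to a person
print(decide(0.90, affects_protected_right=True))   # high impact, routed to a person
```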

“We encourage companies working with machine learning to prioritize non-discrimination along with accuracy and efficiency to comply with human rights standards and uphold the social contract,” said Erica Kochi, Co-Chair of the Global Future Council for Human Rights and Co-Founder of UNICEF Innovation, in a statement.

If a machine learning system is involved in decision-making that affects individual rights, the company must disclose this and provide an explanation of the decision-making process that is understandable to users and reviewable by a competent authority, because people need to know when machine learning has been used to make a decision that impacts them. Transparency also means explaining the process of identifying human rights risks and the steps taken to prevent and mitigate them; where that is not possible, companies can explain the design and working of their machine learning applications in technical papers. To sum up, tech companies should take into account the risks inherent in machine learning systems, bring in more transparency, and take effective action to prevent and mitigate those risks.
