How Is Ethical AI Different From Fair AI

Shraddha Goled

One of the biggest challenges AI systems face concerns the ethics and fairness of their operations. A good way to demonstrate this is the secret AI tool that e-commerce giant Amazon used for recruitment in 2014. Only a year later, the organisation realised that the system was partial towards male candidates: it had been trained to vet applications by observing patterns in resumes submitted to the company over ten years, and most of those resumes came from men. A case of missed opportunity.

To understand these challenges, it is first necessary to differentiate between two aspects: ethics and fairness. Ethical AI and fair AI are often used interchangeably, but there are a few differences.

Ethical AI vs Fair AI

The concept of machine morality, especially in the case of AI, has been explored by computer scientists since the late 1970s. This research is mainly aimed at addressing the ethical concerns people may have about the design and application of AI systems. Formally, the core idea of ethical AI is that a system should never take rash actions, the result of poor learning, that could compromise human safety and dignity.

Following are some of the main attributes of an ethical AI system, accepted and prescribed by prominent players in the field such as Microsoft.

  • Technical robustness, reliability, and safety: It is important to build robust AI systems that are resilient to adversarial attacks. Such attacks manipulate the behaviour of the system by making small changes to the input or training data (a minimal sketch of such a perturbation follows this list). In worst-case scenarios, these attacks can prove fatal to the environment the system operates in. Additionally, an ethical AI system should be able to fall back to a rule-based mode and ask for human intervention to prevent it from going rogue.
  • Privacy and security: An ethical AI system must guarantee privacy and data protection throughout its lifecycle, which includes the information provided initially by the user and that generated during the course of interaction with the system. This is quite a slippery slope. Since these systems primarily rely on data, they are always hungry for new information. There have been multiple reports of tech giants, intentionally or otherwise, tapping into users’ sensitive information.
  • Transparency: The guidelines released by the European Commission in 2019 define AI transparency in three parts: traceability, explainability, and communication. Vendors must make the decision-making process of the AI system transparent to users to protect against any possible harm to humans or their rights.
  • Fairness and inclusivity: Bias is one of the major problems with AI systems. These systems internalise the choices of the researchers building them and can further amplify them. Experts believe that building a system completely devoid of such bias is impossible. However, a few steps can be taken to minimise it, including training these machines on inclusive datasets.
  • Accountability: An ethical AI has mechanisms that ensure responsibility and accountability, not just during its creation but also after development, deployment and use. Companies must adhere to rules and regulations to make sure that their systems conform to ethical principles.
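
Returning to the first point on robustness: the short Python sketch below uses an entirely hypothetical linear classifier and made-up numbers to illustrate how a small, deliberate change to an input (in the spirit of the fast gradient sign method) can flip a model's decision. It is a sketch of the idea, not an attack on any real system.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear classifier: predict 1 if sigmoid(w.x + b) > 0.5
w = rng.normal(size=5)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

x = rng.normal(size=5)        # a legitimate input
label = predict(x)            # the model's current (assumed correct) decision

# For a linear model, the input gradient is proportional to w, so stepping
# against the predicted class along sign(w) pushes the score the other way.
epsilon = 0.5                 # perturbation budget (hypothetical)
step = -np.sign(w) if label == 1 else np.sign(w)
x_adv = x + epsilon * step

print("original prediction: ", label)
print("perturbed prediction:", predict(x_adv))   # often flips despite the small change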

Having seen what an ethical AI system means, it is easy to infer that fairness is a prominent yet partial aspect of it. So when we speak of fair AI, we refer to one attribute of ethical AI in the larger sense. To define it, fair AI refers to probabilistic decision support that avoids unfairly harming or benefiting a particular group. There are multiple reasons why ‘unfairness’ creeps into a system: the data the system learns from, the way algorithms are designed, and the modelling choices involved in selecting relevant features as inputs and combining them in meaningful ways.
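
As a concrete illustration of what a basic fairness check can look like, the Python sketch below uses made-up predictions and a hypothetical binary protected attribute to compute two common group-fairness measures: the demographic (or statistical) parity difference and the disparate impact ratio.

import numpy as np

# Made-up model decisions (1 = selected) and a hypothetical protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

rate_a = y_pred[group == 0].mean()   # selection rate for group A
rate_b = y_pred[group == 1].mean()   # selection rate for group B

# Demographic (statistical) parity difference: 0 means equal selection rates.
parity_difference = rate_a - rate_b

# Disparate impact ratio: the informal "four-fifths rule" flags values below 0.8.
disparate_impact = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}")
print(f"parity difference: {parity_difference:+.2f}")
print(f"disparate impact ratio: {disparate_impact:.2f}")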

As mentioned earlier in the article, it is impossible to construct a 100% universally fair or unbiased system, partly because there are up to 20 different mathematical definitions of fairness and they cannot all be satisfied simultaneously. However, organisations can design AI systems to meet specific fairness goals, thus mitigating the unfairness and creating a more responsible system overall.
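
To make the point about competing definitions concrete, the Python sketch below (again with made-up labels and predictions) contrasts two of them: demographic parity, which compares selection rates across groups, and equal opportunity, which compares true-positive rates. In this toy example the first holds while the second does not, which is why satisfying one definition does not guarantee satisfying another.

import numpy as np

# Made-up outcomes, model decisions, and a hypothetical protected attribute.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def selection_rate(g):
    # Demographic parity compares these raw selection rates across groups.
    return y_pred[group == g].mean()

def true_positive_rate(g):
    # Equal opportunity compares these rates among the genuinely positive cases.
    positives = (group == g) & (y_true == 1)
    return y_pred[positives].mean()

# Here the selection rates match (demographic parity holds) ...
print("selection rates:", selection_rate(0), selection_rate(1))               # 0.4 vs 0.4
# ... but the true-positive rates do not (equal opportunity is violated).
print("true positive rates:", round(true_positive_rate(0), 2), round(true_positive_rate(1), 2))  # 0.67 vs 0.5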

Wrapping Up

There is a very fine line between ethical AI and fair AI, and differentiating them becomes difficult because the two overlap at several points. Companies need to understand the difference between them to develop a system that best suits their operations and is responsible overall.
