
EU Fundamental Rights Agency Issues Report On AI Ethical Considerations


The European Union’s Fundamental Rights Agency (FRA) has recently published a report on AI that probes the ethical considerations that must be addressed in developing the technology.

For the report, published under the title ‘Getting The Future Right’, the agency interviewed over a hundred public administration officials and private company staff to explore these considerations.

With evidence of algorithmic bias already surfacing in multiple AI applications, it is important to address AI’s infringement of fundamental rights if we are to unlock the technology’s full potential.

Even though the report’s findings and recommendations were made with the EU in mind, they provide a good starting point for everyone. This article presents the study’s findings regarding how AI can infringe on one’s fundamental rights and the significance of FRA’s guidelines to help address the issue.

Fundamental Rights Infringement By AI

As AI is increasingly used across sectors, it directly or indirectly affects human beings daily. This has resulted in the infringement of multiple human or fundamental rights. 

Experts believe AI-driven processing needs to be carried out in a manner that respects human dignity. However, the use of AI for criminal activities or in weapons can put people’s most fundamental right, the right to life, at risk. Beyond such extreme cases, subjecting people to AI without their consent can also infringe on their privacy or personal data.

Non-discrimination and equality are crucial fundamental rights in any discussion of AI, mainly because AI’s job is to categorise, classify, or separate. AI systems should take decisions without considering protected attributes like gender or religion, yet seemingly neutral personal attributes are often strongly correlated with protected ones. As a result, vulnerable groups are often at the receiving end of the discrimination that algorithmic biases produce.
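To make this proxy effect concrete, here is a minimal, purely illustrative Python sketch (not from the FRA report) using synthetic data: even when the protected attribute is dropped from the training features, a correlated proxy feature lets the model reproduce the historical bias, so the two groups end up with different selection rates.

```python
# Illustrative sketch of "proxy discrimination" with synthetic data.
# All feature names and numbers are invented for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected attribute (e.g. group membership), a correlated proxy
# (e.g. postcode), a legitimate feature, and historically biased labels.
group = rng.integers(0, 2, n)                     # 0 or 1, never given to the model
proxy = group + rng.normal(0, 0.3, n)             # strongly correlated with group
skill = rng.normal(0, 1, n)                       # legitimate feature
label = (skill + 0.8 * group + rng.normal(0, 0.5, n) > 0.4).astype(int)

# Train WITHOUT the protected attribute: only the proxy and the legitimate feature.
X = np.column_stack([proxy, skill])
pred = LogisticRegression().fit(X, label).predict(X)

# Selection rates per group still differ: bias re-enters through the proxy.
for g in (0, 1):
    print(f"group {g}: selection rate {pred[group == g].mean():.2f}")
```

Running the sketch shows a clearly lower selection rate for one group even though the model never saw the protected attribute, which is why dropping such attributes alone does not guarantee non-discrimination.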

AI can also impact access to justice. Automated decision-making (ADM) systems are often opaque, and seeking justice without knowing how or where you have been wronged can be very tricky. Apart from that, it is difficult for people to know where to seek redress in such cases.

Even though access to social protection systems is guaranteed under the law, it is becoming increasingly evident that biased algorithms can negatively impact those who rely upon them.

Lastly, consumer protection and the right to good administration can also be undermined by the careless use of data.

Overall, the development and deployment of technology without assessing the impact on fundamental human rights can result in significant negative consequences. Hence, ethical guidelines will play a significant role in avoiding them.

Ethical Guidelines To Avoid AI’s Infringement

In order to avoid a negative impact on one’s fundamental rights, the EU has called for ethical guidelines and considerations for AI.

The FRA calls for the inclusion of all fundamental rights, beyond data privacy and data protection, while developing AI. This is important because the resulting applications can lead to discrimination or impede access to justice. Hence, effective safeguards against such incidents are essential.

To understand how such incidents could occur in the first place, the FRA calls for a flexible impact assessment of algorithms before they are deployed by private or public organisations. This assessment should not be hard-coded into a computer system, given that fundamental rights violations are always contextual. This will help ensure that all aspects of fundamental rights are covered in all sectors.

It is also important for people to have access to legal guidance if AI fails them. People need more transparency on how their data is used and on how to lodge complaints.

Grievance redressal systems for AI can only be effective if there is more clarity on data protection rules. Hence, the report calls for more clarity on the scope of these rules and their implications for ADM systems.

At the same time, it is important to raise awareness of the ways AI can discriminate so that such discrimination can be avoided in the first place, which calls for more research funding on the topic. These considerations are essential to regulating AI and its applications effectively.

Finally, the document points out the need for an oversight system that can hold businesses and governments accountable for their use of AI. 

Wrapping Up

The report clearly illustrates how AI systems can impact fundamental rights. However, these systems differ in complexity, and there is not yet enough clarity on how to avoid their negative consequences.

Hence, it is important to use all the resources available to effectively and adequately protect people’s fundamental rights from the harms AI can cause.



Kashyap Raibagi

Kashyap currently works as a Tech Journalist at Analytics India Magazine (AIM). Reach out at kashyap.raibagi@analyticsindiamag.com
