Europe & The Dream For Ethical AI

The proposed EU regulations will cover European citizens as well as companies operating in the area.

‘Artificial Intelligence is a fantastic opportunity for Europe. And citizens deserve technologies they can trust.’ - Ursula von der Leyen, President of the European Commission

Artificial intelligence has penetrated all walks of life, including healthcare, entertainment, policymaking and law enforcement. However, the technology has its share of downsides. In the face of a growing chorus of criticism and fears over the outsized power of AI, the European Commission has proposed its first legal framework on artificial intelligence.

The proposed regulations will cover EU citizens as well as companies operating in the area. As per the European Commission, the regulation aims to develop ‘human-centric, sustainable, secure, inclusive and trustworthy AI.’ 

If this proposal is adopted, it would see the EU taking a solid stance against certain AI applications—a radically different approach compared to the US and China. Many are calling the proposed reforms the General Data Protection Regulation (GDPR) for AI. 

How will this work?

The proposed regulation takes a risk-based approach and classifies AI systems into four groups: unacceptable risk, high risk, limited risk and minimal risk.

AI systems with unacceptable risk are those considered a clear threat to individual rights and safety and will be, as the name suggests, banned from use. This includes systems that manipulate behaviours and systems that allow ‘social scoring’ by governments—such as those used in China.

The European Commission has defined as high risk systems that perform remote biometric identification (such as large-scale facial recognition programmes), systems known to carry biases, and systems intended to be used as safety components. According to Annex III of the European Commission’s proposal, high-risk areas include AI used in critical infrastructure (such as transportation), educational training (e.g., AI that scores tests), safety components of products (e.g., robot-assisted surgery), hiring processes, law enforcement, and migration and border control. To ensure compliance, these systems are subject to appropriate human oversight, traceability, risk assessment and detailed documentation providing information about the AI to users. Exceptions can be made for instances like searching for a missing child or investigating suspected terrorist activity, but only with authorisation from a judicial body and with limits on time and geographical reach.

Limited-risk AI systems, such as chatbots, must adhere to transparency obligations: users need to be made aware that they are interacting with a machine. Similarly, if a deepfake is used, it must be declared upfront that the image or video has been manipulated. Finally, minimal-risk AI, which poses little or no threat to individual safety, will not be regulated. Examples include AI-enabled video games and spam filters.

Does this make sense?

The EU’s concerns over AI are not entirely unfounded. Bias is a massive problem in current AI systems. In April 2021, a man in Detroit was wrongfully arrested for shoplifting in a facial recognition fiasco. A 2019 study by the National Institute of Standards and Technology (NIST) in the US revealed that facial recognition algorithms are far likelier to misidentify certain groups than others: Asian and African-American people were misidentified up to 100 times more often than Caucasian people.

Privacy is another major point of debate between advocates and critics of big data and artificial intelligence. In 2020, hackers leaked the facial recognition firm Clearview AI’s sensitive client list. The firm’s clientele, including the FBI, Interpol, the US Department of Justice and private businesses such as Macy’s and Best Buy, had access to over three billion photographs in Clearview AI’s database. The data breach revealed the shady activities of individual players looking up private citizens without appropriate oversight and stoked surveillance fears and privacy concerns.

Getting back to the matter at hand, Article 4 of the EU proposal prohibits certain uses of AI, labelled as unacceptable risk. However, as per Daniel Leufer, European policy analyst at Access Now, the descriptions of AI systems in this category are ‘vague and full of language that is unclear’ and contain significant loopholes. For instance, there is a proposed ban on systems that manipulate users to distort their behaviour in a manner that can cause them or someone else psychological or physical harm. However, determining what is detrimental to an individual falls under the purview of national law.

Though remote biometric identification comes under the high-risk category, exceptions could be made for police surveillance, provided there is judicial authorisation. The framework thus leaves loopholes that allow an end-run around the regulations; read the fine print, and the legal framework doesn’t exactly put surveillance fears to rest.

As for reactions to the new regulation, the White House had asked the EU to avoid overregulating AI to prevent Western innovation from being upstaged by China. Others, however, have welcomed the regulation and believe it will help build trust in these systems. Peter van der Putten of Pegasystems said tech vendors and consumers alike would benefit from regulation, calling the proposed legal framework a ‘good first step’.

Mita Chaturvedi