Is Deep Learning Going To Be Illegal In Europe?

In a matter of months, the General Data Protection Regulation (GDPR) will become law throughout Europe, demanding a complete overhaul in the way artificial intelligence techniques are used in business settings. By May 25, 2018, the GDPR will become fully enforceable throughout the European Union, according to the EU GDPR timeline.

The coming deadline, now less than 100 days away, has sparked a debate among the AI research community and the tech giants who are scrambling to meet the EU’s data privacy and algorithmic fairness guidelines. For EU citizens, the GDPR strengthens their rights and ushers in a new era, unifying data protection rules and placing new obligations on tech enterprises around how they collect personal user data.

The forthcoming regulations have firmly divided Europe into two camps: a) those who welcome the need for data privacy and algorithmic fairness in society, and b) tech giants bristling at the thought of new challenges, such as asking for user consent in simpler terms and tackling the black box problem of AI, which, they fear, could eventually make some AI techniques illegal, with fines reportedly running as high as 4 percent of global turnover.

GDPR Highlights in a Nutshell

First up, let’s shine a spotlight on some of the highlights of GDPR:

Regulation on collecting data from EU citizens by companies, big and small: This rule isn’t limited to companies headquartered in the EU but extends to all organizations that hold data on EU citizens. From rethinking the size of text in terms and conditions to explaining how a company uses personal data to sell adverts, the GDPR requires companies to follow Privacy by Design principles.

Data Portability: The regulation states that a data subject can demand that his or her personal data be transferred directly to a new provider, without hindrance, in a machine-readable format. This is akin to switching mobile providers or social networks without losing any data. For companies like Google and Facebook, which are veritable data mines, and even for smaller data science start-ups, this sounds like a death knell: a mass exodus of data whenever users leave (a minimal sketch of such an export follows this list).

Right to be forgotten/Right to erasure: As emphasized in Article 17 of the GDPR, every data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay, and the controller shall have the obligation to erase it. Again, this will be a huge loss for tech behemoths that collect data in the form of cookies and reap gains from running targeted ads.

Algorithmic Fairness: The Right to Explanation of Automated Decision mandates that the data subject has a right to an explanation of decisions made by algorithms, and a right to opt out of some algorithmic decisions altogether if they are not satisfied with them. For example, if an applicant is refused a loan based on an automated decision, they have a right to seek an explanation. Tech companies deem this a severely harmful restriction on artificial intelligence, one that could drastically slow the development of AI technology, a field prized for its predictive accuracy.
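
On the data portability point above, here is what a machine-readable export might look like in practice: a minimal Python sketch, in which the UserProfile fields are purely hypothetical and not a schema the GDPR prescribes.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical record of a user's personal data -- the fields are
# illustrative, not mandated by the GDPR.
@dataclass
class UserProfile:
    user_id: str
    email: str
    ad_preferences: dict

def export_user_data(profile: UserProfile) -> str:
    """Serialize the profile as JSON, a common machine-readable format."""
    return json.dumps(asdict(profile), indent=2)

profile = UserProfile("u-1001", "jane@example.com", {"personalised_ads": False})
print(export_user_data(profile))
```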

High Performance vs Poor Explainability Conundrum of AI

We are not going to delve into the nitty-gritty of the EU guidelines, but will shine the spotlight on the biggest criticism of the most widely used technique in artificial intelligence – deep learning – and its interpretability problem, better known as the black box problem. Critics argue this would make it virtually impossible for any company to do AI, and could even make it illegal. AI experts and tech companies that profit from data are crying hoarse about the infeasibility of explaining algorithmic decisions, because the architecture of artificial neural networks makes it hard to decipher how an output was generated.

Well-known academician Dr. PK Viswanathan, Program Director at Great Lakes Institute of Management, took a shot at demystifying the black box problem of artificial neural networks at Cypher 2017. According to Dr. Viswanathan, the wide perception is that a neural network is a black box, but it is not completely a black box, and there are a few ways its output can be explained.

Citing an example, he said that the word artificial is important, and that the artificial neural network is a strong contender worldwide for prediction and classification problems. Unlike logistic regression and other supervised techniques such as random forest, which are statistically oriented, a neural network is a non-parametric, non-linear, complex relationship model-building exercise that is considered a universal approximator. Neural networks are used across classification and prediction problems; common applications include classifying buyers vs non-buyers in marketing and classifying risk.

Topology of an Artificial Neural Network

For example, let’s talk about a multilayer perceptron with two hidden layers, a class of feed-forward artificial neural network well known for its predictive accuracy. In this architecture, you have two input neurons, two hidden layers with four nodes, and then the output. Initially, one assigns some numbers as weights and biases and chooses an activation function, which could be a sigmoid function (a logistic function); the inputs are pushed through the network in a feed-forward manner, the weights are changed recursively on every pass, and finally you get the output.
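
To make that topology concrete, here is a minimal NumPy sketch of such a feed-forward pass; the random initialization, and our reading of the description as four nodes per hidden layer, are assumptions on our part:

```python
import numpy as np

def sigmoid(z):
    """Logistic activation function, as described above."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Two inputs -> two hidden layers of four nodes each -> one output.
sizes = [2, 4, 4, 1]

# Start with small random weights and zero biases.
weights = [rng.normal(scale=0.5, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def feed_forward(x):
    """Push an input vector through the network, layer by layer."""
    a = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        a = sigmoid(a @ W + b)
    return a

print(feed_forward([0.5, -1.2]))  # a single value in (0, 1)
```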

Now, in this architecture, the hidden layers are closely associated with the black box point – how the neural network is learning from the training set. This is where the major criticism steps in.

Every time a weight is changed, one applies a rule that minimizes the sum of the squares of the errors, and the iterations continue. This is the black box conundrum: the network can approximate any function, but it gives no insight into the relationship between the predictor variables and the outcome, explains Dr. PK Viswanathan. In a supervised learning problem, one can explain the precise relationship between y and x, but this is not possible to capture in a neural network.
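
A hedged sketch of that recursive weight-update loop, minimizing the sum of squared errors by gradient descent; the toy XOR dataset, learning rate and epoch count are illustrative assumptions, not details from the talk:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
sizes = [2, 4, 4, 1]  # same topology as the sketch above
W = [rng.normal(scale=0.5, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]

# Toy training set (XOR), purely for illustration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

lr = 1.0
for _ in range(5000):
    # Forward pass, keeping every layer's activations for the backward pass.
    acts = [X]
    for Wl, bl in zip(W, b):
        acts.append(sigmoid(acts[-1] @ Wl + bl))

    # Backward pass: gradient of the sum of squared errors (backpropagation).
    delta = (acts[-1] - y) * acts[-1] * (1 - acts[-1])
    for layer in range(len(W) - 1, -1, -1):
        grad_W = acts[layer].T @ delta
        grad_b = delta.sum(axis=0)
        if layer > 0:  # propagate the error signal to the previous layer
            delta = (delta @ W[layer].T) * acts[layer] * (1 - acts[layer])
        W[layer] -= lr * grad_W
        b[layer] -= lr * grad_b

print(acts[-1].round(2))  # predictions should approach [0, 1, 1, 0]
```

Each pass nudges the weights a little further downhill on the squared-error surface, which is exactly the recursion described above – and nothing in the final weight values explains why the network answers the way it does.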

The biggest criticisms of neural networks are:

  1. They lack explanatory power: we cannot describe what is happening inside the hidden layers, even though neural networks score high on universal approximation and accuracy.
  2. In the practical world, it is very difficult to interpret the synaptic weights, which is not the case in traditional techniques; this further adds to the black box puzzle, as the sketch below illustrates.
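
To see why raw synaptic weights resist interpretation, consider this short scikit-learn sketch; the synthetic dataset and layer sizes are our own illustrative choices:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# A synthetic binary classification problem, purely for illustration.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(4, 4), max_iter=2000, random_state=0)
model.fit(X, y)

# coefs_ holds one weight matrix per layer; the raw numbers carry no
# obvious meaning in terms of the original input features.
for i, W in enumerate(model.coefs_):
    print(f"Layer {i} weights, shape {W.shape}:")
    print(W.round(2))
```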

Solving AI’s Black Box Conundrum

Now, many researchers are already working on explaining how neural networks make decisions. Let’s cite a couple of approaches:

LIME: Short for Local Interpretable Model-Agnostic Explanations, this technique involves perturbing the input variables in many ways to see which changes move the prediction score the most. In LIME, local refers to local fidelity – i.e., the explanation should reflect the behaviour of the classifier “around” the instance being predicted. An explanation is useless unless it is interpretable – that is, unless a human can make sense of it. And LIME is able to explain any model without needing to ‘peek’ into it, so it is model-agnostic. The details are in the original research paper, “Why Should I Trust You?” by Ribeiro et al. (2016).
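
A minimal usage sketch with the open-source lime package, assuming a scikit-learn random forest on the Iris dataset (both illustrative choices):

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Perturb one instance many times, watch the model's score move, and fit
# an interpretable linear model that is locally faithful around it.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # (feature condition, weight) pairs
```

The output pairs each feature condition with a signed weight, showing how much that feature pushed the prediction for this particular instance.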

DARPA’s Explainable AI: DARPA is creating a suite of machine learning techniques that produce more explainable models while maintaining a high level of learning performance. Dubbed Explainable AI (XAI), the program aims to enable human users to understand and manage the coming generation of AI partners. The main advantage is that the new techniques can potentially circumvent the need for an extra explanation layer. Another explanation component could come from training a neural network to associate semantic attributes with hidden layer nodes, which could boost the learning of explainable features.

 

Richa Bhatia
Richa Bhatia is a seasoned journalist with six years’ experience in reportage and news coverage, and has had stints at Times of India and The Indian Express. She is an avid reader, mum to a feisty two-year-old and loves writing about the next-gen technology that is shaping our world.
