A Treatise On Data Leakages In Trained Machine Learning Models


Deep learning models can be fed with all kinds of data: personal, sensitive and more. As defined by GDPR, personal data is any data that relates to an identified or identifiable natural person. Sensitive data, in turn, is personal data revealing racial or ethnic origin, biometric data processed solely to identify a human being, health-related data, genetic data, or data concerning a person’s sex life or sexual orientation. As per India’s Personal Data Protection Act (PDPA), which is yet to be passed, “genetic data” is personal data relating to a natural person’s inherited or acquired genetic characteristics; it carries unique information about the behavioural traits, physiology or health of that person and other biological aspects. The specificity of these definitions, and the demand for strict adherence, makes it challenging for AI researchers to develop models and methods that don’t leak sensitive data into the wild.

Such data requires more robust safeguards for processing, storage and transfer. The objective is to mitigate the risk of leaking any sensitive personal data that can be traced back to a real individual’s identity. According to researchers from the University of Edinburgh, the real-world risk of linkage back to identity is complex. It depends on the frequency of the data points, the size of the source dataset, the availability of public datasets to support general re-identification strategies, and public-domain information that makes it easier to identify specific individuals.


Types of Data Leakages

(Source: Jegorova et al.)

In the work by Jegorova et al., the authors have surveyed the ML landscape for potential avenues for data leakage:

1| Based on the type of data

Leakage in text data

This data includes individuals’ names (users, clients, patients, security personnel, etc.), dates of birth, postcodes, phone numbers, unique IDs, etc. When an ML model is trained on such data, the deployed model can leak specific sensitive data entries, features or even complete data records.

Leakage in image data

Given how generative models have improved, data leakage from images can have disastrous outcomes. Think of a bug in a model that lets hackers lock you out of your iPhone, or even out of a house fitted with smart locks. Leaked image data can include people’s faces or other identifying features. When an ML model is trained on such sensitive image data, the authors write, a generative model can dish out a lookalike based on re-identifiable bone/denture implants and other features unique to an individual.

Leakage in tabular data 

In tabular data, the survey notes, datasets are constrained to predefined variables and values. This increases the risk of identifying an individual with greater accuracy, based on the factors below (a simple disclosure check is sketched after the list):

  • statistical disclosure risks, 
  • governing features such as the sensitivity of the tabular data,
  • geography and population size, 
  • zero-value entries, and
  • small group linkage to specific clinical providers. 
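As a rough illustration of the first two factors, here is a minimal, hypothetical k-anonymity-style check: count how many records share each combination of quasi-identifiers and flag the ones whose combination is rare enough to single a person out. The column names and threshold are assumptions for illustration, not taken from the survey.

```python
# Minimal sketch of a statistical-disclosure check on tabular data.
# Column names ("postcode", "age_band", "sex") and the threshold are
# purely illustrative assumptions.
from collections import Counter

def low_k_records(records, quasi_identifiers, k_threshold=5):
    """Return records whose quasi-identifier combination occurs fewer
    than `k_threshold` times, i.e. the easiest to link to a person."""
    combos = Counter(
        tuple(rec[col] for col in quasi_identifiers) for rec in records
    )
    return [
        rec for rec in records
        if combos[tuple(rec[col] for col in quasi_identifiers)] < k_threshold
    ]

if __name__ == "__main__":
    data = [
        {"postcode": "EH8", "age_band": "30-39", "sex": "F", "diagnosis": "A"},
        {"postcode": "EH8", "age_band": "30-39", "sex": "F", "diagnosis": "B"},
        {"postcode": "G12", "age_band": "70-79", "sex": "M", "diagnosis": "C"},
    ]
    risky = low_k_records(data, ["postcode", "age_band", "sex"], k_threshold=2)
    print(f"{len(risky)} record(s) fall below the k-anonymity threshold")
```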

2| Based on the type of tasks

Regression

In fields such as financial forecasting, marketing trends and weather prediction, where regression techniques are widely used, several prior works have reported model-level leakage for different sorts of data, including financial and medical time series, numerical tabular data, and mixed-feature tabular data.

Classification

According to the authors, image classification is the most well-researched task in terms of leakage and privacy attacks. Researchers have already demonstrated that data samples can be reconstructed from as little information as a class label, using membership inference attacks (MIAs), property inference attacks and model extraction. Within classification, however, applications on tabular data are the least explored, and classifiers for time-series problems even less so.
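As a concrete illustration, below is a minimal sketch of a confidence-thresholding membership inference attack, one of the simplest MIA variants. The overfit model, synthetic data and threshold are all assumptions for illustration (predict_proba follows the scikit-learn convention); the attacker simply flags a record as a training-set member whenever the model is unusually confident about its true label.

```python
# Minimal sketch of a confidence-thresholding membership inference attack.
# The overfit decision tree and synthetic data are stand-ins; the point is
# the gap in "member" guesses between training and held-out records.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def confidence_mia(model, X, y_true, threshold=0.9):
    """Guess True ('was in the training set') when the model's probability
    for the record's true label is at least `threshold`."""
    probs = model.predict_proba(X)                  # shape (n, n_classes)
    conf = probs[np.arange(len(y_true)), y_true]
    return conf >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train, X_out = rng.normal(size=(200, 5)), rng.normal(size=(200, 5))
    y_train = (X_train[:, 0] > 0).astype(int)
    y_out = (X_out[:, 0] > 0).astype(int)
    # Flip some training labels so the fully grown tree has to memorise noise.
    flip = rng.random(200) < 0.2
    y_train[flip] ^= 1
    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    in_rate = confidence_mia(model, X_train, y_train).mean()
    out_rate = confidence_mia(model, X_out, y_out).mean()
    print(f"flagged as members: train={in_rate:.2f}, held-out={out_rate:.2f}")
```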

Generation


Over the past couple of years, generative models have been major contributors to the AI hype. Algorithms have generated paintings that were auctioned for millions of dollars. At the same time, these models let a genie out of the bottle in the form of deepfakes that can fool an ordinary viewer and seed misinformation of catastrophic proportions. A well-trained generative adversarial network (GAN) can capture the underlying distribution of the real data, which explains the effectiveness of deepfakes. But, according to the authors, even sampling these models can give away sensitive information about individuals in the training set.
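One simple way to probe for this, sketched below under assumptions (a synthetic array stands in for a real generator, and plain Euclidean distance stands in for a domain-appropriate metric), is to draw samples from the trained generator and measure how close each one lands to its nearest training record: samples sitting almost on top of a training point suggest the model is reproducing individuals rather than the underlying distribution.

```python
# Minimal sketch of a nearest-neighbour leakage probe for a generative model.
# A real generator's samples would replace `generated`; here a stand-in
# array mixes fresh noise with near-copies of training rows to mimic
# partial memorisation.
import numpy as np

def nearest_train_distance(generated, train):
    """Euclidean distance from each generated sample to its closest
    training sample (brute force; fine for small arrays)."""
    diffs = generated[:, None, :] - train[None, :, :]   # (n_gen, n_train, d)
    return np.linalg.norm(diffs, axis=-1).min(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(size=(500, 8))
    generated = np.vstack([rng.normal(size=(80, 8)), train[:20] + 1e-3])
    dists = nearest_train_distance(generated, train)
    print("suspiciously close samples:", int((dists < 0.01).sum()))
```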

3| Miscellaneous

Memorisation of specific training samples occurs when the model assigns some samples a significantly higher likelihood than would be expected by random chance. In deep learning and deep reinforcement learning, however, some degree of memorisation is often desirable and may be unavoidable, which makes the problem harder: the line between a feature and a bug is thin. The authors warn that memorisation can lead to serious privacy and legal concerns about publicly sharing trained ML models or providing them as a service.
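A common way to quantify this, sketched below in the spirit of exposure-style canary tests (the scoring function and the toy model are assumptions for illustration), is to rank the model’s likelihood of a secret “canary” inserted into the training data against many look-alike candidates it never saw; a canary that consistently outranks them has been memorised.

```python
# Minimal sketch of an exposure-style memorisation test. `log_likelihood`
# is a hypothetical scoring function for whatever model is under test;
# the toy stand-in below simply "memorises" the canary.
import math
import random

def exposure(log_likelihood, canary, candidates):
    """Exposure in bits: log2(pool size) minus log2(rank of the canary)."""
    canary_score = log_likelihood(canary)
    rank = 1 + sum(log_likelihood(c) >= canary_score for c in candidates)
    return math.log2(len(candidates) + 1) - math.log2(rank)

if __name__ == "__main__":
    random.seed(0)
    secret = "my pin is 4312"

    def toy_ll(s):
        # Pretend the model assigns the memorised secret the highest score.
        return 0.0 if s == secret else -random.uniform(1.0, 10.0)

    pool = [f"my pin is {random.randint(0, 9999):04d}" for _ in range(255)]
    print(f"exposure of the canary: {exposure(toy_ll, secret, pool):.2f} bits")
```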

Feature leakage is characterised by the leakage of sensitive features of the data. It implicitly enables property inference attacks, which can be a threat to collaborative learning models.

The leakages mentioned above are exploited by membership inference attacks, property inference attacks, model inversion attacks and model extraction attacks. These are the most popular attacks and are being actively researched. According to the survey, they can be thwarted with the following defence mechanisms.

For instance, data obfuscation perturbs sensitive information through scrambling or masking: noise is deliberately added to the data. This creates a trade-off between user privacy and service quality, governed by the severity of the perturbation.
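A minimal sketch of noise-based obfuscation, assuming numeric columns and a hand-picked noise scale (both assumptions for illustration), might look like this; the scale parameter is the knob for the privacy/utility trade-off mentioned above.

```python
# Minimal sketch of data obfuscation by noise addition: perturb a numeric
# column with zero-mean Laplace noise before it reaches the training
# pipeline. The noise scale is an illustrative choice, not a prescription.
import numpy as np

def obfuscate(column, scale=1.0, seed=None):
    """Return a noisy copy of a 1-D numeric array."""
    rng = np.random.default_rng(seed)
    values = np.asarray(column, dtype=float)
    return values + rng.laplace(loc=0.0, scale=scale, size=values.shape)

if __name__ == "__main__":
    ages = [34, 29, 61, 45]
    print(obfuscate(ages, scale=2.0, seed=0))   # perturbed, no longer exact
```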

Data sanitisation, on the other hand, overwrites the sensitive information within the data with realistic-looking synthetic data, using techniques such as label flipping. These defences allow researchers to anticipate how a model behaves when attacked.
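A minimal sketch of the label-flipping flavour of sanitisation, with an assumed flip rate chosen purely for illustration, could be as simple as the following.

```python
# Minimal sketch of sanitisation via label flipping: a random fraction of
# the binary labels is replaced before the data is shared, so no single
# record can be trusted to be genuine. The flip rate is an assumed knob.
import numpy as np

def flip_labels(labels, flip_rate=0.1, seed=None):
    """Return a copy of binary (0/1) labels with a random subset flipped."""
    rng = np.random.default_rng(seed)
    flipped = np.asarray(labels).copy()
    mask = rng.random(flipped.shape) < flip_rate
    flipped[mask] = 1 - flipped[mask]
    return flipped

if __name__ == "__main__":
    y = np.array([0, 1, 1, 0, 1, 0, 0, 1])
    print(flip_labels(y, flip_rate=0.25, seed=0))
```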

That said, brute-force defences are not advisable for the ML systems doing the heavy lifting; the right approach varies with the use case. Data leakage in a healthcare setup, for example, is far more dire than in a recommender system storing favourite movies. And, as discussed earlier, bugs like memorisation can sometimes be features. Furthermore, most popular defences are case-specific and are yet to be challenged at scale.

Privacy-preserving ML applications are already a reality, thanks to on-device federated learning and similar techniques. But every new paradigm of ML models brings a new challenge. Generative models have outdone classification models at creating problems, and while one problem such as memorisation is being worked out, another such as “catastrophic forgetting” gets discovered. With governments tightening rules through GDPR, PDPA and their equivalents, research into data leakages and their defences has never been more critical.
