
Can ML Models Eliminate Bias From Datasets On Their Own?

Ram Sagar
  • A major challenge in representation learning for NLP is to produce models that are robust to dataset biases.

Data is generated by humans and curated by humans, and yet there is an ambitious pursuit of Artificial General Intelligence. AI researchers who train models to approach human-level sophistication walk a tightrope between accuracy and bias.

Typically, methods designed to avoid dataset biases rely on explicitly modelling those biases. Such approaches require isolating the specific biases present in a dataset, and they end up being expensive, time-consuming and error-prone. Experts believe it is unrealistic to expect such analysis for every new dataset. Biases are usually tackled in the preliminary phase of an ML pipeline by being cautious with data collection or curation. However, biases can still creep into models through assumed domain expertise, the ignorance of practitioners and a thousand other things.

There can always be issues with the data and models that are deployed, such as:

  • an incorrect model gets pushed
  • incoming data is corrupted
  • incoming data changes and no longer resembles datasets used during training.

In a recent work by a team of Hugging Face and Cornell researchers, the authors explore the observation that models with limited capacity primarily learn to exploit biases in a dataset. They leverage the errors of such limited-capacity models to train a more robust model, with the objective of eliminating the need to hand-craft a biased model.



Learning From Others’ Mistakes

According to the authors, assuming knowledge of the underlying dataset bias is quite restrictive. Finding biases in established datasets may require access to private details about the annotation procedure, and actively reducing surface correlations during the collection of new datasets is challenging, given how many potential biases there are. So, they used two models, a weak learner and a main (robust) model, where one model learns from the other’s mistakes.

Method overview:

  • A weak learner is trained with a standard cross-entropy (CE) loss.
  • The main model is trained via a product of experts (PoE) to learn from the weak learner’s errors.
  • The idea is for the robust main model to learn to make predictions that compensate for the weak learner’s mistakes (a minimal code sketch follows the next paragraph).

According to Geoff Hinton, who introduced the concept of Products of Experts (PoE), it can produce much sharper distributions than the individual expert models. Each model can constrain different dimensions in a high-dimensional space, and their product will then constrain all dimensions. For instance, for NLP tasks, one expert can ensure that the tenses agree and that there is a number agreement between the subject and verb.
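
To make the training setup concrete, here is a minimal PyTorch sketch of a PoE objective of this kind, assuming an already fine-tuned, frozen weak learner and a main classifier that both output logits over the same labels. The function and variable names are illustrative and not taken from the paper’s code.

```python
# Minimal product-of-experts (PoE) debiasing sketch (illustrative, not the authors' code).
import torch
import torch.nn.functional as F

def poe_loss(main_logits, weak_logits, labels):
    # Summing log-probabilities multiplies the two experts' distributions;
    # F.cross_entropy applies a final log-softmax, which renormalises the
    # product before taking the negative log-likelihood. The main model is
    # therefore pushed to supply the probability mass the weak learner gets wrong.
    combined = F.log_softmax(main_logits, dim=-1) + F.log_softmax(weak_logits, dim=-1)
    return F.cross_entropy(combined, labels)

def poe_training_step(main_model, weak_model, inputs, labels, optimizer):
    # Stage 1 (not shown): the weak learner is fine-tuned with standard
    # cross-entropy. Stage 2 (here): it is frozen, and only the main model
    # receives gradients through the combined distribution.
    with torch.no_grad():
        weak_logits = weak_model(**inputs).logits
    main_logits = main_model(**inputs).logits
    loss = poe_loss(main_logits, weak_logits, labels)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

In a setup like this, the weak learner is only used during training; at test time the main model would make predictions on its own.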

For the experiments, the authors used English datasets and followed the standard setup for BERT training. The main model is BERT-base with 110M parameters. To mitigate dataset biases without hand-crafting a biased model, the researchers used PoE so that the robust model makes predictions that compensate for the weak learner’s mistakes. The authors fine-tuned BERT variants ranging from 4.4 to 41.4 million parameters and used them as weak models in the PoE setting. On running the experiments, the authors found that the main model’s out-of-distribution performance increases as the weak model becomes stronger (with more parameters) up to a certain point, whereas in-distribution performance drops slightly at first and then more sharply.
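
As an illustration of such a sweep over weak-model sizes, the snippet below loads a few of the publicly released compact BERT checkpoints from the Hugging Face Hub as candidate weak learners. The specific checkpoint names, approximate parameter counts and the three-way label setup are assumptions for this sketch rather than details confirmed in the article.

```python
# Illustrative sketch: candidate weak learners of increasing capacity.
from transformers import AutoModelForSequenceClassification

WEAK_CHECKPOINTS = {
    "google/bert_uncased_L-2_H-128_A-2": "~4.4M parameters (BERT-Tiny)",
    "google/bert_uncased_L-4_H-256_A-4": "~11M parameters (BERT-Mini)",
    "google/bert_uncased_L-4_H-512_A-8": "~29M parameters (BERT-Small)",
    "google/bert_uncased_L-8_H-512_A-8": "~41M parameters (BERT-Medium)",
}

weak_models = {
    name: AutoModelForSequenceClassification.from_pretrained(name, num_labels=3)  # 3-way labels assumed
    for name in WEAK_CHECKPOINTS
}
# Each weak model would first be fine-tuned with plain cross-entropy, then
# frozen and paired with the 110M-parameter BERT-base main model in the
# PoE objective sketched above.
```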


By leveraging a weak learner with limited capacity and a modified product of experts training setup, this work shows that dataset biases do not need to be explicitly known or modeled to train models that can generalise significantly better to out-of-distribution examples.

Key Takeaways

  • This work pushes the envelope of automation of bias mitigation in datasets.
  • This work demonstrates that there is no need to know or explicitly model dataset biases to train more robust models that generalize better to out-of-distribution examples.
  • The methods discussed in this work can be used to retain improvements in out-of-distribution settings even if no particular bias is targeted by the biased model.

Finding one-stop solutions to problems that have piled up over millions of years of human evolution and progress is almost impossible in a machine learning context. For models to be unbiased, the ecosystem must be populated with tools that maintain accuracy while requiring the least possible interference from human players. Tools that can automate bias mitigation would be a great addition to the ongoing efforts in Responsible AI research.

Find the original paper here.
