At the biggest Women in AI conference, The Rising 2022, Kamakshi Anantharaman, the global head of Google Cloud Platform Alliance at Quantiphi, spoke about the inherent biases and steps we can take to mitigate them – a much-needed leap for women in society, allowing them not just to survive but thrive and rise in the world of artificial intelligence (AI).
Anantharaman has more than 16 years of experience navigating diverse verticals of engineering, client engagement, sales and delivery across healthcare, advertising, retail, and consumer banking. She is passionate about applying AI to solve complex real-world problems and has successfully built and managed cross-capability teams with a singular focus on assisting clients through their digital transformation journeys.
Pointing to the National Family Health Survey released in March 2022, which stated that India has 1020 females per 1000 males, Anantharaman said: “That’s bias right there.”
According to a 2020 World Economic Forum report, women hold nearly 26 per cent of data and AI positions. Another study, the Stanford Institute for Human-Centred AI's 2021 AI Index Report, stated that women made up nearly 16 per cent of tenure-track faculty focused on AI globally.
“There are only 30 per cent women in AI,” said Anantharaman. Moreover, she said that women make up less than 25 per cent of professional clusters like data computing, engineering, and data and AI, and less than 20 per cent of users on platforms such as datasciencecentral, Kaggle, OpenML, and StackOverflow.
“What exactly is bias?” asked Anantharaman. She described it as an unintended and unwanted outcome that arises from an algorithm, caused by something skewed in the data or by a prejudiced assumption made when designing the algorithm. In other words, it is an anomaly in the output of machine learning algorithms introduced during the algorithm development process or present in the training data.
Citing examples of such anomalies, she said that in 2019, a husband and wife applied for the same credit card, but the credit card company set the woman’s credit limit at almost half the man’s. Similarly, in 2015, a large tech organisation realised its AI recruiting system was biased against women candidates during screening because the historical data used to train the model reflected male dominance across tech.
Further, she said, in 2019, a social networking site experimented with personalised adverts; the algorithm showed job adverts for nursing or secretarial work predominantly to women, and adverts for roles such as taxi driver or janitor predominantly to men.
Types of biases and how to mitigate them
Anantharaman said that there are three types of biases: algorithmic bias, data bias, and human bias. “Why does it exist?” she asked. Her answer: the entire ecosystem is designed by humans, which is something we have to live with.
Further, she said we should take the necessary measures to mitigate these biases across the ML lifecycle of a project, and suggested the following approaches –
- Examine context
- Focus on explainable AI
- Work on complete and representative data
- Deepen research on bias and model fairness
- Build a rigorous testing regime
- Educate and govern
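To illustrate what a rigorous testing regime for model fairness could look like in practice, here is a minimal sketch of one common check, the demographic parity gap (the difference in positive-prediction rates between groups). All data, names, and the tolerance threshold below are hypothetical, not from the talk.

```python
# Minimal sketch of one fairness check: demographic parity difference.
# All data and thresholds are hypothetical, for illustration only.

def demographic_parity_difference(predictions, groups):
    """Difference in positive-prediction rates between groups.

    predictions: list of 0/1 model outputs (e.g. 1 = approve applicant)
    groups: parallel list of group labels (e.g. "men", "women")
    """
    rates = {}
    for label in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical screening results: the model approves 4 of 5 men
# but only 2 of 5 women.
preds = [1, 1, 1, 1, 0, 1, 1, 0, 0, 0]
groups = ["men"] * 5 + ["women"] * 5

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # hypothetical tolerance
    print("Potential bias: investigate the training data and features.")
```

A check like this can be wired into the model's test suite so that a large gap between groups fails the build, forcing a review of the training data before deployment; libraries such as Fairlearn provide production-grade versions of these metrics.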
A huge leap with a small step
Anantharaman said that there is a need for female role models at the ground level; creating that intervention will help foster the growth of women in the ecosystem. “This is where the call to action comes,” she added, urging everyone to encourage each other to engage, educate, empower and endorse. “That’s the call for action!” concluded Anantharaman.