Biased Algorithms, Sexism in AI & Need For Diversity Were 3 Big Takeaways From The Rising 2019




Vaishali Kasture, The Rising 2019

The Rising 2019, held on International Women’s Day, was one of the largest gatherings of women in the fields of analytics, data science and AI. The conference hosted a diverse set of speakers who shared their professional experiences and talked about their growth stories and the challenges they faced as women. The keynote was delivered by Saraswathi Ramachandra, Head of Analytics Center of Excellence at Danske IT, who kick-started the conference with a talk on how emerging technologies such as AI are turning out to be sexist.

Ramachandra, who strongly believes in bringing more women into technology, made a slew of relevant points about how sexism and gender bias in humans leads to technologies such as AI being biased. She focused on three core questions: is there a bias, is AI enabling the bias, and what can we do about it?



Saraswathi Ramachandra, Head of Analytics Center of Excellence at Danske IT

Addressing these three core areas, she began by highlighting how we often stereotype jobs by gender. For instance, when we talk about nurses or homemakers, the first thing that comes to mind is that it would be a woman. “We all have biases, and it is this bias that AI inherits and learns from,” said Ramachandra. This bias creates skewed datasets on which AI models are trained.

Is AI sexist?

She mentioned how some of the most noted chatbots and voice assistants, such as Alexa, Cortana and Siri, have a female voice. Beyond this, newer applications such as Google Translate, and the use of AI for hiring, have also exhibited biased results. Amazon recently scrapped its AI recruiting tool after it began discriminating against women candidates.

Similar instances are seen in facial recognition systems and even in business decision-making. In banking, for instance, systems assign lower credit ratings to women who earn as much as men, and loans to women, particularly single women, are often denied because the data these systems are trained on is biased. “We have a bias, and AI inherits it,” said Ramachandra.
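The kind of skew described here can be made concrete with a simple check. Below is a minimal sketch, using entirely hypothetical loan records, of the widely used “four-fifths” disparate impact test: if one group’s approval rate falls below 80% of another’s, the historical data, and any model trained on it, deserves scrutiny.

```python
def approval_rate(records, group):
    """Share of applicants in `group` whose loan was approved."""
    subset = [r for r in records if r["gender"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

# Hypothetical historical records: equal incomes, unequal outcomes.
loans = [
    {"gender": "F", "income": 50000, "approved": 0},
    {"gender": "F", "income": 50000, "approved": 1},
    {"gender": "F", "income": 50000, "approved": 0},
    {"gender": "F", "income": 50000, "approved": 1},
    {"gender": "M", "income": 50000, "approved": 1},
    {"gender": "M", "income": 50000, "approved": 1},
    {"gender": "M", "income": 50000, "approved": 0},
    {"gender": "M", "income": 50000, "approved": 1},
]

ratio = approval_rate(loans, "F") / approval_rate(loans, "M")
print(f"disparate impact ratio: {ratio:.2f}")  # a ratio below 0.8 flags possible bias
```

A model trained on records like these would simply learn the skew as a pattern, which is exactly the inheritance problem the speakers describe.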

A similar sentiment was echoed by Vaishali Kasture, Co-Founder at Sonder Connect, who has broad experience of working with AI on the business side. Having worked with AI in areas such as enabling financial inclusion for women and automating business processes, she made a similar point about how women are often denied credit.

The problem, she stated, lies with AI models that have been trained on historical data that may not be completely relevant for today’s age. “What happened in the past is not a prediction of what happens in the future. Historical data is not always right to train ML models,” she said.

For instance, drug trials are failing because they lack insights and data from women. The same is happening in many other areas, such as self-driving cars and recruiting. She stressed the need for diversity in data.

Smitha Ganesh, Principal Consultant and Data Scientist at ThoughtWorks, also spoke about fighting prejudice in artificial intelligence. “AI isn’t dangerous, but human bias is! One must be cognisant of any unintended consequences of using this technology,” she said. There may be a compounding effect as AI emerges, since the algorithms are self-learning from data.


How Can Technological Sexism Be Combated?

Both Ramachandra and Kasture believe that the best way to combat sexism is to have diverse data science teams. “It is important to analyse data and not make it a black box. The datasets should be analysed and filtered while ensuring that the dataset is large, diverse and accurate,” said Ramachandra.
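A first step in the kind of dataset analysis described above can be as simple as measuring group representation before training. The sketch below uses hypothetical field names and an arbitrary 30% threshold chosen for illustration; the point is the audit itself, not the specific cutoff.

```python
from collections import Counter

def representation_report(records, field, min_share=0.3):
    """For each group value of `field`, report its share of the dataset
    and flag it if the share falls below `min_share`."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {
        group: {"share": n / total, "under_represented": n / total < min_share}
        for group, n in counts.items()
    }

# Hypothetical training sample: 2 women, 8 men.
sample = [{"gender": "F"}] * 2 + [{"gender": "M"}] * 8
report = representation_report(sample, "gender")
print(report)  # "F" has a 0.2 share and is flagged as under-represented
```

Running a report like this before model training surfaces the skew while it can still be corrected by collecting or reweighting data, rather than discovering it in production.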

Ganesh too stressed a diversity-inclusive mindset in data collection to fight this prejudice and take AI to a higher level, recognising how critical training data is to AI systems.

“If we need to make AI safe in the future, it is important to develop a way for AI models to have a blue tick mark, similar to what social media has for verified accounts, only this time the verification should be given by women,” said Kasture in closing.



