AI is penetrating every aspect of our lives, directly shaping how we function as a society. Incidents such as the Google-Timnit Gebru fiasco, the Cambridge Analytica scandal, the SolarWinds hack, and surveillance overreach are the result of years of neglecting the ethical side of AI.
Unfortunately, as a 2018 MIT study puts it, “there’s a big gap between how AI can be used and how it should be used”. The study found that while 30 percent of large companies in the US are actively deploying AI applications, few of them have a concrete plan to ensure ethical fairness.
The role of an AI ethicist has never been more important. We caught up with Aparna Ashok, an AI ethics researcher and tech anthropologist, to understand her work around adopting a ‘humanity-first approach when designing emerging technology services’. Aparna currently runs Ethics Sprint, a platform that helps technology companies embed ethics into their product development process.
Aparna was named to Lighthouse3’s list of ‘100 Brilliant Women in AI Ethics’ for 2020 and sits on the Advisory Group of Wellcome Trust’s Data Labs, a strategic initiative of the UK’s largest charitable foundation.
How It All Started
Aparna started her career consulting for companies on building responsible businesses. Back in 2015, she worked with an impact organisation in India delivering healthcare to low-income communities. Part of her work involved designing an electronic health record and a training platform. The experience led her to combine her anthropology background with technology.
“Technology anthropology consists of two things — understanding human needs and converting them into a technological product and studying the macro of how these technological interventions change our everyday life. My work as a tech anthropologist revolves around the study of the interaction between people and digital solutions, the changing nature of technology and its impacts on society.”
Aparna did her Master’s thesis on ‘Anticipatory Ethics for AI’. Interviewing tech practitioners as part of her research gave Aparna perspective on the risks and opportunities at the intersection of AI and society.
“AI is a powerful, influential tool with far-reaching and real implications for society and individuals. Understanding its ubiquity – we are all already subjected to data-driven recommendations and profiling – and the fact that it is only going to be more prevalent is what drew me to AI ethics. How AI systems change our future depends on the people and policies that guide their implementations. My master’s thesis showed that the owners and technologists working on these systems, even when they have good intentions, are not able to see the implications of the technical decisions they make. While this is true for a lot of technology, the opaque nature, decision-making capability and self-learning capacity of automated decision-making systems make this an urgent and critical matter to be considered,” she said.
In 2018, Aparna developed a framework of ‘Ethical Principles for Humane Technology’. “I developed and refined this framework to create a common language to reflect on humanity-related implications within the product design process.” In her report, Aparna noted that while companies race to harness large amounts of data, analytics, and computational power to build accurate systems, the crucial aspect of the ‘real-life consequences of these decisions on living, breathing human beings’ is often overlooked.
The framework lists six lenses to understand the impact of AI on human life.
Credit: Aparna Ashok
Well-being: Aligning system goals with the best interests of humanity. This can be achieved by keeping the user informed of system goals, designing systems that enable competency and connection, and building an overall business model that supports human outcomes.
Inclusion: Embracing diversity and creating a sense of belonging. Inclusion in system design can be achieved by mapping and accounting for the diverse capabilities of users, representing different groups of users in algorithm training, and incorporating representatives from the target group in the team.
Privacy: Making sure the information collected, analysed, processed, and shared honours the user’s ownership.
Security: Protecting users’ psychological, emotional, intellectual, digital, and physical safety.
Accountability: Creating transparency in decision-making, addressing biases, and giving users an opportunity to challenge decisions.
Trust: Creating a reliable environment that promotes ‘authentic engagement’.
“What is harder is putting these into practice within quick product design cycles. The important thing to remember is that whatever role you hold within technology, you have a responsibility to educate yourself on the harms as well as the benefits of what you are working on. And you have a voice that can be used to question objectives and advocate for those affected by your solution who don’t have a voice,” said Aparna.
“The adoption of AI ethics and a humanity-first approach leads to Responsible AI. This refers to automated self-learning systems that are built with context in mind; at a minimum they fulfil human rights requirements, and where possible they are explicit about advancing social development targets (which can be measured through the UN’s Sustainable Development Goals, amongst others). India aspires to these standards, as shown by NITI Aayog’s latest AI strategy – Responsible AI for All,” said Aparna.
I am a journalist with a postgraduate degree in computer network engineering. When not reading or writing, one can find me doodling away to my heart’s content.