
AI for Good: Creating Responsible and Ethically Aligned Solutions in Times of Disruption

“The key lies in user-centricity. We need to think beyond just the algorithms and be accountable to those whose lives are directly or indirectly impacted by our solutions.”

Chandramauli Chaudhuri leads the AI and Machine Learning initiatives across Fractal’s Tech, Media & Entertainment practice, working with senior business stakeholders at some of the leading global enterprises. Besides being involved in AI research, capability enhancements, and solution deployments, he collaborates with World Economic Forum execs, academicians, and policymakers as an active member of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. He is also part of Planet Positive 2030, a parallel IEEE Standards Association initiative to inspire policy creation and drive pragmatic changes for a sustainable future.

Analytics India Magazine interviewed Chandramauli to gain insights into designing ethically aligned and responsible AI solutions.

AIM: Why is it important in today’s world for AI platforms or services to be Responsible and Ethically aligned?

Chandramauli: The field of AI has witnessed unprecedented progress in recent times. In the last few months, solutions like ChatGPT, Bard, CICERO, Midjourney and a few others have captured the imagination of researchers and users alike. The potential of such developments and their game-changing applications merits forward thinking, nurturing and continued investment. This, however, must not make us completely oblivious to their potential risks or harmful effects on our society. As users, practitioners, and developers of a revolutionary technology like AI, it is in our collective best interest to think about how our solutions may affect the different aspects of our lives, including privacy, safety, and overall well-being, and to assess and evaluate the long-term effects they can have on the economy, society, and the environment.

Quite understandably, many of these aspects are still unclear and demand further research and evaluation. But having a strong commitment towards ethical and responsible innovation right from the outset can help establish critical guardrails that prevent negative implications in the future – mitigating bias and encouraging social fairness, environmental sustainability, and self-determination at the individual level. The main objective is to be aware and vigilant without resorting to pessimism or disregarding the significance of such scientific innovation. With all its promise, we must understand that AI systems can only truly be successful when they are designed to serve a greater purpose – human-centricity and global welfare. It will be a failure on our part if AI systems are built in isolation as mere self-serving tools for reaching the short-term goals of a select group of beneficiaries. From this perspective, defining holistic ethical and responsible frameworks can help us identify and maintain that fine equilibrium between immediate needs and long-term purposes without resorting to extreme and possibly impractical ideas like imposing bans on AI development.

AIM: What are the best practices for enabling ethical alignment and responsibility in AI solutions across organisations and for project lifecycles?

Chandramauli: Clearly defining the accountability, explainability and ownership of AI systems is often the hardest and most critical step. It means having answers to some tough and fundamental questions – why does our solution recommend the outcomes that it does, how does it arrive at these recommendations, who is accountable for any harmful action the solution may generate, and what will it take to reverse such results?
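The “how does it arrive at these recommendations” question has standard tooling behind it. As a hedged illustration (the interview names no specific method or library), the sketch below uses permutation importance from scikit-learn, a model-agnostic way to ask which inputs a trained model actually relies on; the dataset and model are purely illustrative assumptions.

```python
# A hedged, model-agnostic illustration of "how does it arrive at these
# recommendations": permutation importance shuffles one feature at a time
# and measures how much held-out accuracy drops. Dataset and model are
# illustrative assumptions, not anything described in the interview.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Large accuracy drops mark the inputs the model's decisions depend on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: mean accuracy drop {score:.3f}")
```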

For organisations investing in developing new AI technology, it starts from the top. A few good starting points would be to establish a dynamic and diverse work culture, foster a mindset of experimentation and learning, and define inclusivity and accessibility guidelines. Once these pillars are well-established, other downstream and project-level best practices like data and model governance, solution transparency, usability testing, post-deployment monitoring and tracking, etc. are much easier to implement. The onus lies with senior leaders and execs to set the right example at the enterprise level and ensure that every individual involved in building such systems is educated, trained, and motivated to prioritise such considerations.
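To make the post-deployment monitoring point concrete: one common, lightweight check is to compare the distribution of a model input in production against its training-time distribution and flag drift. The sketch below uses the Population Stability Index (PSI); the 0.2 alert threshold and the synthetic data are illustrative assumptions, not a practice Fractal describes.

```python
# A minimal sketch of one post-deployment monitoring practice: flagging data
# drift with the Population Stability Index (PSI). The threshold and data
# below are illustrative assumptions, not anything from the interview.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a training-time sample and a production sample of one feature."""
    # Bin edges come from the training (expected) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    # Clip production values into range so tail outliers land in the end bins.
    actual = np.clip(actual, edges[0], edges[-1])
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero and log(0).
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(42)
training_sample = rng.normal(loc=0.0, scale=1.0, size=10_000)
production_sample = rng.normal(loc=0.4, scale=1.2, size=10_000)  # deliberately drifted

psi = population_stability_index(training_sample, production_sample)
if psi > 0.2:  # 0.2 is a commonly cited "significant shift" rule of thumb
    print(f"PSI = {psi:.3f}: significant drift, trigger a model review")
```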

AIM: How can the design and development of AI solutions be made more inclusive and accessible to a diverse range of stakeholders and users?

Chandramauli: An inclusive and accessible approach to AI means ensuring that the benefits of its applications are available to everyone, regardless of their background or abilities. Many AI organisations strive to produce products and services which are inclusive, responsible, and ethical – these goals look good in theory but become quite challenging in actual deployment. This is because doing so requires a fundamental shift in mindset and thought process.

The key lies in user-centricity. We need to think beyond just the algorithms and be accountable to those whose lives are directly or indirectly impacted by our solutions. At the end of the day, we need to remember that AI is just the means of serving our consumers and not the end goal or success by itself.

At an organisational level, it implies reconsidering many of the key decisions in the overall process and enabling backward decisioning – working backwards from the user on the data to be collected, the analysis to be performed, the models to be used, the outcomes to be recommended, the tests to be run, and so on. From a broader perspective, it drives the need for greater collaboration and transparency among researchers, governments, and corporations. The goal should be to regulate the design and development of products to make them safer without slowing down the progress of AI research and knowledge-sharing.

AIM: How do you see the relationship between AI and the law?

Chandramauli: That’s a great question. From a societal standpoint, subjects like science, technology, public policy, economics, etc. are not really independent of each other. They are all fundamentally tied together in defining the communal fabric that we live in. The same applies to the relationship between AI and the law – the legal and regulatory infrastructure guides the development of technology, and vice versa.

Just to elaborate on this a little more – on one hand, we have the adoption of responsible AI systems within legal frameworks. This is where AI can help create new rules and regulations, improve the functioning and efficiency of legal systems, and enhance their ability to contribute to overall human well-being. On the other hand, we have active debates around providing legal status to AI systems. This involves the assignment of restrictions, rights, and obligations towards the development and deployment of such systems. An important aspect which has recently garnered a lot of attention is the ownership and copyright laws related to content generated by AI. Notably, systems like DALL-E, ChatGPT, etc. can sometimes exclude citations or attributions to the original sources. This naturally raises a lot of questions and concerns among large sections of society about the infringement of intellectual property rights.

It goes without saying that well-defined legal frameworks to address such sensitive and critical subjects are still in their infancy. But, with time and the right intentions, we shall hopefully experience a future where legal systems and AI work in harmony to navigate and resolve many such moral dilemmas. The ‘Ethics Guidelines for Trustworthy AI’ and the subsequent ‘Artificial Intelligence Act’ proposed by the European Commission in 2019 and 2021, respectively, along with a number of standardisation programs developed by IEEE, are notable milestones in this endeavour.

AIM: What are the ethical considerations around generative AI models like GPT?

Chandramauli: This is a particularly relevant topic considering the latest developments. But before going into the ethical considerations, it is important to first understand what Generative Pre-trained Transformers, or GPTs, really are.

Solutions like ChatGPT and, more recently, GPT-4 are built on top of massive Large Language Models, i.e., Deep Learning architectures comprising billions to possibly even trillions of parameters, trained on vast corpora of text data to produce human-like conversational outputs. While the results are exceptionally creative and awe-inspiring, this also makes such systems almost complete black boxes under the hood, leading to concerns about the traceability, accuracy, and dependability of the information they produce. With the growing proficiency and eloquence of these models, they can easily be used to propagate conspiracy theories, generate toxicity, and fabricate lies if left unregulated.
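To ground that description: a GPT-style model simply predicts the next token over and over, which is why its output can be fluent yet unsourced or wrong. The sketch below is a minimal illustration using the small open “gpt2” checkpoint via the Hugging Face transformers library – an assumption chosen for demonstration; the systems discussed in the interview are far larger and proprietary.

```python
# A minimal sketch of a GPT-style model in action: autoregressive next-token
# prediction. The small open "gpt2" checkpoint and the Hugging Face
# transformers library are stand-ins chosen for illustration; the interview's
# subjects (ChatGPT, GPT-4) are vastly larger, closed models.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Responsible AI systems should"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling-based decoding: the model picks plausible next tokens, with no
# built-in notion of sources or truth, hence the traceability concerns above.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```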

Moreover, as such technologies gain widespread popularity and access to sensitive personal information, questions arise about how responsibly user data is collected, stored, and used. After all, wherever there is mass storage of sensitive information, there are concerns about cybersecurity and privacy. It is fair to say that companies investing in Generative AI and similar technology would do well to pay extra attention to the overall trust, security, and reliability of such services. Without these measures in place, they risk a loss of faith among users, potentially damaging the ambitions and potential of AI.


Poulomi Chatterjee

Poulomi is a Technology Journalist with Analytics India Magazine. Her fascination with tech and eagerness to dive into new areas led her to the dynamic world of AI and data analytics.
