Talking Ethical AI with IBM’s Sameep Mehta

IBM's use of AI is grounded in the principle that AI augments human decision-making.

According to IBM’s Global AI Adoption Index 2021, 86% of global IT professionals strongly or somewhat agree that consumers are more likely to choose services of a company that offers transparency and an ethical framework on how its data and AI models are built, managed, and used. 

“IBM believes that all organisations developing and deploying AI must put people and their interests at the centre of the technology, to see that it is used responsibly, and to help ensure that its benefits are felt by the many, not just a few. And as a technology leader, IBM is advancing thought leadership and public policy around ethical AI,” said Sameep Mehta, IBM Distinguished Engineer and Lead, Data and AI Platforms, IBM Research India.

In an exclusive interview with Analytics India Magazine, Sameep spoke about how IBM builds trustworthy AI.

Excerpts:

AIM: How does IBM use AI?

Sameep Mehta: We build many AI products and solutions for clients across sectors including finance, manufacturing, retail, and government. Internally, IBM’s use of AI is grounded in the principle that AI augments human decision-making. For instance, IBM’s HR department leverages AI-based assessment tools to support hiring decisions, match technical skills to career opportunities, and eliminate manual tasks in benefits administration, payroll, and performance management. Here, the AI gives recommendations to people – supported by valid data points – enabling them to make more evidence-based hiring decisions or design personalised learning programs for employees.

AIM: What are the AI governance methods, techniques, and frameworks used in IBM?

Sameep Mehta: IBM’s AI governance methods are grounded in the ethical principles of Trust and Transparency to continually build and strengthen trust in technology. The principles make clear that the purpose of AI is to augment human intelligence; data and insights generated from data belong to their creator; and powerful new technologies like AI must be transparent, explainable, and free of bias so that they can be trusted.

To operationalise the responsible and ethical development of AI, IBM has established an AI Ethics Board that discusses, advises, and governs the ethical development and deployment of AI systems by IBM and its clients. IBM also runs an Advocacy Network of ethical technology champions who help promote a culture of ethical, responsible, and trustworthy technology. A company-wide educational curriculum helps educate different stakeholders on the ethical development of AI. IBM operates a multidisciplinary research program to explore the responsible development of AI systems aligned with its values and regularly participates in cross-industry, government, and scientific initiatives and events on AI and ethics.

AIM: What explains the growing conversations around AI ethics, responsibility, and fairness? Why is it important? 

Sameep Mehta: As organisations scale their use of AI, the need to do so in a responsible and governed manner is being driven by several complementary forces in action: brand reputation, anticipated government regulations, AI complexity, and social justice.

Moreover, the broad adoption of AI systems will rely heavily on multiple stakeholders and society-at-large to trust these systems. We trust technology based on our understanding of how it works and our assessment of its fairness, reliability, safety, explainability, transparency, and accountability.

Discussions on AI ethics, responsible use of AI, and its potential impacts are invaluable to drive AI adoption. One way to scale up these efforts is for organisations to participate in cross-industry, government, and scientific collaborations on AI & ethics, which involve diverse stakeholders leading to a more mature point of view around this subject. Another approach is to invest in research programs on AI & Ethics. IBM is involved in both these types of efforts.

AIM: How does IBM ensure adherence to its AI governance policies?

Sameep Mehta: We embed ethical thinking across our work through the IBM AI Ethics Board, infusing our principles and ethical thinking into our business decision-making.

It provides centralised governance and accountability as well as a two-way engagement that promotes and conducts internal education. It is also a mechanism by which IBM holds its employees accountable to our values and commitments to the ethical development and deployment of technology.

For instance, IBM Garage is a unique framework for collaborating with clients to fast-track innovation using practices like design thinking and agile development, transforming an idea into a minimum viable product through an iterative process. The co-creation phase of this process includes an exercise called “empathy mapping”, which is about understanding your users better. Another phase includes a “what-if analysis” that prompts the team to consider the potential impact of their solution on users. These phases build ethical considerations directly into the ideation process.

AIM: How do you mitigate biases in your AI algorithms?

Sameep Mehta: One of the best ways to understand and mitigate bias in AI algorithms is to leverage the latest research on the subject. Over the years, IBM Research has developed and released tools and resources as open source to facilitate greater adoption, collaboration, and an industry-wide effort to address and mitigate bias in AI. For instance, AI Fairness 360 is a toolkit of metrics to check for unwanted bias in datasets and machine learning models, together with algorithms to mitigate such bias. It includes more than 70 state-of-the-art fairness metrics and ten bias mitigation algorithms for enhancing fairness in AI, as well as educational material and demos that practitioners can readily use.
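To make the idea of a fairness metric concrete, here is a minimal sketch of one of the best-known metrics, disparate impact, computed by hand over a toy hiring dataset. This is an illustrative reimplementation of the concept, not the AI Fairness 360 API; the group labels and records are invented for the example.

```python
# Illustrative sketch (not the AI Fairness 360 API): computing disparate
# impact over a toy hiring dataset. Each record carries a protected
# attribute ("group") and a binary favourable outcome ("hired").
records = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 1}, {"group": "A", "hired": 0},
    {"group": "B", "hired": 1}, {"group": "B", "hired": 0},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]

def selection_rate(records, group):
    """Fraction of members of `group` receiving the favourable outcome."""
    members = [r for r in records if r["group"] == group]
    return sum(r["hired"] for r in members) / len(members)

def disparate_impact(records, unprivileged, privileged):
    """Ratio of selection rates; values well below 1.0 signal bias
    against the unprivileged group (0.8 is a commonly used threshold)."""
    return selection_rate(records, unprivileged) / selection_rate(records, privileged)

di = disparate_impact(records, unprivileged="B", privileged="A")
print(f"Disparate impact: {di:.2f}")  # 0.25 / 0.75 -> 0.33
```

In the toolkit itself, metrics like this are computed over wrapped dataset objects and paired with mitigation algorithms (for example, reweighing training samples) rather than computed by hand as here.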

AIM: Do you have a due diligence process to make sure the data is collected ethically?

Sameep Mehta: Data quality and preparation have been called out as among the most time-consuming steps in an AI lifecycle, because the performance of an ML model is only as good as its training data. Hence, a systematic data quality analysis before building AI/ML models is of utmost importance. Infusing trust into the data and AI lifecycle has been a key focus area for us, which has led to the development of a state-of-the-art toolkit (APIs on the IBM Developer Hub) that not only assesses data quality but also provides a mechanism to improve it before model training starts, so that the downstream AI model can be more accurate, fair, and robust. With this, the AI team has much more trust and confidence in the data.
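As a rough illustration of what a pre-training data quality check can cover, the sketch below computes three generic signals – missing values, duplicate rows, and class imbalance – in plain Python. These checks and field names are hypothetical and are not the IBM Developer Hub APIs the interview refers to.

```python
# Hypothetical pre-training data-quality report, sketched in plain Python.
# Flags three common issues: missing values, exact duplicate rows, and a
# skewed label distribution.
from collections import Counter

def data_quality_report(rows, label_key):
    n = len(rows)
    missing = sum(1 for row in rows for v in row.values() if v is None)
    seen, duplicates = set(), 0
    for row in rows:
        key = tuple(sorted(row.items()))  # dict keys are unique, so sort is safe
        duplicates += key in seen
        seen.add(key)
    labels = Counter(row[label_key] for row in rows)
    return {
        "rows": n,
        "missing_values": missing,
        "duplicate_rows": duplicates,
        "majority_class_share": max(labels.values()) / n,
    }

rows = [
    {"age": 34, "income": 52000, "approved": 1},
    {"age": 29, "income": None,  "approved": 0},
    {"age": 34, "income": 52000, "approved": 1},  # exact duplicate
    {"age": 41, "income": 87000, "approved": 1},
]
print(data_quality_report(rows, label_key="approved"))
# {'rows': 4, 'missing_values': 1, 'duplicate_rows': 1, 'majority_class_share': 0.75}
```

A production toolkit would go further – for example, suggesting imputation or deduplication to actually improve the data before training begins.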

AIM: How do you embed ethical principles in your platforms?

Sameep Mehta: IBM has integrated AI ethics and governance principles into its Cloud Pak for Data platform. This is a cloud-based unified AI platform consisting of a full stack of components for every stage of the AI lifecycle, including built-in governance, purpose-built AI model risk management, and collaboration tools. Examples of these components include Watson Knowledge Catalog, Watson OpenScale, and Watson Studio. Watson Knowledge Catalog organises data for governed use, Watson Studio provides a governance-enabled build platform, and Watson OpenScale delivers automation of governance processes and tests.

As mentioned earlier, IBM also offers its open-source trusted AI toolkits on fairness, explainability, and robustness within its platform. AI Fairness 360 helps examine, report, and mitigate bias in models throughout the AI application lifecycle. AI Explainability 360 includes metrics for explaining a model’s processes and decision-making. AI Adversarial Robustness 360 helps researchers and developers defend and verify AI models against adversarial attacks.
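To give a flavour of what explaining a model's decision can mean in practice, here is a sketch of one simple, generic explainability technique: leave-one-out feature attribution. This is not the AI Explainability 360 API; the "model" is a hand-written linear scorer and all names are invented for the example.

```python
# Illustrative sketch of leave-one-out feature attribution (a generic
# explainability technique, not the AI Explainability 360 API).
def model(features):
    """Hypothetical credit-scoring model: a fixed linear combination."""
    weights = {"income": 0.5, "debt": -0.3, "tenure": 0.2}
    return sum(weights[name] * value for name, value in features.items())

def leave_one_out_attribution(features, baseline=0.0):
    """Attribute each feature by the score change observed when that
    feature is replaced with a baseline value."""
    full_score = model(features)
    return {
        name: full_score - model(dict(features, **{name: baseline}))
        for name in features
    }

applicant = {"income": 4.0, "debt": 2.0, "tenure": 3.0}
attrs = leave_one_out_attribution(applicant)
print({name: round(v, 2) for name, v in attrs.items()})
# {'income': 2.0, 'debt': -0.6, 'tenure': 0.6}
```

Real explainability toolkits offer a range of methods beyond this (contrastive explanations, rule-based surrogates, and so on), but the underlying question is the same: how much did each input contribute to the decision?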

IBM Research has also open-sourced the AI FactSheets 360 and Uncertainty Quantification 360 toolkits. AI FactSheets 360 helps organisations record various types of metadata about their models in standardised factsheets, thus enabling AI governance.
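As a minimal illustration of what such a model factsheet might record, here is a hypothetical metadata document serialised as JSON. The field names and values are invented for the example and do not follow the AI FactSheets 360 schema.

```python
# A hypothetical, minimal model "factsheet": a structured metadata record
# of the kind a governance process might require before deployment.
# Field names are illustrative, not the AI FactSheets 360 schema.
import json

factsheet = {
    "model_name": "loan-approval-classifier",
    "version": "1.2.0",
    "intended_use": "Rank loan applications for human review",
    "training_data": {"source": "internal-applications-2021", "rows": 120000},
    "metrics": {"accuracy": 0.91, "disparate_impact": 0.88},
    "approved_by": "model-risk-review-board",
}

print(json.dumps(factsheet, indent=2))
```

The value of such a record is less in its format than in the discipline it enforces: every deployed model carries an auditable statement of what it is for, what it was trained on, and how it performed.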

AIM: How does IBM protect user data?

Sameep Mehta: We are committed to protecting the privacy of our clients’ data, which is fundamental in a data-driven society. At IBM, we believe that our clients’ data is their data and their insights are their insights. Hence, data and insights produced on IBM’s Cloud or from IBM’s AI are owned by IBM’s clients, and they are not required to relinquish their data – nor the insights derived from it – to enjoy the benefits of IBM’s solutions and services. Moreover, we employ industry-leading security practices to safeguard data, including encryption, access control methodologies, and proprietary consent management modules that allow us to restrict access to authorised users and de-identify data according to applicable permissions.

Sri Krishna
Sri Krishna is a technology enthusiast with a professional background in journalism. He believes in writing on subjects that evoke a thought process towards a better world. When not writing, he indulges his passion for automobiles and poetry.
