
Microsoft Introduces New Resources & Tools To Help Implement AI Responsibly

Microsoft has launched new tools and guidelines to help product leaders build AI responsibly, from research to practice


In collaboration with Boston Consulting Group (BCG), Microsoft has introduced guidelines for product leaders that are designed to help prompt important conversations about how to put responsible AI principles to work. This guidance is distinct from Microsoft’s internal processes and reflects perspectives from both organizations. Microsoft has also built tools to help ML practitioners identify issues, diagnose causes and mitigate problems before deploying apps.

“Moving from principles to practices is difficult, given the complexities, nuances and dynamics of AI systems and applications. There are no quick fixes and no silver bullet that addresses all risks with applications of AI technologies. But we can make headway by harnessing the best of research and engineering to create tools aimed at the responsible development and fielding of AI technologies,” wrote Eric Horvitz, Chief Scientific Officer at Microsoft, in a blog post.

The ten guidelines are grouped into three phases:

  • Assess and prepare: Evaluate the product’s benefits, the technology, the potential risks, and the team.
  • Design, build, and document: Review the impacts, unique considerations, and the documentation practice.
  • Validate and support: Select the testing procedures and the support to ensure products work as intended.

Along with these, the company has released a Responsible AI dashboard that brings the Error Analysis, Fairlearn, InterpretML, DiCE and EconML toolkits together in a single pane of glass to help AI developers assess the fairness, interpretability and reliability of their models. Within the dashboard, the tools can communicate with each other and show insights on one interactive canvas for an end-to-end debugging and decision-making experience.
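
The dashboard ships in the open-source raiwidgets package, built on the companion responsibleai library. Below is a minimal sketch of the workflow, assuming those two packages plus scikit-learn; the synthetic dataset and column names are placeholders. A trained model is wrapped in an RAIInsights object, the desired components are opted in, and the combined view is rendered:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# Synthetic tabular data standing in for a real classification dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
df = pd.DataFrame(X, columns=[f"f{i}" for i in range(5)])
df["label"] = y
train, test = train_test_split(df, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(
    train.drop(columns=["label"]), train["label"]
)

# Bundle the model, data and task type, then opt in to the
# components the dashboard should surface.
insights = RAIInsights(model, train, test,
                       target_column="label", task_type="classification")
insights.explainer.add()       # InterpretML-style explanations
insights.error_analysis.add()  # Error Analysis cohorts and heat maps
insights.compute()

ResponsibleAIDashboard(insights)  # renders the interactive dashboard
```

Each .add() call opts a component into the dashboard, so a team can start with error analysis alone and layer in explanations, counterfactuals or causal insights later.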

The open-source tools that Microsoft has built include:

  • Error Analysis: Analyses and diagnoses model errors
  • Fairlearn: Assesses and mitigates fairness issues in AI systems (see the usage sketch after this list)
  • InterpretML: Provides inspectable machine-learned models to enhance debugging of data and inferences
  • DiCE: Enables counterfactual analysis for debugging individual predictions
  • EconML: Helps decision-makers deliberate about the effects of actions in the world using causal inference
  • HAX Toolkit: Guides teams through creating fluid and responsible human-AI collaborative experiences
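
For a flavour of how one of these libraries is used on its own, here is a minimal Fairlearn assessment sketch; the random labels, predictions and “sex” grouping column are purely illustrative stand-ins for a real model’s outputs:

```python
import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

# Placeholder predictions and a placeholder sensitive feature.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_pred = rng.integers(0, 2, size=200)
sex = rng.choice(["female", "male"], size=200)

# MetricFrame slices each metric by the sensitive feature, exposing
# per-group values and the largest gap between groups.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)
print(mf.by_group)      # metric value for each group
print(mf.difference())  # largest inter-group gap per metric
```

Mitigation follows the same pattern: Fairlearn’s reduction algorithms (for example, ExponentiatedGradient) wrap a standard estimator with a fairness constraint and retrain it.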
Meeta Ramnani
