In collaboration with Boston Consulting Group (BCG), Microsoft has introduced guidelines for product leaders that are designed to help prompt important conversations about how to put responsible AI principles to work. This guidance is distinct from Microsoft’s internal processes and reflects perspectives from both organizations. Microsoft has also built tools to help ML practitioners identify issues, diagnose their causes, and mitigate problems before deploying applications.
“Moving from principles to practices is difficult, given the complexities, nuances and dynamics of AI systems and applications. There are no quick fixes and no silver bullet that addresses all risks with applications of AI technologies. But we can make headway by harnessing the best of research and engineering to create tools aimed at the responsible development and fielding of AI technologies,” wrote Eric Horvitz, Chief Scientific Officer at Microsoft, in a blog post.
The ten guidelines are grouped into three phases:
- Assess and prepare: Evaluate the product’s benefits, the technology, the potential risks, and the team.
- Design, build, and document: Review the impacts, unique considerations, and the documentation practice.
- Validate and support: Select the testing procedures and the support to ensure products work as intended.
Along with these, the company has released a Responsible AI dashboard that brings Error Analysis, Fairlearn, InterpretML, DiCE and EconML together in a single pane of glass to help AI developers assess the fairness, interpretability and reliability of their models. Within the dashboard, the tools can communicate with each other and show insights in one interactive canvas for an end-to-end debugging and decision-making experience.
The open-source tools that Microsoft has built include:
- Error Analysis: Analyses and diagnoses model errors
- Fairlearn: Assesses and mitigates fairness issues in AI systems
- InterpretML: Provides inspectable machine-learned models to enhance debugging of data and inferences
- DiCE: Enables counterfactual analysis for debugging individual predictions
- EconML: Helps decision-makers deliberate about the effects of actions in the world using causal inference
- HAX Toolkit: Guides teams through creating fluid and responsible human-AI collaborative experiences
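To make the fairness assessment concrete, the sketch below computes one of the simplest group-fairness metrics of the kind Fairlearn reports: the demographic parity difference, i.e. the gap in positive-prediction rates between the best- and worst-treated sensitive groups. This is a minimal plain-Python illustration of the concept; the function names and toy data are ours, not Fairlearn's API.

```python
# Sketch of a group-fairness check: demographic parity difference.
# A value of 0 means every group receives positive predictions at
# the same rate; larger values indicate a bigger disparity.

def selection_rates(y_pred, groups):
    """Positive-prediction (selection) rate per sensitive group."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return rates

def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rate across groups (0 = perfect parity)."""
    rates = selection_rates(y_pred, groups).values()
    return max(rates) - min(rates)

# Toy example: binary model predictions for applicants from groups "a" and "b".
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(selection_rates(y_pred, groups))
print(demographic_parity_difference(y_pred, groups))  # 0.75 - 0.25 = 0.5
```

A gap of 0.5 here would flag the model for investigation; Fairlearn additionally provides mitigation algorithms that retrain or post-process a model to shrink such disparities.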