At its Build developer conference, Microsoft put a strong emphasis on machine learning, announcing a set of new tools and features for building more responsible and fairer AI systems, both in the Azure cloud and in Microsoft's open-source toolkits.
The company said the new tools would support differential privacy and help ensure that models work well across different groups of people. They would also enable businesses to make the best use of their data while still meeting strict regulatory requirements.
During the announcement, Microsoft noted that as developers are increasingly tasked with building artificial intelligence models, they regularly end up asking how to explain a system's decisions and how to comply with non-discrimination and privacy regulations. For that, they need tools that help them better interpret their models' results.
Among these tools are InterpretML, launched a while ago, and the Fairlearn toolkit, which developers can use to assess the fairness of their machine learning models. Fairlearn is currently available as an open-source tool and will be built into Azure Machine Learning next month.
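To make the idea of a fairness assessment concrete, here is a minimal, hypothetical sketch of the kind of per-group check a toolkit like Fairlearn automates: compute a metric (here, accuracy) separately for each group defined by a sensitive feature, then look at the gap between groups. All data and names below are invented for illustration; this is not the Fairlearn API.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, sensitive):
    """Accuracy computed separately for each sensitive-feature group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, sensitive):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

# Invented labels, predictions, and group memberships for illustration.
y_true    = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred    = [1, 0, 0, 1, 0, 1, 1, 1]
sensitive = ["A", "A", "A", "A", "B", "B", "B", "B"]

per_group = accuracy_by_group(y_true, y_pred, sensitive)
gap = max(per_group.values()) - min(per_group.values())
print(per_group)  # → {'A': 0.75, 'B': 0.5}
print(gap)        # → 0.25 (a nonzero gap flags a potential fairness issue)
```

A model that is accurate overall can still perform noticeably worse for one group, which is exactly the kind of disparity this style of assessment surfaces.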
Microsoft also addressed differential privacy, which allows developers to derive insights from private data while still protecting individuals' information. The company announced WhiteNoise, a new open-source toolkit developed in partnership with Harvard's Institute for Quantitative Social Science, available both on GitHub and on Azure Machine Learning.
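The core idea behind differential-privacy toolkits such as WhiteNoise can be illustrated with the Laplace mechanism: perturb a query's result with calibrated random noise so that no single individual's record can be inferred from the output. The dataset and function names below are invented; this sketches the mechanism only and is not the WhiteNoise API.

```python
import math
import random

def private_count(records, predicate, epsilon):
    """Epsilon-DP count query. A count has sensitivity 1 (adding or
    removing one record changes it by at most 1), so Laplace noise with
    scale 1/epsilon gives epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    u = random.random() - 0.5  # Uniform(-0.5, 0.5)
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Invented data: ages of ten individuals.
ages = [23, 35, 41, 29, 52, 38, 27, 44, 31, 60]
random.seed(0)  # fixed seed so the example is reproducible
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5)
print(noisy)  # close to the true count of 4, but randomly perturbed
```

Smaller values of epsilon mean more noise and stronger privacy; a real toolkit additionally tracks the total privacy budget spent across many such queries.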