Top Milestones On Explainable AI In 2020

Explainable artificial intelligence is an emerging approach for boosting the reliability, accountability, and trustworthiness of AI systems in critical areas. It combines machine learning with explanatory methods that reveal what a model's decision criteria are and why they were established, allowing people to better understand and control AI-powered tools.

Below, we have discussed some of the important milestones, in no particular order, in explainable AI (XAI) in 2020.

Fairlearn Toolkit by Microsoft

Fairlearn is a popular open-source toolkit that enables data scientists and developers to assess and improve the fairness of their AI systems. It has two components: an interactive visualisation dashboard and unfairness mitigation algorithms, designed mainly to help navigate trade-offs between fairness and model performance. The toolkit supports a broad spectrum of fairness metrics for evaluating the impact of an AI model on diverse groups of people, covering both classification and regression tasks.
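
One of the simplest fairness metrics of the kind Fairlearn supports is the demographic parity difference: the gap in selection rates (fraction of positive predictions) between demographic groups. A minimal sketch of that idea in plain Python follows; the data and group names are invented for illustration, and this is not Fairlearn's actual API.

```python
# Demographic parity difference: the gap between the highest and lowest
# selection rate (fraction of positive predictions) across groups.
# Sketch of the idea behind one of Fairlearn's fairness metrics,
# computed by hand on made-up data.

def selection_rate(preds):
    """Fraction of positive (1) predictions."""
    return sum(preds) / len(preds)

def demographic_parity_difference(y_pred, groups):
    """Max selection rate minus min selection rate across groups."""
    by_group = {}
    for pred, group in zip(y_pred, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [selection_rate(preds) for preds in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical binary predictions for applicants from two groups.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A is selected at rate 0.75, group B at 0.25.
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

A value of 0 would mean all groups are selected at the same rate; Fairlearn's mitigation algorithms try to shrink gaps like this while limiting the loss in model performance.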

Know more here.

ERASER by Salesforce

Evaluating Rationales And Simple English Reasoning (ERASER) is an explainable AI benchmark from Salesforce for evaluating rationalised natural language processing (NLP) models. The benchmark comprises seven diverse NLP datasets and tasks that include human annotations of explanations as supporting evidence for predictions.

All the datasets included in ERASER are classification tasks, including sentiment analysis, natural language inference, and question answering, among others, with varying numbers of class labels. The benchmark focuses on “rationales”: snippets of text extracted from the task's source document that provide sufficient evidence for predicting the correct output.
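
Among other scores, ERASER measures how well a model's extracted rationale agrees with the human-annotated one, for example via token-level F1 between the predicted and gold rationale tokens. A hedged sketch of that computation (the token indices are invented for illustration):

```python
def rationale_token_f1(pred_tokens, gold_tokens):
    """Token-level F1 between predicted and gold rationale token
    positions -- the style of agreement metric ERASER reports for
    extracted rationales."""
    pred, gold = set(pred_tokens), set(gold_tokens)
    if not pred or not gold:
        return 0.0
    overlap = len(pred & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

# Gold rationale covers tokens 3-7; the model highlighted tokens 5-9.
gold = range(3, 8)   # {3, 4, 5, 6, 7}
pred = range(5, 10)  # {5, 6, 7, 8, 9}
print(rationale_token_f1(pred, gold))  # precision 0.6, recall 0.6 -> F1 0.6
```

A rationale that exactly matches the annotation scores 1.0; one that highlights unrelated text scores 0.0, regardless of whether the final label is correct.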

Know more here.

Explainable AI For Adverse Childhood Experiences

In October, researchers from the University of Tennessee Health Science Centre developed an “explainable” AI system known as the Semantic Platform for Adverse Childhood Experiences Surveillance (SPACES). SPACES is an intelligent recommendation system that employs ML techniques to help in screening patients and allocating or discovering relevant resources. 

According to the researchers, the proposed system intends to build rapport with patients by generating personalised questions during interviews while minimising the amount of information that needs to be collected directly from the patient.

Know more here.

WhiteNoise Toolkit by Microsoft

Developed in collaboration with researchers at the Harvard Institute for Quantitative Social Science and School of Engineering, WhiteNoise is a differential privacy platform containing components for building global differentially private systems. Microsoft open-sourced the tool at its Build 2020 conference as part of an effort to drive more explainable AI systems.

The open-source project is made up of two top-level components: core and system. The core library includes the privacy mechanisms for implementing a differentially private system, while the system library provides tools and services for working with tabular and relational data.
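
The textbook mechanism behind differential privacy platforms like WhiteNoise is the Laplace mechanism: add noise calibrated to a query's sensitivity so that no individual record can be inferred from the released answer. A minimal sketch for a counting query follows; this illustrates the general technique, not WhiteNoise's actual API.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Return true_value plus Laplace(sensitivity / epsilon) noise.

    For a counting query, sensitivity is 1: adding or removing any one
    person's record changes the count by at most 1. Smaller epsilon
    means more noise and stronger privacy.
    """
    scale = sensitivity / epsilon
    # Sample Laplace noise by inverse transform from a uniform draw.
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_value + noise

true_count = 42  # e.g. number of records matching some query
noisy_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
print(noisy_count)  # the true count plus Laplace noise with scale 2
```

Each released answer is perturbed independently, so repeated queries consume more of the privacy budget; systems like WhiteNoise track that budget across queries.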

Know more here.

COVID-Net

DarwinAI, an explainable AI company, developed COVID-Net and COVIDNet-S on its explainable AI platform. Released in March this year, COVID-Net is a deep convolutional neural network design tailored for detecting COVID-19 cases from chest X-ray (CXR) images. Alongside the model, the researchers also open-sourced COVIDx, an open-access benchmark dataset comprising 13,975 CXR images across 13,870 patient cases.

In September, DarwinAI announced COVIDNet-S, a suite of deep learning models built on its explainable AI platform to assess COVID-19 disease severity. COVIDNet-S quantitatively scores the geographic and opacity extent of disease in a patient's lungs by analysing key visual indicators in their chest X-rays. The system was developed using over 10,000 chest X-rays, hundreds of them from COVID-19-positive patients with comprehensive lung disease severity assessments.

Ambika Choudhury
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.
