
Guide To NICE: An Algorithm To Find Nearest Instance Counterfactual Explanations

NICE

NICE is an algorithm that creates Nearest Instance Counterfactual Explanations for heterogeneous tabular data (data containing both numerical and categorical variables). It was introduced by Dieter Brughmans and David Martens in April 2021 (research paper).

Before going into the algorithm’s details, let us look at the meaning of ‘counterfactual explanations’.

What are counterfactual explanations?

The term ‘counterfactual explanation’ refers to a statement of the form “If event A had not occurred, then event B would not have occurred.” For instance, “If clouds had not existed, there would never be rainfall.” This statement requires us to imagine a hypothetical atmosphere without clouds, which is not possible in reality; hence the name “counterfactual”.

In simple terms, a counterfactual tells us which action to perform in order to get a required result (such as removing the clouds to inhibit rainfall). Viewed in terms of machine learning, the ‘action’ means modifications to the features used for prediction, while the ‘required result’ means the expected response of the model.

How does the NICE algorithm work?

Consider an example of credit scoring to understand the working of the NICE algorithm. 

[Figure: a credit-scoring example of a nearest instance counterfactual explanation. Image source: research paper]

The above plot shows the income and age criteria used to decide whether an individual can get a loan. The person with an income of $32,000 and an age of 39 years is unable to get a loan and hence lies on the left side of the decision boundary in the plot, while the other three people are eligible for a loan and their records lie on the other side of the boundary. The NICE algorithm tries to find the minimum possible changes to the features of an ineligible candidate that would make them eligible for a loan. It can be seen from the tabular data that, keeping the age the same, if the person’s income is increased by $8,000 (i.e. changed from $32,000 to $40,000), they become eligible for a loan. This change of $8,000 in the income feature is the ‘counterfactual explanation’ that the NICE algorithm finds.
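
To make this concrete, here is a minimal sketch with a hypothetical decision rule (loan approved once income reaches $40,000; this is not the actual model from the paper), showing how the $8,000 change flips the prediction:

 #Toy sketch: hypothetical decision rule, not the model from the paper
 def loan_model(income, age):
     #Returns 1 (loan approved) or 0 (loan rejected)
     return 1 if income >= 40000 else 0

 print(loan_model(income=32000, age=39))   #0 -> loan rejected
 #Counterfactual: keep age fixed, raise income by $8,000
 print(loan_model(income=40000, age=39))   #1 -> loan approved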

The NICE algorithm optimizes one of the following three properties of a counterfactual explanation:

  1. Sparsity – the number of features that must be modified to achieve the desired outcome.
  2. Proximity – the distance between the actual input and the counterfactual instance (a minimal sketch measuring sparsity and proximity follows this list).
  3. Plausibility – the closeness of a counterfactual instance to the whole dataset. For the tabular data used above, if an explanation states that, keeping the income intact, the ineligible person’s age can be varied from 39 to 140, such an explanation is implausible as it lies too far from the data manifold.
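
The snippet below illustrates one simple way to measure the first two properties on the credit-scoring example; it is not the NICE library’s internals, and NICE’s exact distance functions are defined in the research paper.

 import numpy as np

 #Illustration only: sparsity and proximity for the credit example
 original = np.array([32000, 39])        #[income, age]
 counterfactual = np.array([40000, 39])

 #Sparsity: number of features that were changed
 sparsity = np.sum(original != counterfactual)           #1 (only income)
 #Proximity: here measured as an L1 (Manhattan) distance
 proximity = np.sum(np.abs(original - counterfactual))   #8000
 print(sparsity, proximity)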

Practical implementation

Here’s a demonstration of the NICE algorithm on the Adult dataset. The following code was tested on Google Colab with Python 3.7.10, pmlb 1.0.1.post3 and NICEx 0.1.0. The step-wise implementation is as follows:

  1. Install the NICEx library from PyPI.

!pip install NICEx

  2. Install pmlb – a Python wrapper for the Penn Machine Learning Benchmark (PMLB) data repository – from PyPI. PMLB is a benchmark suite for comparing the performance of various ML algorithms on a variety of datasets.

!pip install pmlb

  3. Import the required libraries and modules.
 import pandas as pd
 #To fetch a dataset from the PMLB
 from pmlb import fetch_data
 from sklearn.model_selection import train_test_split
 from sklearn.ensemble import RandomForestClassifier
 from sklearn.compose import ColumnTransformer
 from sklearn.preprocessing import StandardScaler,OneHotEncoder
 from sklearn.pipeline import Pipeline
 from nice.explainers import NICE 
  4. Fetch the adult dataset.

adult = fetch_data('adult')

Display the dataset

adult

Output:

[Screenshot: a preview of the adult DataFrame]
  5. Select the feature columns and the target column.
 #Drop the unnecessary columns for feature set
 X = adult.drop(columns=['education-num','fnlwgt','target','native-country'])
 #Assign the 'target' column (all rows) as labels
 y = adult.loc[:,'target']
 #List of columns remaining in X will form the feature set
 feature_names = list(X.columns) 
  6. Extract the values of the feature and label columns.
 X = X.values  
 X   #Display updated X 

Output: [Screenshot: the NumPy array of feature values]

 y=y.values
 y   #Display updated y 

Output:   array([1, 1, 1, ..., 1, 1, 0])

Check the shape of X and y.

X.shape

Output: (48842, 11)

y.shape

Output: (48842,)

  7. Split the data into a train set and a test set with a train:test ratio of 70:30.

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

  8. Display the feature column names.

print(feature_names)

Output:

['age', 'workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'capital-gain', 'capital-loss', 'hours-per-week']

Create lists of the column indices (starting from 0) for the categorical and numerical columns.

 categorical_features = [1,2,3,4,5,6,7]
 numerical_features = [0,8,9,10] 
  9. Using sklearn’s Pipeline, we can apply several transforms sequentially. After creating the pipeline, the next step is to fit it to the training data.
 #Create pipeline
 clf = Pipeline([
 #list of (name,transform) tuples
     ('PP',ColumnTransformer([
             ('num',StandardScaler(),numerical_features),
             ('cat',OneHotEncoder(handle_unknown = 'ignore'),categorical_features)])),
  #estimator to be used specified as (name, estimator function) tuple
     ('RF',RandomForestClassifier())]) 

ColumnTransformer() here applies standardization to the numerical features and creates a one-hot representation of the categorical features. The transforms are applied in the order in which they are specified, so the estimator is kept at the end of the pipeline.

Fit the pipeline to the training data.

clf.fit(X_train,y_train)

Output:

[Screenshot: the fitted Pipeline object]
  10. Create a lambda function that returns the model’s predicted class probabilities. For each input sample, its output is an array of probabilities, one per class.

prediction = lambda x: clf.predict_proba(x)

  11. Initialize the NICE explainer.
 NICE_model = NICE(optimization='sparsity',    #optimization method
                   justified_cf=True) 

Setting the ‘justified_cf’ parameter to True limits the search for nearest neighbours to the correctly classified training samples.
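
Conceptually, this restriction amounts to the filtering sketched below; the filtering happens inside NICE itself, and the snippet only illustrates it using the fitted pipeline.

 #Illustration (not the NICEx API): keep only training samples that
 #the classifier predicts correctly
 train_predictions = clf.predict(X_train)
 justified_candidates = X_train[train_predictions == y_train]
 print(justified_candidates.shape)   #candidates NICE may search over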

  12. Fit the explainer to the training data.
 NICE_model.fit(X_train = X_train,   #training data
                predict_fn=prediction,    #prediction function
                y_train = y_train,      #train set labels
                #categorical features 
                cat_feat=categorical_features,
                #numerical features   
                num_feat=numerical_features)   
  13. Use the explain() method to create a counterfactual instance for a test sample.

Create and then display the counterfactual explanation of the 2nd sample from the test set.

 CF = NICE_model.explain(X_test[1:2,:])
 CF 

Output:

array([[45.,  4.,  9.,  2.,  7.,  0.,  4.,  1.,  0.,  0., 40.]])

Interpretation of the output:

The values of the 2nd sample of X_test are:

array([[41.,  4., 15.,  2., 12.,  0.,  4.,  1.,  0.,  0., 35.]])

The predicted label for this sample is 1. The NICE algorithm examines the instance’s values and creates a similar set of attribute values that results in the opposite output class, i.e., 0. As explained in the initial part of the article on the meaning of ‘counterfactuals’ in ML, the algorithm changes the feature values just enough for the model to output class 0 as the target label. The output thus justifies the name “counterfactual” – it shows the nearest attribute values that would have produced the other label.
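
To read the result quickly, we can list only the attributes that differ between the original sample and its counterfactual (for the arrays shown above: age, education, occupation and hours-per-week; the exact values depend on the random train/test split):

 #Print only the features NICE changed
 for name, before, after in zip(feature_names, X_test[1], CF[0]):
     if before != after:
         print(f'{name}: {before} -> {after}')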

References

Dieter Brughmans and David Martens, “NICE: An Algorithm for Nearest Instance Counterfactual Explanations”, 2021.