According to Accenture’s 2022 Tech Vision research, only 35% of global consumers trust how organisations implement AI, and 77% think organisations must be held accountable for misuse of AI. “Responsible AI practice is starting to go mainstream. In fact, Big Tech has large in-house teams and divisions under their Responsible AI practice,” said Nikhil Kurhe, co-founder and CEO of Finarkein Analytics.
Responsible AI toolkits can help make AI applications and systems fairer, more robust, and more transparent. Below is a list of toolkits and resources for implementing Responsible AI.
TensorFlow Federated
TensorFlow Federated (TFF) is an open-source framework for decentralised machine learning. TFF was created to enable open research and experimentation with Federated Learning (FL), a machine learning approach in which a shared global model is trained across many participating clients who keep their training data locally. TFF allows developers to experiment with novel algorithms and simulate the included federated learning algorithms on their models and data. TFF’s building blocks can also be used to implement non-learning computations like federated analytics.
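The core idea TFF builds on can be sketched in a few lines of plain Python. The example below is not TFF's API; it is a minimal, dependency-free illustration of federated averaging (FedAvg), where clients train on local data and only model weights are shared with the server:

```python
# Minimal sketch of federated averaging (FedAvg), the idea behind
# frameworks like TFF. One scalar weight, model y = w * x, for illustration.

def local_update(weight, data, lr=0.1):
    """One gradient-descent step on a client's private (x, y) pairs."""
    grad = sum(2 * x * (weight * x - y) for x, y in data) / len(data)
    return weight - lr * grad

def federated_round(global_weight, client_datasets):
    """Each client trains locally; the server averages the resulting weights."""
    client_weights = [local_update(global_weight, d) for d in client_datasets]
    return sum(client_weights) / len(client_weights)

# Three clients, each holding private data drawn from y = 2x.
# The raw data never leaves a client; only weights are averaged.
clients = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))  # converges toward 2.0
```

Real federated learning adds client sampling, secure aggregation, and communication efficiency on top of this loop, which is what TFF's building blocks provide.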
TensorFlow Model Remediation
The TensorFlow Model Remediation library offers solutions for ML practitioners who want to reduce or eliminate user harm resulting from underlying performance biases when creating and training models.
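The library's flagship technique, MinDiff, adds a penalty during training that shrinks the gap between a model's score distributions on two groups of examples. A dependency-free sketch of that idea (the real MinDiff loss is kernel-based; a squared mean difference stands in for it here):

```python
# Sketch of a MinDiff-style remediation penalty: penalise the gap between
# a model's average score on two groups. (Real MinDiff uses an MMD kernel
# loss; the squared mean difference below is a simplified stand-in.)

def min_diff_penalty(scores_group_a, scores_group_b, weight=1.0):
    mean_a = sum(scores_group_a) / len(scores_group_a)
    mean_b = sum(scores_group_b) / len(scores_group_b)
    return weight * (mean_a - mean_b) ** 2

def total_loss(task_loss, scores_a, scores_b):
    """Overall training objective: task loss plus the remediation term."""
    return task_loss + min_diff_penalty(scores_a, scores_b)

penalty = min_diff_penalty([0.9, 0.8], [0.4, 0.5])  # large gap -> large penalty
```

Minimising the combined loss pushes the model to score both groups similarly while still fitting the main task.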
TensorFlow Privacy
TensorFlow Privacy (TF Privacy) is an open-source library created by Google Research. The library includes implementations of commonly used TensorFlow optimisers for training ML models with differential privacy (DP). The goal is to enable ML practitioners to train privacy-preserving models using standard TensorFlow APIs by changing only a few lines of code. In addition, the differentially private optimisers can be combined with high-level APIs that use the Optimizer class, particularly Keras. The API documentation contains information on all of the optimisers and models.
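What a DP optimiser changes, compared with plain SGD, boils down to two ingredients: clipping each example's gradient and adding calibrated noise. The sketch below illustrates that recipe on scalar gradients in plain Python; it is not TF Privacy's API, and the function name is hypothetical:

```python
import random

# Sketch of the two ingredients a DP-SGD-style optimiser adds to training:
# 1) clip each per-example gradient to a maximum norm, then
# 2) add Gaussian noise scaled to that clipping norm.
def dp_average_gradient(per_example_grads, clip_norm=1.0,
                        noise_multiplier=1.1, rng=random.Random(0)):
    clipped = []
    for g in per_example_grads:
        norm = abs(g)  # scalar gradients for illustration
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append(g * scale)
    noise = rng.gauss(0.0, noise_multiplier * clip_norm)
    return (sum(clipped) + noise) / len(clipped)

# Clipping bounds any single example's influence; the noise then hides it.
noisy_avg = dp_average_gradient([3.0, -0.5, 2.0])
```

Because no single example can move the average by more than `clip_norm / n` before noise, the resulting updates carry a formal privacy guarantee, which the library's accountants track across training.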
AI Fairness 360
IBM’s AI Fairness 360 toolkit is an extensible open-source library that includes techniques developed by the research community to detect and mitigate bias in machine learning models throughout the AI application lifecycle.
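One of the simplest metrics in AIF360's catalogue is statistical parity difference: the gap in favourable-outcome rates between the unprivileged and privileged groups. A plain-Python sketch of the computation (the function name mirrors the metric, not AIF360's API):

```python
# Sketch of statistical parity difference, one bias metric AIF360 implements:
# (favourable-outcome rate of unprivileged group) minus (rate of privileged group).
def statistical_parity_difference(outcomes, groups, privileged):
    def rate(is_privileged):
        selected = [y for y, g in zip(outcomes, groups)
                    if (g == privileged) == is_privileged]
        return sum(selected) / len(selected)
    return rate(False) - rate(True)  # 0.0 means parity; negative favours privileged

outcomes = [1, 0, 1, 1, 0, 0, 1, 0]           # 1 = favourable decision
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
spd = statistical_parity_difference(outcomes, groups, privileged="a")  # -0.5
```

Here group "a" receives favourable decisions 75% of the time versus 25% for group "b", so the metric flags a 50-point disparity. AIF360 pairs such detectors with mitigation algorithms applied before, during, or after training.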
Responsible AI Toolbox
Microsoft’s Responsible AI Toolbox is a collection of model and data exploration and assessment user interfaces that enable a better understanding of AI systems. The toolbox can be used to assess, develop, and deploy AI systems in a safe, trustworthy, and ethical manner.
Model Card Toolkit
The Model Card Toolkit (MCT) streamlines and automates the creation of Model Cards: machine learning documents that provide context and transparency into a model’s development and performance.
Model cards help with:
- Facilitating information exchange between model builders and product developers.
- Educating users about ML models to make more informed decisions about how to use them (or how not to use them).
- Providing model data for effective public oversight and accountability.
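To make the purposes above concrete, here is the kind of structured metadata a model card captures, written as a plain dictionary. The field names loosely follow MCT's schema but are illustrative, and the model details are hypothetical:

```python
import json

# Illustrative model-card content as plain data. Field names loosely follow
# MCT's schema; the model, owners, and metrics are hypothetical examples.
model_card = {
    "model_details": {
        "name": "toxicity-classifier",         # hypothetical model
        "version": "0.1",
        "owners": ["ml-team@example.com"],
    },
    "considerations": {
        "intended_users": ["content moderators"],
        "limitations": ["trained on English-language forum data only"],
    },
    "quantitative_analysis": {
        "metrics": [{"name": "accuracy", "value": 0.91, "slice": "overall"}],
    },
}
print(json.dumps(model_card, indent=2))
```

MCT's value is that it populates much of this automatically from training artifacts and renders it as a shareable document, rather than leaving teams to maintain such records by hand.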
TextAttack
TextAttack is a Python framework for adversarial attacks, adversarial training, and data augmentation in natural language processing (NLP). TextAttack makes testing the robustness of NLP models simple, quick, and painless.
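The augmentation side of TextAttack can be pictured with a toy word-swap transform: replace words with synonyms to generate perturbed variants of an input. The sketch below is plain Python with a hypothetical two-entry lexicon, not TextAttack's API; the framework's real attacks add constraints, goal functions, and search methods driven by a target model:

```python
import random

# Toy sketch of word-swap perturbation, one transformation style TextAttack
# automates. SYNONYMS is a hypothetical two-entry lexicon for illustration.
SYNONYMS = {"quick": ["fast", "speedy"], "film": ["movie"]}

def augment(sentence, rng=random.Random(42)):
    """Return a variant of `sentence` with known words swapped for synonyms."""
    words = sentence.split()
    out = [rng.choice(SYNONYMS[w]) if w in SYNONYMS else w for w in words]
    return " ".join(out)

print(augment("a quick review of the film"))
```

An adversarial attack uses the same kind of transformation but searches for the specific swaps that flip a model's prediction while keeping the sentence's meaning, which is the robustness test TextAttack makes painless.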
Fawkes
Fawkes is an algorithm and software tool that lets individuals limit the ability of unknown third parties to track them by building facial recognition models from their publicly available photos. It works by making subtle, pixel-level changes (“cloaks”) to photos before they are shared, which degrade models trained on the cloaked images while leaving them visually unchanged to humans.
Fairlearn
Fairlearn is a Python package that allows AI system developers to assess the fairness of their systems and mitigate any observed unfairness issues. Fairlearn includes mitigation algorithms as well as metrics for model evaluation.
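One mitigation idea Fairlearn offers in principled form (its ThresholdOptimizer) is post-processing: choosing group-specific decision thresholds so that outcome rates line up. A dependency-free sketch of that idea, with toy scores and hand-picked thresholds rather than Fairlearn's API:

```python
# Sketch of threshold-based post-processing mitigation (the idea behind
# Fairlearn's ThresholdOptimizer): pick per-group decision thresholds so
# that selection rates match across groups.
def selection_rate(scores, threshold):
    """Fraction of examples whose score clears the decision threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

scores_a = [0.9, 0.7, 0.6, 0.2]   # toy model scores for group A
scores_b = [0.5, 0.4, 0.3, 0.1]   # toy model scores for group B

# A single threshold of 0.55 would select 3/4 of group A but 0/4 of group B.
# Group-specific thresholds equalise the selection rates instead.
rate_a = selection_rate(scores_a, 0.55)   # 0.75
rate_b = selection_rate(scores_b, 0.25)   # 0.75
```

Fairlearn searches for such thresholds automatically, subject to a chosen fairness constraint (e.g. demographic parity or equalized odds), and its metrics module reports the per-group disparities before and after.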
XAI
XAI is an ML library that enables ML engineers and relevant domain experts to analyse an end-to-end solution and identify discrepancies that may result in sub-optimal performance. The XAI library is organised around three steps of explainable machine learning:
- Data analysis
- Model evaluation
- Production monitoring
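The first of these steps, data analysis, often starts with checks as simple as class or group imbalance in the training data. A plain-Python sketch of such a check (the function and threshold are illustrative, not XAI's API):

```python
from collections import Counter

# Sketch of the kind of check the data-analysis step covers: flag a column
# whose value distribution is heavily imbalanced. Threshold is illustrative.
def imbalance_ratio(values):
    """Ratio of the most common value's count to the least common's."""
    counts = Counter(values)
    return max(counts.values()) / min(counts.values())

genders = ["m", "m", "m", "m", "m", "m", "f", "f"]   # toy training column
ratio = imbalance_ratio(genders)                      # 6 "m" to 2 "f" -> 3.0
flagged = ratio > 2.0                                 # imbalance worth reviewing
```

Catching such skews before training, then re-checking them at the model-evaluation and production-monitoring steps, is the workflow the three-step structure above is meant to support.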