TorchCAM - Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM)
Captum - Model interpretability and understanding for PyTorch (see the usage sketch after this list)
explainX - Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code.
imodels - Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
PySS3 - A Python package implementing a new machine learning model for text classification, with visualization tools for Explainable AI
SHAP - A game-theoretic approach to explain the output of any machine learning model (see the usage sketch after this list)
modelStudio - 📍 Interactive Studio for Explanatory Model Analysis
LRP for LSTM - Layer-wise Relevance Propagation (LRP) for LSTMs
Visual Attribution - PyTorch implementation of recent visual attribution methods for model interpretability
PyCEbox - ⬛ Python Individual Conditional Expectation Plot Toolbox
Interpretability By Parts - Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral)
CXPlain - Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
CNN Interpretability - 🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer's Disease
Athena - Automatic equation building and curve fitting. Runs on TensorFlow. Built for academia and research.
Text nn - Text classification models. Used as a submodule for other projects.
Symbolic Metamodeling - Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Alibi - Algorithms for monitoring and explaining machine learning models (see the usage sketch after this list)
Grad-CAM - [ICCV 2017] Torch code for Grad-CAM
DALEX - moDel Agnostic Language for Exploration and eXplanation
tf-explain - Interpretability methods for tf.keras models with TensorFlow 2.x
ad_examples - A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Analyzes incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Networks.
XAI - An eXplainability toolbox for machine learning
FlashTorch - Visualization toolkit for neural networks in PyTorch
XAI resources - Interesting resources related to XAI (Explainable Artificial Intelligence)
TCAV - Code for the TCAV ML interpretability project
Lucid - A collection of infrastructure and tools for research in neural network interpretability.
Interpret - Fit interpretable models. Explain blackbox machine learning. See the usage sketch after this list.
Facet - Human-explainable AI.
PyTorch Grad-CAM - Many class activation map methods implemented in PyTorch for CNNs and Vision Transformers, including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM, and XGrad-CAM (see the usage sketch after this list)
diabetes use case - Sample use case for the Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
SPINE - Code for SPINE: Sparse Interpretable Neural Embeddings. Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E. AAAI 2018
shapeshop - Towards Understanding Deep Learning Representations via Interactive Experimentation
knowledge-neurons - A library for finding knowledge neurons in pretrained transformer models.
neuron-importance-zsl - [ECCV 2018] code for "Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance"
summit - 🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
sage - For calculating global feature importance using Shapley values.
yggdrasil-decision-forests - A collection of state-of-the-art algorithms for the training, serving, and interpretation of Decision Forest models.
zennit - A high-level framework in Python using PyTorch for explaining/exploring neural networks with attribution methods like LRP.
ProtoTree - ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021
deep-explanation-penalization - Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
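
Captum: a minimal sketch of attributing a prediction with IntegratedGradients. The toy nn.Sequential classifier and tensor shapes are illustrative stand-ins, not taken from any project above.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Hypothetical stand-in classifier; any torch.nn.Module returning logits works.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

inputs = torch.randn(2, 4, requires_grad=True)  # two samples, four features

ig = IntegratedGradients(model)
# Per-feature attributions for the class-0 logit, plus a convergence check.
attributions, delta = ig.attribute(inputs, target=0, return_convergence_delta=True)
print(attributions.shape)  # torch.Size([2, 4])
```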
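SHAP: a common pattern is a TreeExplainer over a fitted tree ensemble. The RandomForest/iris setup below is only an example, and the exact shape of shap_values depends on the model type and SHAP version.

```python
import numpy as np
import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Older SHAP versions return a list of per-class arrays, newer ones a 3-D array.
print(np.shape(shap_values))
# shap.summary_plot(shap_values, X) would render a global importance plot.
```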
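Alibi: the library bundles several explainers; this sketch follows the AnchorTabular workflow (fit on training data, then explain one instance), assuming a scikit-learn classifier on iris. Treat the parameter values as placeholders.

```python
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# AnchorTabular needs a prediction function and feature names.
explainer = AnchorTabular(lambda x: clf.predict(x), feature_names=data.feature_names)
explainer.fit(data.data)  # learns feature percentiles used for perturbation

explanation = explainer.explain(data.data[0], threshold=0.95)
print(explanation.anchor)     # list of feature predicates forming the rule
print(explanation.precision, explanation.coverage)
```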
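Interpret (InterpretML): ships glassbox models such as Explainable Boosting Machines alongside blackbox explainers. A minimal sketch, using the breast-cancer dataset purely as an example.

```python
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Global explanation: per-feature shape functions and importances.
show(ebm.explain_global())
# Local explanation for the first few rows.
show(ebm.explain_local(X[:5], y[:5]))
```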
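PyTorch Grad-CAM: each CAM variant is exposed as a class. The sketch below follows the current GradCAM interface (target_layers plus ClassifierOutputTarget), which has changed across releases; a random tensor stands in for a preprocessed image.

```python
import torch
from torchvision.models import resnet50
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = resnet50(weights="IMAGENET1K_V1").eval()
target_layers = [model.layer4[-1]]          # last conv block of the backbone

input_tensor = torch.randn(1, 3, 224, 224)  # stand-in for a normalized RGB image batch

cam = GradCAM(model=model, target_layers=target_layers)
# Heat map for ImageNet class 281 ("tabby cat"); omit targets to use the top prediction.
grayscale_cam = cam(input_tensor=input_tensor, targets=[ClassifierOutputTarget(281)])
print(grayscale_cam.shape)                  # (1, 224, 224)
```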