
An introduction to explainable AI with Shapley values — SHAP latest ...
We will take a practical, hands-on approach, using the shap Python package to explain progressively more complex models. This is a living document and serves as an introduction to the shap Python …
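The underlying idea can be shown without the library at all. Below is a from-scratch sketch of exact Shapley values for a toy model (the function names and toy values are illustrative, not part of the shap API): missing features are filled in from a baseline, and each feature's value is its marginal contribution averaged over all feature orderings.

```python
import itertools

def shapley_values(f, x, baseline):
    """Exact Shapley values by averaging marginal contributions over
    all orderings of the features (O(n!) - toy sizes only)."""
    n = len(x)
    def v(coalition):
        # Value of a coalition: evaluate f with coalition features set
        # to their values in x and all other features to the baseline.
        return f([x[i] if i in coalition else baseline[i] for i in range(n)])
    phi = [0.0] * n
    perms = list(itertools.permutations(range(n)))
    for perm in perms:
        seen = set()
        for i in perm:
            phi[i] += v(seen | {i}) - v(seen)
            seen.add(i)
    return [p / len(perms) for p in phi]

# Toy model: feature 2 is ignored, so its Shapley value must be 0,
# and the values must sum to f(x) - f(baseline) (the efficiency property).
f = lambda z: 2 * z[0] + 3 * z[1]
print(shapley_values(f, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0]))
# -> [2.0, 3.0, 0.0]
```

For a linear model each feature's Shapley value is just its term in the sum; the interesting cases, covered later, are models with interactions.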
shap.Explainer — SHAP latest documentation
This is the primary explainer interface for the SHAP library. It takes any combination of a model and masker and returns a callable subclass object that implements the particular estimation algorithm …
Image examples — SHAP latest documentation
These examples explain machine learning models applied to image data. They are all generated from Jupyter notebooks available on GitHub. Image classification: examples using …
shap.TreeExplainer — SHAP latest documentation
Uses Tree SHAP algorithms to explain the output of ensemble tree models. Tree SHAP is a fast and exact method to estimate SHAP values for tree models and ensembles of trees, under several …
shap.DeepExplainer — SHAP latest documentation
This is an enhanced version of the DeepLIFT algorithm (Deep SHAP) where, similar to Kernel SHAP, we approximate the conditional expectations of SHAP values using a selection of background samples.
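The background-sample idea can be illustrated directly. This hand-rolled two-feature sketch (an illustration of the expectation trick, not the DeepLIFT-based algorithm DeepExplainer implements) fills in a "missing" feature by averaging the model over a set of background samples:

```python
def shapley_2feat(f, x, background):
    """Shapley values for a 2-feature model, with absent features
    averaged over background samples rather than a single baseline."""
    def v(keep):
        # E[f] with features in `keep` fixed to x, others drawn
        # from the background samples.
        vals = [f([x[i] if i in keep else b[i] for i in range(2)])
                for b in background]
        return sum(vals) / len(vals)
    base, full = v(set()), v({0, 1})
    # Average each feature's marginal contribution over both orderings.
    phi0 = 0.5 * ((v({0}) - base) + (full - v({1})))
    phi1 = 0.5 * ((v({1}) - base) + (full - v({0})))
    return base, [phi0, phi1]

# f has an interaction, so the choice of background matters.
f = lambda z: z[0] * z[1]
base, phi = shapley_2feat(f, [2.0, 3.0], [[0, 0], [1, 0], [0, 1], [1, 1]])
print(base, phi)
# -> 0.25 [2.625, 3.125]
```

The base value is the model's average output over the background, and the two attributions sum to f(x) minus that base.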
Basic SHAP Interaction Value Example in XGBoost
This notebook shows how the SHAP interaction values for a very simple function are computed. We start with a simple linear function, and then add an interaction term to see how it changes the SHAP …
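For the two-feature case the interaction values have a short closed form. This hand-rolled sketch (the game-theoretic definition, not the fast Tree SHAP computation the notebook uses) splits the pure interaction effect evenly between the pair, so each row of the matrix sums back to that feature's ordinary SHAP value:

```python
def interaction_values_2feat(f, x, baseline):
    """SHAP interaction values for a 2-feature model, from the
    definition (illustration only)."""
    b0, b1 = baseline
    v00 = f([b0, b1]);    v10 = f([x[0], b1])
    v01 = f([b0, x[1]]);  v11 = f([x[0], x[1]])
    # Ordinary Shapley values: average over the two feature orderings.
    phi0 = 0.5 * ((v10 - v00) + (v11 - v01))
    phi1 = 0.5 * ((v01 - v00) + (v11 - v10))
    # Off-diagonal entry: half the pure interaction effect.
    inter = 0.5 * (v11 - v10 - v01 + v00)
    # Diagonal (main) effects are what remains after removing the
    # interaction, so row i sums to phi_i.
    return [[phi0 - inter, inter],
            [inter, phi1 - inter]]

# Linear terms plus an interaction term, as in the notebook's setup.
f = lambda z: z[0] + z[1] + z[0] * z[1]
print(interaction_values_2feat(f, [1.0, 1.0], [0.0, 0.0]))
# -> [[1.0, 0.5], [0.5, 1.0]]
```

With the interaction term removed the off-diagonal entries drop to zero, which is exactly the change the notebook walks through.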
Be careful when interpreting predictive models in search of causal ...
SHAP and other interpretability tools can be useful for causal inference, and SHAP is integrated into many causal inference packages, but those use cases are explicitly causal in nature.
violin summary plot — SHAP latest documentation
The violin summary plot offers a compact representation of the distribution and variability of SHAP values for each feature. Individual violin plots are stacked by importance of the particular feature on …
waterfall plot — SHAP latest documentation
This notebook is designed to demonstrate (and so document) how to use the shap.plots.waterfall function. It uses an XGBoost model trained on the classic UCI adult income dataset (which is …
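What the plot encodes is a simple running sum: starting from the base value E[f(X)], each feature's SHAP value moves the total toward the model output f(x). A text-only sketch of that bookkeeping (the helper name, feature names, and numbers are made up for illustration; the real plot is drawn by shap.plots.waterfall):

```python
def waterfall_steps(base_value, shap_values, feature_names):
    """The additive bookkeeping a waterfall plot visualizes: start at
    the base value and apply one SHAP value at a time."""
    running = base_value
    rows = [("E[f(X)]", None, running)]
    # Order features by absolute SHAP value, largest first, as the
    # plot does from top to bottom.
    order = sorted(range(len(shap_values)),
                   key=lambda i: abs(shap_values[i]), reverse=True)
    for i in order:
        running += shap_values[i]
        rows.append((feature_names[i], shap_values[i], running))
    return rows  # the final running total equals the model output f(x)

for name, contrib, total in waterfall_steps(
        0.5, [0.3, -0.1, 0.05], ["Age", "Education", "Hours"]):
    print(name, contrib, round(total, 2))
```

Each printed row corresponds to one bar of the plot: the feature, its signed contribution, and the cumulative value after applying it.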
shap.KernelExplainer — SHAP latest documentation
Kernel SHAP is a method that uses a special weighted linear regression to compute the importance of each feature. The computed importance values are Shapley values from game theory and also …
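When every coalition is enumerated, that weighted regression recovers the exact Shapley values. The numpy sketch below is an illustration of the idea under that brute-force assumption, not the library's sampling implementation: it applies the Shapley kernel weight to each proper coalition and eliminates the intercept and efficiency constraints by substitution.

```python
import itertools
import math
import numpy as np

def kernel_shap(f, x, baseline):
    """Kernel SHAP as a weighted linear regression over all 2^M - 2
    proper coalitions (exact, so exponential in M - toy sizes only)."""
    M = len(x)
    def v(S):
        return f([x[i] if i in S else baseline[i] for i in range(M)])
    base = v(set())
    delta = v(set(range(M))) - base   # f(x) - E[f], fixed by efficiency
    rows, ys, ws = [], [], []
    for r in range(1, M):
        # Shapley kernel weight for coalitions of size r.
        w = (M - 1) / (math.comb(M, r) * r * (M - r))
        for S in itertools.combinations(range(M), r):
            z = [1.0 if i in S else 0.0 for i in range(M)]
            # Substitute out the intercept (= base) and the last
            # coefficient (phi_{M-1} = delta - sum of the others).
            rows.append([z[i] - z[M - 1] for i in range(M - 1)])
            ys.append(v(set(S)) - base - z[M - 1] * delta)
            ws.append(w)
    X, y, W = np.array(rows), np.array(ys), np.diag(ws)
    phi = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return base, np.append(phi, delta - phi.sum())

f = lambda z: z[0] * z[1] + z[2]       # interaction plus a linear term
base, phi = kernel_shap(f, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
print(base, phi.round(3))
# -> 0.0 [0.5 0.5 1. ]
```

The interaction term's credit is split evenly between the two features involved, matching the exact permutation-based computation; the library's KernelExplainer gets the same answer approximately by sampling coalitions instead of enumerating them.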