
An introduction to explainable AI with Shapley values — SHAP latest documentation
An introduction to explaining machine learning models with Shapley values, a widely used approach from cooperative game theory.
API Reference — SHAP latest documentation
This page contains the API reference for public objects and functions in SHAP. Example notebooks are also available that demonstrate how to use the API of each object and function.
shap.Explainer — SHAP latest documentation
Uses Shapley values to explain any machine learning model or Python function. This is the primary explainer interface for the SHAP library. It takes any combination of a model and masker and returns an explainer suited to that combination.
decision plot — SHAP latest documentation
A notebook on SHAP decision plots: loading the dataset and training the model, calculating SHAP values, basic decision plot features, and when a decision plot is helpful, for example to show a large number of feature effects.
Basic SHAP Interaction Value Example in XGBoost
This notebook shows how the SHAP interaction values for a very simple function are computed. It starts with a simple linear function and then adds an interaction term.
shap.TreeExplainer — SHAP latest documentation
Tree SHAP is a fast and exact method to estimate SHAP values for tree models and ensembles of trees, under several different possible assumptions about feature dependence.
Release notes — SHAP latest documentation
Release notes for SHAP. The latest release, v0.51.0, was published on 2026-03-04 (GitHub, PyPI); to see the changes due in the next release, see v0.51.0…master.
Tabular examples — SHAP latest documentation
These examples explain machine learning models applied to tabular data. They are all generated from Jupyter notebooks available on GitHub, and include examples for tree-based models.
Be careful when interpreting predictive models in search of causal insights
Be careful when interpreting predictive models in search of causal insights A joint article about causality and interpretable machine learning with Eleanor Dillon, Jacob LaRiviere, Scott Lundberg, Jonathan …
Text examples — SHAP latest documentation
These examples explain machine learning models applied to text data. They are all generated from Jupyter notebooks available on GitHub, and include sentiment analysis examples.