  1. API Reference — SHAP latest documentation

    This page contains the API reference for public objects and functions in SHAP. There are also example notebooks available that demonstrate how to use the API of each object/function.

  2. An introduction to explainable AI with Shapley values — SHAP latest ...

    We will take a practical hands-on approach, using the shap Python package to explain progressively more complex models. This is a living document, and serves as an introduction to the shap Python …

  3. decision plot — SHAP latest documentation

    SHAP decision plots show how complex models arrive at their predictions (i.e., how models make decisions). This notebook illustrates decision plot features and use cases with …

  4. shap.Explainer — SHAP latest documentation

    This is the primary explainer interface for the SHAP library. It takes any combination of a model and masker and returns a callable subclass object that implements the particular estimation algorithm …

  5. Image examples — SHAP latest documentation

    These examples explain machine learning models applied to image data. They are all generated from Jupyter notebooks available on GitHub. Image classification: examples using …

  6. shap.TreeExplainer — SHAP latest documentation

    Uses Tree SHAP algorithms to explain the output of ensemble tree models. Tree SHAP is a fast and exact method to estimate SHAP values for tree models and ensembles of trees, under several …

  7. Basic SHAP Interaction Value Example in XGBoost

    This notebook shows how the SHAP interaction values for a very simple function are computed. We start with a simple linear function, and then add an interaction term to see how it changes the SHAP …
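To make the idea concrete without the library, here is a from-scratch sketch of SHAP interaction values for a tiny function, computed by brute-force enumeration (the function name and the single-sample baseline convention are illustrative, not the SHAP API; feasible only for a handful of features):

```python
from itertools import combinations
from math import factorial

def shap_interaction_values(f, x, baseline):
    """Brute-force SHAP interaction values. Off-diagonal entries split
    each pairwise interaction in half; diagonal entries hold each
    feature's main effect, so the matrix sums to f(x) - f(baseline)."""
    n = len(x)

    def eval_on(S):
        # Evaluate f with features in S taken from x, the rest from baseline.
        return f([x[j] if j in S else baseline[j] for j in range(n)])

    def shapley(i):
        # Ordinary Shapley value of feature i by subset enumeration.
        phi = 0.0
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (eval_on(set(S) | {i}) - eval_on(set(S)))
        return phi

    inter = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rest = [r for r in range(n) if r not in (i, j)]
            for k in range(n - 1):
                for S in combinations(rest, k):
                    w = factorial(k) * factorial(n - k - 2) / (2 * factorial(n - 1))
                    d = (eval_on(set(S) | {i, j}) - eval_on(set(S) | {i})
                         - eval_on(set(S) | {j}) + eval_on(set(S)))
                    inter[i][j] += w * d
    for i in range(n):  # main effects on the diagonal
        inter[i][i] = shapley(i) - sum(inter[i][j] for j in range(n) if j != i)
    return inter

# Linear function plus an interaction term, as in the notebook's setup:
f = lambda v: v[0] + v[1] + 2 * v[0] * v[1]
print(shap_interaction_values(f, [1, 1], [0, 0]))  # [[1.0, 1.0], [1.0, 1.0]]
```

The interaction term 2·x0·x1 contributes 2 in total, split as 1.0 into each off-diagonal cell, while each diagonal cell keeps the 1.0 main effect.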

  8. shap.DeepExplainer — SHAP latest documentation

    Meant to approximate SHAP values for deep learning models. This is an enhanced version of the DeepLIFT algorithm (Deep SHAP) where, similar to Kernel SHAP, we approximate the conditional …

  9. Be careful when interpreting predictive models in search of causal ...

    SHAP and other interpretability tools can be useful for causal inference, and SHAP is integrated into many causal inference packages, but those use cases are explicitly causal in nature.

  10. shap.KernelExplainer — SHAP latest documentation

    Uses the Kernel SHAP method to explain the output of any function. Kernel SHAP is a method that uses a special weighted linear regression to compute the importance of each feature.
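The weighted linear regression the snippet mentions can be sketched from scratch for a small feature count (this is an illustration of the idea, not the `shap.KernelExplainer` implementation; the function name and the single-sample baseline are assumptions, and the equality constraints at the empty and full coalitions are approximated with large weights):

```python
import numpy as np
from itertools import combinations
from math import comb

def kernel_shap(f, x, baseline):
    """Kernel SHAP for small n: enumerate every coalition and solve the
    Shapley-kernel-weighted linear regression. Features 'off' in a
    coalition are replaced by their baseline value."""
    n = len(x)
    Z, y, w = [], [], []
    for k in range(n + 1):
        for S in combinations(range(n), k):
            Z.append([1.0] + [1.0 if j in S else 0.0 for j in range(n)])
            y.append(f([x[j] if j in S else baseline[j] for j in range(n)]))
            if k in (0, n):
                w.append(1e6)  # large weight ≈ the two boundary constraints
            else:
                w.append((n - 1) / (comb(n, k) * k * (n - k)))  # Shapley kernel
    Z, y, w = np.array(Z), np.array(y), np.sqrt(np.array(w))
    coef, *_ = np.linalg.lstsq(Z * w[:, None], y * w, rcond=None)
    return coef[1:]  # coef[0] is the base value f(baseline)

phi = kernel_shap(lambda v: 2 * v[0] + 3 * v[1] + v[0] * v[2], [1, 1, 1], [0, 0, 0])
# ≈ [2.5, 3.0, 0.5]: main effects plus half of the x0·x2 interaction each
```

With the Shapley kernel weights, this regression recovers the exact Shapley values; the practical value of Kernel SHAP is that it still works when only a sample of coalitions is enumerated.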