SeldonIO/alibi: Algorithms for explaining machine learning models


Project name:

SeldonIO/alibi

Project URL:

https://github.com/SeldonIO/alibi

Primary language:

Python 99.9%

Introduction:

Alibi Logo



Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.

If you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project alibi-detect.


Anchor explanations for images


Integrated Gradients for text


Counterfactual examples


Accumulated Local Effects


Installation and Usage

Alibi can be installed from:

  • PyPI or GitHub source (with pip)
  • Anaconda (with conda/mamba)

With pip

  • Alibi can be installed from PyPI:

    pip install alibi
  • Alternatively, the development version can be installed:

    pip install git+https://github.com/SeldonIO/alibi.git 
  • To take advantage of distributed computation of explanations, install alibi with ray:

    pip install alibi[ray]
  • For SHAP support, install alibi as follows:

    pip install alibi[shap]

With conda

To install from conda-forge it is recommended to use mamba, which can be installed into the base conda environment with:

conda install mamba -n base -c conda-forge
  • For the standard Alibi install:

    mamba install -c conda-forge alibi
  • For distributed computing support:

    mamba install -c conda-forge alibi ray
  • For SHAP support:

    mamba install -c conda-forge alibi shap
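
Either route can be sanity-checked by importing the package and printing its version:

python -c "import alibi; print(alibi.__version__)"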

Usage

The alibi explanation API takes inspiration from scikit-learn, consisting of distinct initialize, fit and explain steps. We will use the AnchorTabular explainer to illustrate the API:

from alibi.explainers import AnchorTabular

# initialize and fit explainer by passing a prediction function and any other required arguments
explainer = AnchorTabular(predict_fn, feature_names=feature_names, category_map=category_map)
explainer.fit(X_train)

# explain an instance
explanation = explainer.explain(x)

The explanation returned is an Explanation object with attributes meta and data. meta is a dictionary containing the explainer metadata and any hyperparameters, while data is a dictionary containing everything related to the computed explanation. For example, for the Anchor algorithm the explanation can be accessed via explanation.data['anchor'] (or equivalently explanation.anchor). The exact fields available vary from method to method, so we encourage the reader to become familiar with the types of methods supported.
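
For instance, an AnchorTabular explanation carries the anchor predicates together with their estimated precision and coverage. A minimal sketch, continuing the snippet above:

# continuing the AnchorTabular example above
print(explanation.meta['name'])  # explainer name, here 'AnchorTabular'
print(explanation.anchor)        # the anchor: a list of feature predicates
print(explanation.precision)     # observed precision of the anchor on perturbed samples
print(explanation.coverage)      # fraction of instances the anchor applies to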

Supported Methods

The following tables summarize the possible use cases for each method.

Model Explanations

Method                     Models         Explanations   Classification  Regression  Tabular  Text  Images  Categorical features  Train set required  Distributed
ALE                        BB             global         ✔               ✔           ✔
Anchors                    BB             local          ✔                           ✔        ✔     ✔       ✔                     For Tabular         ✔
CEM                        BB*, TF/Keras  local          ✔                           ✔              ✔                             Optional
Counterfactuals            BB*, TF/Keras  local          ✔                           ✔              ✔                             No
Prototype Counterfactuals  BB*, TF/Keras  local          ✔                           ✔              ✔       ✔                     Optional
Counterfactuals with RL    BB             local          ✔                           ✔              ✔       ✔                     Yes
Integrated Gradients       TF/Keras       local          ✔               ✔           ✔        ✔     ✔       ✔                     Optional
Kernel SHAP                BB             local, global  ✔               ✔           ✔                      ✔                     Yes                 ✔
Tree SHAP                  WB             local, global  ✔               ✔           ✔                      ✔                     Optional
Similarity explanations    WB             local          ✔               ✔           ✔        ✔     ✔       ✔                     Yes

Model Confidence

These algorithms provide instance-specific scores measuring the model's confidence in a particular prediction.

Method             Models  Classification  Regression  Tabular  Text    Images  Categorical features  Train set required
Trust Scores       BB      ✔                           ✔        ✔ (1)   ✔ (2)                         Yes
Linearity Measure  BB      ✔               ✔           ✔                ✔                             Optional

Key:

  • BB - black-box (only require a prediction function)
  • BB* - black-box but assume model is differentiable
  • WB - requires white-box model access. There may be limitations on models supported
  • TF/Keras - TensorFlow models via the Keras API
  • Local - instance-specific explanation: why was this prediction made?
  • Global - explains the model with respect to a set of instances
  • (1) - depending on model
  • (2) - may require dimensionality reduction
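
As a sketch of the confidence API, Trust Scores follow the same fit/score pattern as the explainers. The toy dataset and classifier below are illustrative assumptions, not taken from this README:

from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from alibi.confidence import TrustScore

# toy 3-class dataset and classifier (illustrative only)
X, y = make_blobs(n_samples=300, centers=3, random_state=0)
clf = LogisticRegression().fit(X, y)

# fit per-class nearest-neighbour structures on the training data,
# then score the trust in each test-time prediction
ts = TrustScore()
ts.fit(X, y, classes=3)
score, closest_class = ts.score(X, clf.predict(X), k=2)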

Prototypes

These algorithms provide a distilled view of the dataset and help construct an interpretable 1-KNN classifier (see the sketch after the table).

Method       Classification  Regression  Tabular  Text  Images  Categorical features  Train set labels
ProtoSelect  ✔                           ✔        ✔     ✔       ✔                     Optional
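
The selected prototypes can then back a nearest-prototype classifier. A minimal scikit-learn sketch, where protos, proto_labels and X_test are assumed to come from a prior ProtoSelect summarisation (hypothetical names, not from this README):

from sklearn.neighbors import KNeighborsClassifier

# 1-KNN over the prototypes: each prediction is justified by the single
# nearest prototype, which serves as its explanation
knn = KNeighborsClassifier(n_neighbors=1).fit(protos, proto_labels)
pred = knn.predict(X_test)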


Citations

If you use alibi in your research, please consider citing it.

BibTeX entry:

@article{JMLR:v22:21-0017,
  author  = {Janis Klaise and Arnaud Van Looveren and Giovanni Vacanti and Alexandru Coca},
  title   = {Alibi Explain: Algorithms for Explaining Machine Learning Models},
  journal = {Journal of Machine Learning Research},
  year    = {2021},
  volume  = {22},
  number  = {181},
  pages   = {1-7},
  url     = {http://jmlr.org/papers/v22/21-0017.html}
}


