InterpretML: A Unified Framework for Machine Learning Interpretability

InterpretML is an open-source Python package that makes machine learning interpretability algorithms available to practitioners and researchers. It exposes two types of interpretability: glassbox models, which are machine learning models designed for interpretability (e.g., linear models, rule lists, generalized additive models), and blackbox explainability techniques for explaining existing systems (e.g., Partial Dependence, LIME). The package lets practitioners easily compare interpretability algorithms by exposing multiple methods under a unified API and by providing a built-in, extensible visualization platform. InterpretML also includes the first implementation of the Explainable Boosting Machine, a powerful, interpretable glassbox model that can be as accurate as many blackbox models. The MIT-licensed source code can be downloaded from this http URL.
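The Explainable Boosting Machine mentioned above is a generalized additive model: its prediction is an intercept plus a sum of per-feature shape functions, which is what makes each feature's contribution directly inspectable. The following is a minimal pure-Python sketch of that additive structure only; the shape functions here are hypothetical toy lookups, not ones learned by InterpretML, and `gam_score` is an illustrative name, not part of the package's API.

```python
# Toy illustration of a GAM-style glassbox prediction:
#   score(x) = intercept + sum_i f_i(x_i)
# where each f_i is a per-feature shape function that can be
# plotted and inspected on its own.

def gam_score(x, intercept, shape_functions):
    """Additive score: every feature contributes independently."""
    return intercept + sum(f(xi) for f, xi in zip(shape_functions, x))

# Hypothetical "learned" shape functions for two features.
f_age = lambda age: 0.05 * (age - 40)            # older -> higher score
f_income = lambda inc: -0.2 if inc < 30_000 else 0.1

x = (50, 45_000)
score = gam_score(x, intercept=0.3, shape_functions=[f_age, f_income])

# Interpretability comes from reading off each feature's contribution:
contributions = [f(xi) for f, xi in zip([f_age, f_income], x)]
print(score, contributions)  # 0.9 [0.5, 0.1]
```

Because the model is a sum of one-dimensional functions, a global explanation is simply a plot of each shape function, and a local explanation is the list of per-feature contributions for one input.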
References in zbMATH (referenced in 2 articles)
- Kacper Sokol; Alexander Hepburn; Rafael Poyiadzi; Matthew Clifford; Raul Santos-Rodriguez; Peter Flach: FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems (2020) not zbMATH
- Hubert Baniecki; Przemyslaw Biecek: modelStudio: Interactive Studio with Explanations for ML Predictive Models (2019) not zbMATH