SHAP (SHapley Additive exPlanations) is a unified approach to explain the output of any machine learning model. SHAP connects game theory with local explanations, uniting several previous methods [1-7] and representing the only possible consistent and locally accurate additive feature attribution method based on expectations (see our papers for details and citations).
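The game-theoretic idea behind the method can be illustrated without the `shap` package itself: compute exact Shapley values for a tiny model by enumerating coalitions, with "absent" features replaced by baseline values. This is a brute-force sketch of the attribution scheme, not the library's (far more efficient) implementation; the toy model `f`, the inputs, and the baseline below are hypothetical choices of mine.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for each feature of x under model f,
    with 'absent' features replaced by their baseline values."""
    n = len(x)

    def val(coalition):
        # Model output when only features in `coalition` take their x value.
        z = [x[j] if j in coalition else baseline[j] for j in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            # Shapley weight for coalitions of this size.
            w = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                # Marginal contribution of feature i to coalition S.
                phi += w * (val(set(S) | {i}) - val(set(S)))
        phis.append(phi)
    return phis

# Hypothetical toy model: linear in z[0], interaction between z[1] and z[2].
f = lambda z: 2 * z[0] + z[1] * z[2]
x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]

phi = shapley_values(f, x, baseline)
# Local accuracy (additivity): attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (f(x) - f(baseline))) < 1e-9
```

Note how the interaction term's contribution of 6 is split evenly between the two symmetric features, while the linear term is attributed entirely to its own feature; this exhaustive enumeration is exponential in the number of features, which is why the library relies on model-specific approximations.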

References in zbMATH (referenced in 61 articles)

Showing results 1 to 20 of 61, sorted by year (citations).


  1. Baptista, Marcia L.; Goebel, Kai; Henriques, Elsa M. P.: Relation between prognostics predictor evaluation metrics and local interpretability SHAP values (2022)
  2. Bastos, João A.; Matos, Sara M.: Explainable models of credit losses (2022)
  3. Blanquero, Rafael; Carrizosa, Emilio; Molero-Río, Cristina; Romero Morales, Dolores: On sparse optimal regression trees (2022)
  4. Coma-Puig, Bernat; Carmona, Josep: Non-technical losses detection in energy consumption focusing on energy recovery and explainability (2022)
  5. Davila-Pena, Laura; García-Jurado, Ignacio; Casas-Méndez, Balbina: Assessment of the influence of features on a classification problem: an application to COVID-19 patients (2022)
  6. Hurley, Catherine B.; O’Connell, Mark; Domijan, Katarina: Interactive slice visualization for exploring machine learning models (2022)
  7. Kampel, Ludwig; Simos, Dimitris E.; Kuhn, D. Richard; Kacker, Raghu N.: An exploration of combinatorial testing-based approaches to fault localization for explainable AI (2022)
  8. Lellep, Martin; Prexl, Jonathan; Eckhardt, Bruno; Linkmann, Moritz: Interpreted machine learning in fluid dynamics: explaining relaminarisation events in wall-bounded shear flows (2022)
  9. Livshits, Ester; Kimelfeld, Benny: The Shapley value of inconsistency measures for functional dependencies (2022)
  10. Longo, Luigi; Riccaboni, Massimo; Rungi, Armando: A neural network ensemble approach for GDP forecasting (2022)
  11. Ras, Gabrielle; Xie, Ning; van Gerven, Marcel; Doran, Derek: Explainable deep learning: a field guide for the uninitiated (2022)
  12. Van den Broeck, Guy; Lykov, Anton; Schleich, Maximilian; Suciu, Dan: On the tractability of SHAP explanations (2022)
  13. Wilming, Rick; Budding, Céline; Müller, Klaus-Robert; Haufe, Stefan: Scrutinizing XAI using linear ground-truth data with suppressor variables (2022)
  14. Aas, Kjersti; Jullum, Martin; Løland, Anders: Explaining individual predictions when features are dependent: more accurate approximations to Shapley values (2021)
  15. Bogaerts, Bart; Gamba, Emilio; Guns, Tias: A framework for step-wise explaining how to solve constraint satisfaction problems (2021)
  16. Burkart, Nadia; Huber, Marco F.: A survey on the explainability of supervised machine learning (2021)
  17. Carrizosa, Emilio; Molero-Río, Cristina; Romero Morales, Dolores: Mathematical optimization in classification and regression trees (2021)
  18. Cheng, Lu; Varshney, Kush R.; Liu, Huan: Socially responsible AI algorithms: issues, purposes, and challenges (2021)
  19. Confalonieri, Roberto; Weyde, Tillman; Besold, Tarek R.; Moscoso del Prado Martín, Fermín: Using ontologies to enhance human understandability of global post-hoc explanations of black-box models (2021)
  20. Delen, Dursun; Zolbanin, Hamed M.; Crosby, Durand; Wright, David: To imprison or not to imprison: an analytics model for drug courts (2021)
