Grad-CAM

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. We propose a technique for producing "visual explanations" for decisions from a large class of CNN-based models, making them more transparent. Our approach, Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Grad-CAM is applicable to a wide variety of CNN model families: (1) CNNs with fully-connected layers, (2) CNNs used for structured outputs, and (3) CNNs used in tasks with multimodal inputs or reinforcement learning, all without architectural changes or re-training. We combine Grad-CAM with fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to off-the-shelf image classification, captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into their failure modes, (b) are robust to adversarial images, (c) outperform previous methods on localization, (d) are more faithful to the underlying model, and (e) help achieve generalization by identifying dataset bias. For captioning and VQA, we show that even non-attention-based models can localize inputs. We devise a way to identify important neurons through Grad-CAM and combine it with neuron names to provide textual explanations for model decisions. Finally, we design and conduct human studies to measure whether Grad-CAM helps users establish appropriate trust in predictions from models, and we show that Grad-CAM helps untrained users successfully discern a 'stronger' model from a 'weaker' one even when both make identical predictions.
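
The map described above reduces to a short recipe: backpropagate the score of the target class to the final convolutional layer, global-average-pool those gradients to obtain one weight per feature map, take the weighted sum of the maps, and apply a ReLU to keep only the regions with a positive influence on the class. The following is a minimal PyTorch sketch of that recipe, not the authors' reference implementation; the choice of ResNet-50 with its layer4 block as the target layer, the hook-based plumbing, and the grad_cam helper are illustrative assumptions.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Pretrained classifier; any CNN with a final convolutional block works.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

    activations, gradients = {}, {}

    def save_activation(module, inputs, output):
        activations["value"] = output.detach()

    def save_gradient(module, grad_input, grad_output):
        gradients["value"] = grad_output[0].detach()

    # Hook the last convolutional stage (layer4 in torchvision's ResNet-50).
    model.layer4.register_forward_hook(save_activation)
    model.layer4.register_full_backward_hook(save_gradient)

    def grad_cam(x, class_idx=None):
        """Grad-CAM heatmaps for an input batch x of shape (N, 3, H, W)."""
        scores = model(x)                      # forward pass records activations
        if class_idx is None:
            class_idx = scores.argmax(dim=1)   # explain the predicted class
        model.zero_grad()
        scores[torch.arange(x.size(0)), class_idx].sum().backward()
        # Per-channel weights: global average pooling of the gradients.
        alpha = gradients["value"].mean(dim=(2, 3), keepdim=True)  # (N, K, 1, 1)
        # Weighted combination of the forward activation maps, then ReLU.
        cam = F.relu((alpha * activations["value"]).sum(dim=1))   # (N, h, w)
        # Upsample to input resolution and normalize to [0, 1] per image.
        cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                            mode="bilinear", align_corners=False).squeeze(1)
        cam = cam - cam.amin(dim=(1, 2), keepdim=True)
        return cam / cam.amax(dim=(1, 2), keepdim=True).clamp(min=1e-8)

To approximate the high-resolution class-discriminative visualization mentioned above, one would multiply this upsampled map elementwise with a fine-grained gradient visualization such as guided backpropagation.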


References in zbMATH (referenced in 21 articles, 1 standard article)

Showing results 1 to 20 of 21, sorted by year (citations).


  1. Ras, Gabrielle; Xie, Ning; van Gerven, Marcel; Doran, Derek: Explainable deep learning: a field guide for the uninitiated (2022)
  2. Bi, Xin; Zhang, Chao; He, Yao; Zhao, Xiangguo; Sun, Yongjiao; Ma, Yuliang: Explainable time-frequency convolutional neural network for microseismic waveform classification (2021)
  3. Bogaerts, Bart; Gamba, Emilio; Guns, Tias: A framework for step-wise explaining how to solve constraint satisfaction problems (2021)
  4. Burkart, Nadia; Huber, Marco F.: A survey on the explainability of supervised machine learning (2021)
  5. Carmichael, Iain; Calhoun, Benjamin C.; Hoadley, Katherine A.; Troester, Melissa A.; Geradts, Joseph; Couture, Heather D.; Olsson, Linnea; Perou, Charles M.; Niethammer, Marc; Hannig, Jan; Marron, J. S.: Joint and individual analysis of breast cancer histologic images and genomic covariates (2021)
  6. Cozman, Fabio Gagliardi; Munhoz, Hugo Neri: Some thoughts on knowledge-enhanced machine learning (2021)
  7. Ghadai, Sambit; Lee, Xian Yeow; Balu, Aditya; Sarkar, Soumik; Krishnamurthy, Adarsh: Multi-resolution 3D CNN for learning multi-scale spatial features in CAD models (2021)
  8. Jaume, Guillaume; Pati, Pushpak; Anklin, Valentin; Foncubierta, Antonio; Gabrani, Maria: HistoCartography: a toolkit for graph analytics in digital pathology (2021) arXiv
  9. Hoyt, Christopher; Owen, Art B.: Efficient estimation of the ANOVA mean dimension, with an application to neural net classification (2021)
  10. Kiermayer, Mark; Weiß, Christian: Grouping of contracts in insurance using neural networks (2021)
  11. Li, Tianlin; Liu, Aishan; Liu, Xianglong; Xu, Yitao; Zhang, Chongzhi; Xie, Xiaofei: Understanding adversarial robustness via critical attacking route (2021)
  12. Müller, Heimo; Holzinger, Andreas: Kandinsky patterns (2021)
  13. Olson, Matthew L.; Khanna, Roli; Neal, Lawrence; Li, Fuxin; Wong, Weng-Keen: Counterfactual state explanations for reinforcement learning agents via generative deep learning (2021)
  14. Qi, Zhongang; Khorram, Saeed; Fuxin, Li: Embedding deep networks into visual explanations (2021)
  15. Wu, Mike; Parbhoo, Sonali; Hughes, Michael C.; Roth, Volker; Doshi-Velez, Finale: Optimizing for interpretability in deep neural networks with tree regularization (2021)
  16. Xiang, Zhen; Miller, David J.; Wang, Hang; Kesidis, George: Detecting scene-plausible perceptible backdoors in trained DNNs without access to the training set (2021)
  17. Huang, Xiaowei; Kroening, Daniel; Ruan, Wenjie; Sharp, James; Sun, Youcheng; Thamo, Emese; Wu, Min; Yi, Xinping: A survey of safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability (2020)
  18. Kokhlikyan, Narine; Miglani, Vivek; Martin, Miguel; Wang, Edward; Alsallakh, Bilal; Reynolds, Jonathan; Melnikov, Alexander; Kliushkina, Natalia; Araya, Carlos; Yan, Siqi; Reblitz-Richardson, Orion: Captum: a unified and generic model interpretability library for PyTorch (2020) arXiv
  19. Pang, Ren; Zhang, Zheng; Gao, Xiangshan; Xi, Zhaohan; Ji, Shouling; Cheng, Peng; Wang, Ting: TROJANZOO: everything you ever wanted to know about neural backdoors (but were afraid to ask) (2020) arXiv
  20. Ghadai, Sambit; Balu, Aditya; Sarkar, Soumik; Krishnamurthy, Adarsh: Learning localized features in 3D CAD models for manufacturability analysis of drilled holes (2018)
