DeepFool

DeepFool: a simple and accurate method to fool deep neural networks. State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, carefully crafted perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and in making classifiers more robust.
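
As context for the abstract above, here is a minimal, illustrative sketch of the multiclass DeepFool iteration: the classifier is repeatedly linearized around the current point, and the smallest step onto the nearest linearized decision boundary is taken until the predicted label changes. It assumes PyTorch and a model mapping a batch of one image to a vector of class logits; the function name `deepfool` and the parameter defaults are chosen here for illustration and are not a fixed API (the small overshoot factor mirrors the one used in the paper).

```python
# Illustrative sketch of the DeepFool iteration (Moosavi-Dezfooli et al., 2016).
# Assumptions: PyTorch model, input x with batch dimension 1.
import torch

def deepfool(model, x, num_classes=10, overshoot=0.02, max_iter=50):
    """Return a perturbation r such that model(x + r) changes label, plus the new label."""
    x = x.clone().detach()
    x_adv = x.clone().detach().requires_grad_(True)

    orig_label = model(x_adv).argmax().item()
    k_hat = orig_label
    r_total = torch.zeros_like(x)

    for _ in range(max_iter):
        if k_hat != orig_label:
            break  # label already flipped
        logits = model(x_adv)
        # Gradient of the original class score w.r.t. the input
        grad_orig = torch.autograd.grad(logits[0, orig_label], x_adv,
                                        retain_graph=True)[0]
        best_ratio, best_w = float("inf"), None
        for k in range(num_classes):
            if k == orig_label:
                continue
            grad_k = torch.autograd.grad(logits[0, k], x_adv,
                                         retain_graph=True)[0]
            w_k = grad_k - grad_orig                      # normal of boundary k
            f_k = (logits[0, k] - logits[0, orig_label]).item()
            ratio = abs(f_k) / (w_k.norm() + 1e-12)       # distance to boundary k
            if ratio < best_ratio:
                best_ratio, best_w = ratio, w_k
        # Minimal step onto the closest linearized decision boundary
        r_i = (best_ratio + 1e-4) * best_w / (best_w.norm() + 1e-12)
        r_total = r_total + r_i
        x_adv = (x + (1 + overshoot) * r_total).detach().requires_grad_(True)
        k_hat = model(x_adv).argmax().item()

    return (1 + overshoot) * r_total, k_hat
```

On a trained classifier, averaging ||r|| / ||x|| over a test set yields the kind of robustness estimate the abstract refers to.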


References in zbMATH (referenced in 21 articles)

Showing results 1 to 20 of 21, sorted by year (citations).

  1. Koh, Pang Wei; Steinhardt, Jacob; Liang, Percy: Stronger data poisoning attacks break data sanitization defenses (2022)
  2. Liao, Ningyi; Wang, Shufan; Xiang, Liyao; Ye, Nanyang; Shao, Shuo; Chu, Pengzhi: Achieving adversarial robustness via sparsity (2022)
  3. Adcock, Ben; Dexter, Nick: The gap between theory and practice in function approximation with deep neural networks (2021)
  4. Mohammadinejad, Sara; Paulsen, Brandon; Deshmukh, Jyotirmoy V.; Wang, Chao: DiffRNN: differential verification of recurrent neural networks (2021)
  5. Sotoudeh, Matthew; Thakur, Aditya V.: SyReNN: a tool for analyzing deep neural networks (2021)
  6. Tran, Hoang-Dung; Pal, Neelanjana; Manzanas Lopez, Diego; Musau, Patrick; Yang, Xiaodong; Nguyen, Luan Viet; Xiang, Weiming; Bak, Stanley; Johnson, Taylor T.: Verification of piecewise deep neural networks: a star set approach with zonotope pre-filter (2021)
  7. Wei, Xingxing; Guo, Ying; Li, Bo: Black-box adversarial attacks by manipulating image attributes (2021)
  8. Xiang, Zhen; Miller, David J.; Wang, Hang; Kesidis, George: Detecting scene-plausible perceptible backdoors in trained DNNs without access to the training set (2021)
  9. Yang, Pengfei; Li, Jianlin; Liu, Jiangchao; Huang, Cheng-Chao; Li, Renjue; Chen, Liqian; Huang, Xiaowei; Zhang, Lijun: Enhancing robustness verification for deep neural networks via symbolic propagation (2021)
  10. Zhang, Chen; Tang, Zhuo; Zuo, Youfei; Li, Kenli; Li, Keqin: A robust generative classifier against transfer attacks based on variational auto-encoders (2021)
  11. Anirudh, Rushil; Thiagarajan, Jayaraman J.; Kailkhura, Bhavya; Bremer, Peer-Timo: MimicGAN: robust projection onto image manifolds with corruption mimicking (2020)
  12. Calzavara, Stefano; Lucchese, Claudio; Tolomei, Gabriele; Abebe, Seyum Assefa; Orlando, Salvatore: Treant: training evasion-aware decision trees (2020)
  13. Croce, Francesco; Rauber, Jonas; Hein, Matthias: Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks (2020)
  14. Cui, Chunfeng; Zhang, Kaiqi; Daulbaev, Talgat; Gusak, Julia; Oseledets, Ivan; Zhang, Zheng: Active subspace of neural networks: structural analysis and universal attacks (2020)
  15. Huang, Xiaowei; Kroening, Daniel; Ruan, Wenjie; Sharp, James; Sun, Youcheng; Thamo, Emese; Wu, Min; Yi, Xinping: A survey of safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability (2020)
  16. Khattar, Sahil; Rama Krishna, C.: Adversarial attack to fool object detector (2020)
  17. Xu, Jian; Liu, Heng; Wu, Dexin; Zhou, Fucai; Gao, Chong-zhi; Jiang, Linzhi: Generating universal adversarial perturbation with ResNet (2020)
  18. Benning, Martin; Celledoni, Elena; Ehrhardt, Matthias J.; Owren, Brynjulf; Schönlieb, Carola-Bibiane: Deep learning as optimal control problems: models and numerical methods (2019)
  19. Dreossi, Tommaso; Donzé, Alexandre; Seshia, Sanjit A.: Compositional falsification of cyber-physical systems with machine learning components (2019)
  20. Fawzi, Alhussein; Fawzi, Omar; Frossard, Pascal: Analysis of classifiers’ robustness to adversarial perturbations (2018)
