Inception-v4

Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture, which has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to that of the latest-generation Inception-v3 network. This raises the question of whether there is any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual networks and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge.
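The activation scaling mentioned in the abstract refers to down-weighting the output of the residual branch before it is added to the shortcut, which the paper reports stabilizes training when the number of filters is very large. A minimal NumPy sketch of the idea is shown below; the function name `scaled_residual_block` and the toy linear map standing in for the Inception sub-network are illustrative assumptions, not the paper's code.

```python
import numpy as np

def scaled_residual_block(x, branch_fn, scale=0.1):
    """Shortcut plus a scaled residual branch.

    The branch activations are multiplied by a small constant
    (scales around 0.1-0.3 are reported to stabilize very wide
    residual Inception networks) before the elementwise addition.
    """
    return x + scale * branch_fn(x)

# Toy residual branch: a fixed linear map standing in for the
# Inception sub-network (for illustration only).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
branch = lambda x: x @ W

x = np.ones(4)
y = scaled_residual_block(x, branch, scale=0.1)
```

Because the scale multiplies only the branch, the identity path is untouched, so early in training the block behaves close to an identity mapping regardless of how large the branch activations grow.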


References in zbMATH (referenced in 28 articles)

Showing results 1 to 20 of 28.
Sorted by year (citations)


  1. Amerini, Irene; Anagnostopoulos, Aris; Maiano, Luca; Celsi, Lorenzo Ricciardi: Deep learning for multimedia forensics (2021)
  2. Cauchois, Maxime; Gupta, Suyash; Duchi, John C.: Knowing what you know: valid and validated confidence sets in multiclass and multilabel prediction (2021)
  3. Chung, Eric; Leung, Wing Tat; Pun, Sai-Mang; Zhang, Zecheng: A multi-stage deep learning based algorithm for multiscale model reduction (2021)
  4. Jacquier, Pierre; Abdedou, Azzedine; Delmas, Vincent; Soulaïmani, Azzeddine: Non-intrusive reduced-order modeling using uncertainty-aware deep neural networks and proper orthogonal decomposition: application to flood modeling (2021)
  5. Makinen, T. Lucas; Charnock, Tom; Alsing, Justin; Wandelt, Benjamin D.: Lossless, scalable implicit likelihood inference for cosmological fields (2021)
  6. Salam, Abdulwahed; El Hibaoui, Abdelaaziz: Energy consumption prediction model with deep inception residual network inspiration and LSTM (2021)
  7. Škrlj, Blaž; Martinc, Matej; Lavrač, Nada; Pollak, Senja: autoBOT: evolving neuro-symbolic representations for explainable low resource text classification (2021)
  8. Sun, Caixia; Zou, Lian; Fan, Cien; Shi, Yu; Liu, Yifeng: Enhancing adversarial attack transferability with multi-scale feature attack (2021)
  9. Yang, Hongfei; Ding, Xiaofeng; Chan, Raymond; Hu, Hui; Peng, Yaxin; Zeng, Tieyong: A new initialization method based on normed statistical spaces in deep networks (2021)
  10. He, Juanjuan; Xiang, Song; Zhu, Ziqi: A deep fully residual convolutional neural network for segmentation in EM images (2020)
  11. Karumuri, Sharmila; Tripathy, Rohit; Bilionis, Ilias; Panchal, Jitesh: Simulator-free solution of high-dimensional stochastic elliptic partial differential equations using deep neural networks (2020)
  12. Liu, Li; Ouyang, Wanli; Wang, Xiaogang; Fieguth, Paul; Chen, Jie; Liu, Xinwang; Pietikäinen, Matti: Deep learning for generic object detection: a survey (2020)
  13. Ruehle, Fabian: Data science applications to string theory (2020)
  14. Schmidt-Hieber, Johannes: Nonparametric regression using deep neural networks with ReLU activation function (2020)
  15. Shao, Wenqi; Li, Jingyu; Ren, Jiamin; Zhang, Ruimao; Wang, Xiaogang; Luo, Ping: SSN: learning sparse switchable normalization via SparsestMax (2020)
  16. Sharma, Vipul; Mir, Roohie Naaz: A comprehensive and systematic look up into deep learning based object detection techniques: a review (2020)
  17. Tibo, Alessandro; Jaeger, Manfred; Frasconi, Paolo: Learning and interpreting multi-multi-instance learning networks (2020)
  18. Trekin, A. N.; Ignatiev, V. Yu.; Yakubovskii, P. Ya.: Deep neural networks for determining the parameters of buildings from single-shot satellite imagery (2020)
  19. Wang, Yi; Zhang, Hao; Chae, Kum Ju; Choi, Younhee; Jin, Gong Yong; Ko, Seok-Bum: Novel convolutional neural network architecture for improved pulmonary nodule classification on computed tomography (2020)
  20. Huan, Er-Yang; Wen, Gui-Hua: Multilevel and multiscale feature aggregation in deep networks for facial constitution classification (2019)
