AutoAugment

AutoAugment: Learning Augmentation Policies from Data. Data augmentation is an effective technique for improving the accuracy of modern image classifiers. However, current data augmentation implementations are manually designed. In this paper, we describe a simple procedure called AutoAugment to automatically search for improved data augmentation policies. In our implementation, we have designed a search space where a policy consists of many sub-policies, one of which is randomly chosen for each image in each mini-batch. A sub-policy consists of two operations, each operation being an image processing function such as translation, rotation, or shearing, and the probabilities and magnitudes with which the functions are applied. We use a search algorithm to find the best policy such that the neural network yields the highest validation accuracy on a target dataset. Our method achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data). On ImageNet, we attain a Top-1 accuracy of 83.5% which is 0.4% better than the previous record of 83.1%. On CIFAR-10, we achieve an error rate of 1.5%, which is 0.6% better than the previous state-of-the-art. Augmentation policies we find are transferable between datasets. The policy learned on ImageNet transfers well to achieve significant improvements on other datasets, such as Oxford Flowers, Caltech-101, Oxford-IIIT Pets, FGVC Aircraft, and Stanford Cars.
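The policy structure the abstract describes can be sketched in a few lines: a policy is a list of sub-policies, each sub-policy is two (operation, probability, magnitude) triples, and at apply time one sub-policy is sampled per image and each of its operations fires with its own probability. The operation names, magnitudes, and the `ops` table below are illustrative placeholders, not the paper's learned policies; a real implementation would map the names to PIL or torchvision image transforms.

```python
import random

# Hypothetical policy: each sub-policy is two (operation, probability,
# magnitude) triples, mirroring the search space described in the abstract.
# These specific triples are made up for illustration.
POLICY = [
    [("rotate", 0.7, 2), ("translate_x", 0.3, 9)],
    [("shear_y", 0.5, 8), ("rotate", 0.9, 3)],
]

def apply_policy(image, policy, ops, rng=random):
    """Sample one sub-policy at random, then apply each of its two
    operations with that operation's own probability and magnitude."""
    sub_policy = rng.choice(policy)
    for name, prob, magnitude in sub_policy:
        if rng.random() < prob:
            image = ops[name](image, magnitude)
    return image

# Toy "operations" acting on a placeholder image (here just a log of the
# transforms applied); swap in real image-processing functions in practice.
ops = {
    "rotate": lambda img, m: img + [("rotate", m)],
    "translate_x": lambda img, m: img + [("translate_x", m)],
    "shear_y": lambda img, m: img + [("shear_y", m)],
}

augmented = apply_policy([], POLICY, ops, rng=random.Random(0))
```

During the search phase, the paper's controller proposes candidate policies of this shape and scores each by the validation accuracy of a network trained with it; the sketch above only covers the application side.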


References in zbMATH (referenced in 9 articles)

Showing results 1 to 9 of 9.
Sorted by year (citations)

  1. Ghods, Alireza; Cook, Diane J.: A survey of deep network techniques all classifiers can adopt (2021)
  2. Koo, Bongyeong; Choi, Han-Soo; Kang, Myungjoo: Simple feature pyramid network for weakly supervised object localization using multi-scale information (2021)
  3. Kortylewski, Adam; Liu, Qing; Wang, Angtian; Sun, Yihong; Yuille, Alan: Compositional convolutional neural networks: a robust and interpretable model for object recognition under occlusion (2021)
  4. Liang, Senwei; Khoo, Yuehaw; Yang, Haizhao: Drop-activation: implicit parameter reduction and harmonious regularization (2021)
  5. Mark Weber, Huiyu Wang, Siyuan Qiao, Jun Xie, Maxwell D. Collins, Yukun Zhu, Liangzhe Yuan, Dahun Kim, Qihang Yu, Daniel Cremers, Laura Leal-Taixe, Alan L. Yuille, Florian Schroff, Hartwig Adam, Liang-Chieh Chen: DeepLab2: A TensorFlow Library for Deep Labeling (2021) arXiv
  6. Pittorino, Fabrizio; Lucibello, Carlo; Feinauer, Christoph; Perugini, Gabriele; Baldassi, Carlo; Demyanenko, Elizaveta; Zecchina, Riccardo: Entropic gradient descent algorithms and wide flat minima (2021)
  7. Shaferman, Vitaly; Schwegel, Michael; Glück, Tobias; Kugi, Andreas: Continuous-time least-squares forgetting algorithms for indirect adaptive control (2021)
  8. Yu, Suxiang; Zhang, Shuai; Wang, Bin; Dun, Hua; Xu, Long; Huang, Xin; Shi, Ermin; Feng, Xinxing: Generative adversarial network based data augmentation to improve cervical cell classification model (2021)
  9. Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, Colin Raffel: FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence (2020) arXiv