DeCAF

DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition. We evaluate whether features extracted from the activations of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed for novel generic tasks. Our generic tasks may differ significantly from the original training tasks, and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state of the art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters, to enable vision researchers to experiment with deep representations across a range of visual concept learning paradigms.
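The recipe the abstract describes (freeze a supervised ImageNet network, read off an intermediate activation as a feature vector, and train a simple classifier for the new task) can be sketched in a few lines. The sketch below is illustrative only: it assumes a torchvision AlexNet as a stand-in for the original DeCAF network and weights, a hypothetical dataset directory path/to/new_task, and a scikit-learn logistic regression in place of the linear SVMs used in the paper's evaluations.

```python
# Minimal sketch of the DeCAF idea: a CNN trained on ImageNet is frozen and
# its penultimate activations are used as a fixed, generic feature for a new
# task. Assumptions (not from the original release): torchvision AlexNet
# stands in for the DeCAF network; the dataset path is hypothetical.
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from sklearn.linear_model import LogisticRegression

# Pretrained AlexNet; drop the final classification layer so the forward
# pass returns the 4096-d penultimate activations (a DeCAF7-like feature).
net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
net.classifier = torch.nn.Sequential(*list(net.classifier.children())[:-1])
net.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical target-task dataset, laid out one folder per class.
dataset = ImageFolder("path/to/new_task", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=False)

features, labels = [], []
with torch.no_grad():
    for images, targets in loader:
        features.append(net(images))   # fixed deep activation features
        labels.append(targets)
X = torch.cat(features).numpy()
y = torch.cat(labels).numpy()

# A simple linear classifier on the frozen features; the paper evaluates
# features from several network levels in this way.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Truncating the network at different depths (keeping fewer or more classifier layers) corresponds to the paper's comparison of features defined at various network levels.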


References in zbMATH (referenced in 15 articles)


  1. Yang, Liran; Zhong, Ping: Robust adaptation regularization based on within-class scatter for domain adaptation (2020)
  2. Gao, Depeng; Liu, Jiafeng; Wu, Rui; Cheng, Dansong; Fan, Xiaopeng; Tang, Xianglong: Utilizing relevant RGB-D data to help recognize RGB images in the target domain (2019)
  3. Li, Shan; Deng, Weihong: Reliable crowdsourcing and deep locality-preserving learning for unconstrained facial expression recognition (2019)
  4. Shafieezadeh-Abadeh, Soroosh; Kuhn, Daniel; Esfahani, Peyman Mohajerin: Regularization via mass transportation (2019)
  5. Zhou, Joey Tianyi; Pan, Sinno Jialin; Tsang, Ivor W.: A deep learning framework for hybrid heterogeneous transfer learning (2019)
  6. Flamary, Rémi; Cuturi, Marco; Courty, Nicolas; Rakotomamonjy, Alain: Wasserstein discriminant analysis (2018)
  7. Li, Jun; Chang, Heyou; Yang, Jian; Luo, Wei; Fu, Yun: Visual representation and classification by learning group sparse deep stacking network (2018)
  8. Zheng, Charles; Achanta, Rakesh; Benjamini, Yuval: Extrapolating expected accuracies for large multi-class problems (2018)
  9. Verma, Yashaswi; Jawahar, C. V.: Image annotation by propagating labels from semantic neighbourhoods (2017)
  10. Ganin, Yaroslav; Ustinova, Evgeniya; Ajakan, Hana; Germain, Pascal; Larochelle, Hugo; Laviolette, François; Marchand, Mario; Lempitsky, Victor: Domain-adversarial training of neural networks (2016)
  11. Kouw, Wouter M.; van der Maaten, Laurens J. P.; Krijthe, Jesse H.; Loog, Marco: Feature-level domain adaptation (2016)
  12. McDonald, Andrew M.; Pontil, Massimiliano; Stamos, Dimitris: New perspectives on k-support and cluster norms (2016)
  13. Zhou, Peicheng; Cheng, Gong; Liu, Zhenbao; Bu, Shuhui; Hu, Xintao: Weakly supervised target detection in remote sensing images based on transferred deep features and negative bootstrapping (2016)
  14. Schmidhuber, Jürgen: Deep learning in neural networks: an overview (2015)
  15. Hoffman, Judy; Rodner, Erik; Donahue, Jeff; Kulis, Brian; Saenko, Kate: Asymmetric and category invariant feature transformations for domain adaptation (2014)