CIFAR

The CIFAR-10 and CIFAR-100 datasets are labeled subsets of the 80 million tiny images dataset. They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.

The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly selected images from each class. The training batches contain the remaining images in random order, so an individual training batch may contain more images from one class than another; between them, the training batches contain exactly 5000 images from each class.

The CIFAR-100 dataset is just like CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. The 100 classes in CIFAR-100 are grouped into 20 superclasses. Each image comes with a "fine" label (the class to which it belongs) and a "coarse" label (the superclass to which it belongs).
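
As an illustration of the structure described above, here is a minimal loading sketch. It assumes the TensorFlow/Keras dataset helpers (tensorflow.keras.datasets.cifar10 and cifar100), which are not part of the original distribution; the official release ships the same data as Python pickle batches.

```python
# Minimal sketch: inspect CIFAR-10 and CIFAR-100 via the Keras dataset helpers.
# Assumes TensorFlow is installed; this is one common way to access the data,
# not the dataset's native pickle-batch format.
from tensorflow.keras.datasets import cifar10, cifar100

# CIFAR-10: 50000 training and 10000 test images, 32x32 RGB, labels 0..9.
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print(x_train.shape, x_test.shape)  # (50000, 32, 32, 3) (10000, 32, 32, 3)

# CIFAR-100: same image size, with 100 "fine" classes grouped into 20 "coarse" superclasses.
(_, y_fine), _ = cifar100.load_data(label_mode="fine")
(_, y_coarse), _ = cifar100.load_data(label_mode="coarse")
print(len(set(y_fine.ravel())), len(set(y_coarse.ravel())))  # 100 20
```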


References in zbMATH (referenced in 167 articles)

Showing results 1 to 20 of 167, sorted by year (citations).


  1. Baskerville, Nicholas P.; Keating, Jonathan P.; Mezzadri, Francesco; Najnudel, Joseph: A spin glass model for the loss surfaces of generative adversarial networks (2022)
  2. Chen, Qipin; Hao, Wenrui; He, Juncai: A weight initialization based on the linear product structure for neural networks (2022)
  3. Jain, Niharika; Olmo, Alberto; Sengupta, Sailik; Manikonda, Lydia; Kambhampati, Subbarao: Imperfect imaGANation: implications of GANs exacerbating biases on facial data augmentation and snapchat face lenses (2022)
  4. Jones, Corinne; Roulet, Vincent; Harchaoui, Zaid: Discriminative clustering with representation learning with any ratio of labeled to unlabeled data (2022)
  5. Knoblauch, Andreas: On the antiderivatives of (x^p/(1 - x)) with an application to optimize loss functions for classification with neural networks (2022)
  6. Lakhmiri, Dounia; Le Digabel, Sébastien: Use of static surrogates in hyperparameter optimization (2022)
  7. Lomonaco, Vincenzo; Pellegrini, Lorenzo; Rodriguez, Pau; Caccia, Massimo; She, Qi; Chen, Yu; Jodelet, Quentin; Wang, Ruiping; Mai, Zheda; Vazquez, David; Parisi, German I.; Churamani, Nikhil; Pickett, Marc; Laradji, Issam; Maltoni, Davide: CVPR 2020 continual learning in computer vision competition: approaches, results, current challenges and future directions (2022)
  8. Reiners, Malena; Klamroth, Kathrin; Heldmann, Fabian; Stiglmayr, Michael: Efficient and sparse neural networks by pruning weights in a multiobjective learning approach (2022)
  9. Salti, Mehmet; Kangal, Evrim Ersin: Deep learning of CMB radiation temperature (2022)
  10. Truong, Tuyen Trung; Nguyen, Hang-Tuan: Backtracking gradient descent method and some applications in large scale optimisation. I: theory (2022)
  11. Watanabe, Satoru; Yamana, Hayato: Topological measurement of deep neural networks using persistent homology (2022)
  12. Zhou, Yi; Liang, Yingbin; Zhang, Huishuai: Understanding generalization error of SGD in nonconvex optimization (2022)
  13. Avelin, Benny; Nyström, Kaj: Neural ODEs as the deep limit of ResNets with constant weights (2021)
  14. Bemporad, Alberto; Piga, Dario: Global optimization based on active preference learning with radial basis functions (2021)
  15. Boubekki, Ahcène; Kampffmeyer, Michael; Brefeld, Ulf; Jenssen, Robert: Joint optimization of an autoencoder for clustering and embedding (2021)
  16. Castera, Camille; Bolte, Jérôme; Févotte, Cédric; Pauwels, Edouard: An inertial Newton algorithm for deep learning (2021)
  17. Cauchois, Maxime; Gupta, Suyash; Duchi, John C.: Knowing what you know: valid and validated confidence sets in multiclass and multilabel prediction (2021)
  18. Cheng, Yichen; Wang, Xinlei; Xia, Yusen: Supervised t-distributed stochastic neighbor embedding for data visualization and classification (2021)
  19. Chen, Jiyu; Guo, Yiwen; Zheng, Qianjun; Chen, Hao: Protect privacy of deep classification networks by exploiting their generative power (2021)
  20. Cristofari, Andrea; Rinaldi, Francesco: A derivative-free method for structured optimization problems (2021)
