THE MNIST DATABASE of handwritten digits

The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image. It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal effort on preprocessing and formatting.
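The files on the MNIST page are distributed in the IDX binary format: a big-endian header (a magic number, 2051 for image files, followed by the counts of images, rows, and columns) and then the raw unsigned pixel bytes. As a minimal sketch of how such a file can be parsed with only the Python standard library (the function name and the synthetic demo buffer are illustrative, not part of the dataset's tooling):

```python
import struct

def parse_idx_images(buf: bytes):
    """Parse an IDX3 image file as distributed on the MNIST page.

    Header layout (big-endian): magic number 2051 (0x00000803),
    then the number of images, rows, and columns, followed by the
    raw unsigned pixel bytes, one image after another.
    """
    magic, n_images, n_rows, n_cols = struct.unpack(">IIII", buf[:16])
    if magic != 0x00000803:
        raise ValueError(f"not an IDX3 image file (magic={magic:#x})")
    pixels = buf[16:]
    if len(pixels) != n_images * n_rows * n_cols:
        raise ValueError("pixel payload does not match header counts")
    # Split the flat byte stream into one bytes object per image.
    size = n_rows * n_cols
    images = [pixels[i * size:(i + 1) * size] for i in range(n_images)]
    return n_rows, n_cols, images

# Tiny synthetic buffer with two 2x2 "images", just to exercise the parser.
demo = struct.pack(">IIII", 0x00000803, 2, 2, 2) + bytes(range(8))
rows, cols, imgs = parse_idx_images(demo)
print(rows, cols, len(imgs))  # → 2 2 2
```

For the real files (e.g. `train-images-idx3-ubyte`), the header would report 60,000 images of 28×28 pixels; the label files use the same layout with magic number 2049 and no row/column fields.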

References in zbMATH (referenced in 304 articles)

Showing results 1 to 20 of 304, sorted by year (citations).


  1. An, Baiguo; Feng, Guozhong; Guo, Jianhua: Interaction identification and clique screening for classification with ultra-high dimensional discrete features (2022)
  2. Atashgahi, Zahra; Sokar, Ghada; van der Lee, Tim; Mocanu, Elena; Mocanu, Decebal Constantin; Veldhuis, Raymond; Pechenizkiy, Mykola: Quick and robust feature selection: the strength of energy-efficient sparse training for autoencoders (2022)
  3. Chen, Qipin; Hao, Wenrui; He, Juncai: A weight initialization based on the linear product structure for neural networks (2022)
  4. Chen, Tyler; Greenbaum, Anne; Musco, Cameron; Musco, Christopher: Error bounds for Lanczos-based matrix function approximation (2022)
  5. Douek-Pinkovich, Yifat; Ben-Gal, Irad; Raviv, Tal: The stochastic test collection problem: models, exact and heuristic solution approaches (2022)
  6. Flores, Mauricio; Calder, Jeff; Lerman, Gilad: Analysis and algorithms for (\ell_p)-based semi-supervised learning on graphs (2022)
  7. Fuji, Terunari; Poirion, Pierre-Louis; Takeda, Akiko: Convexification with bounded gap for randomly projected quadratic optimization (2022)
  8. Grabovoy, A. V.; Strijov, V. V.: Probabilistic interpretation of the distillation problem (2022)
  9. Gürbüzbalaban, Mert; Ruszczyński, Andrzej; Zhu, Landi: A stochastic subgradient method for distributionally robust non-convex and non-smooth learning (2022)
  10. Holden, Matthew; Pereyra, Marcelo; Zygalakis, Konstantinos C.: Bayesian imaging with data-driven priors encoded by neural networks (2022)
  11. Khouja, Rima; Mattei, Pierre-Alexandre; Mourrain, Bernard: Tensor decomposition for learning Gaussian mixtures from moments (2022)
  12. Lakhmiri, Dounia; Le Digabel, Sébastien: Use of static surrogates in hyperparameter optimization (2022)
  13. Li, Boyue; Li, Zhize; Chi, Yuejie: Destress: Computation-optimal and communication-efficient decentralized nonconvex finite-sum optimization (2022)
  14. Lindeberg, Tony: Scale-covariant and scale-invariant Gaussian derivative networks (2022)
  15. Li, Shao-Yuan; Shi, Ye; Huang, Sheng-Jun; Chen, Songcan: Improving deep label noise learning with dual active label correction (2022)
  16. Liu, Haoran; Xiong, Haoyi; Wang, Yaqing; An, Haozhe; Dou, Dejing; Wu, Dongrui: Exploring the common principal subspace of deep features in neural networks (2022)
  17. Park, Seonho; Adosoglou, George; Pardalos, Panos M.: Interpreting rate-distortion of variational autoencoder and using model uncertainty for anomaly detection (2022)
  18. Pfannschmidt, Karlson; Gupta, Pritha; Haddenhorst, Björn; Hüllermeier, Eyke: Learning context-dependent choice functions (2022)
  19. Reiners, Malena; Klamroth, Kathrin; Heldmann, Fabian; Stiglmayr, Michael: Efficient and sparse neural networks by pruning weights in a multiobjective learning approach (2022)
  20. Rudin, Cynthia; Chen, Chaofan; Chen, Zhi; Huang, Haiyang; Semenova, Lesia; Zhong, Chudi: Interpretable machine learning: fundamental principles and 10 grand challenges (2022)
