AdaGrad

AdaGrad (adaptive gradient algorithm): "Adaptive subgradient methods for online learning and stochastic optimization." We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms.
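
In its diagonal variant, the method keeps a running sum of squared (sub)gradients per coordinate and divides each gradient step by the square root of that sum, so rarely active but highly predictive coordinates retain comparatively large step sizes. The Python sketch below illustrates that update under stated assumptions; the function name, step size, iteration count, and test problem are illustrative choices, not taken from the paper.

    import numpy as np

    def adagrad(grad, x0, lr=0.1, eps=1e-8, n_steps=500):
        """Minimal sketch of the diagonal AdaGrad update (illustrative, not the paper's full algorithm)."""
        x = np.asarray(x0, dtype=float)
        g_sq_sum = np.zeros_like(x)                    # running sum of squared (sub)gradients
        for _ in range(n_steps):
            g = grad(x)                                # (sub)gradient at the current iterate
            g_sq_sum += g ** 2                         # per-coordinate accumulation
            x -= lr * g / (np.sqrt(g_sq_sum) + eps)    # coordinate-wise adaptive step
        return x

    # Illustrative use: a badly scaled quadratic f(x) = x[0]**2 + 100 * x[1]**2,
    # where per-coordinate adaptation helps compared with one global step size.
    x_star = adagrad(lambda x: np.array([2.0 * x[0], 200.0 * x[1]]), x0=[5.0, 5.0])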


References in zbMATH (referenced in 66 articles, 1 standard article)

Showing results 1 to 20 of 66, sorted by year (citations).


  1. Cichosz, Paweł: A case study in text mining of discussion forum posts: classification with bag of words and global vectors (2019)
  2. Heinlein, Alexander; Klawonn, Axel; Lanser, Martin; Weber, Janine: Machine learning in adaptive domain decomposition methods -- predicting the geometric location of constraints (2019)
  3. Holland, Matthew J.; Ikeda, Kazushi: Efficient learning with robust gradient descent (2019)
  4. Hu, Yaohua; Yu, Carisa Kwok Wai; Yang, Xiaoqi: Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions (2019)
  5. Kovachki, Nikola B.; Stuart, Andrew M.: Ensemble Kalman inversion: a derivative-free technique for machine learning tasks (2019)
  6. Luo, Zhijian; Qian, Yuntao: Stochastic sub-sampled Newton method with variance reduction (2019)
  7. Michelioudakis, Evangelos; Artikis, Alexander; Paliouras, Georgios: Semi-supervised online structure learning for composite event recognition (2019)
  8. Milzarek, Andre; Xiao, Xiantao; Cen, Shicong; Wen, Zaiwen; Ulbrich, Michael: A stochastic semismooth Newton method for nonsmooth nonconvex optimization (2019)
  9. Powell, Warren B.: A unified framework for stochastic optimization (2019)
  10. Sun, Tao; Barrio, Roberto; Jiang, Hao; Cheng, Lizhi: Convergence rates of accelerated proximal gradient algorithms under independent noise (2019)
  11. Yang, Shuoguang; Wang, Mengdi; Fang, Ethan X.: Multilevel stochastic gradient methods for nested composition optimization (2019)
  12. Yu, Carisa Kwok Wai; Hu, Yaohua; Yang, Xiaoqi; Choy, Siu Kai: Abstract convergence theorem for quasi-convex optimization problems with applications (2019)
  13. Achab, Massil; Bacry, Emmanuel; Gaïffas, Stéphane; Mastromatteo, Iacopo; Muzy, Jean-François: Uncovering causality from multivariate Hawkes integrated cumulants (2018)
  14. Baydin, Atılım Güneş; Pearlmutter, Barak A.; Radul, Alexey Andreyevich; Siskind, Jeffrey Mark: Automatic differentiation in machine learning: a survey (2018)
  15. Bottou, Léon; Curtis, Frank E.; Nocedal, Jorge: Optimization methods for large-scale machine learning (2018)
  16. Chan, Shing; Elsheikh, Ahmed H.: A machine learning approach for efficient uncertainty quantification using multiscale methods (2018)
  17. Chaudhari, Pratik; Oberman, Adam; Osher, Stanley; Soatto, Stefano; Carlier, Guillaume: Deep relaxation: partial differential equations for optimizing deep neural networks (2018)
  18. Chen, R.; Menickelly, M.; Scheinberg, K.: Stochastic optimization using a trust-region method and random models (2018)
  19. Duchi, John C.; Ruan, Feng: Stochastic methods for composite and weakly convex optimization problems (2018)
  20. Hu, Jiang; Milzarek, Andre; Wen, Zaiwen; Yuan, Yaxiang: Adaptive quadratically regularized Newton method for Riemannian optimization (2018)
