AdaGrad

ADAGRAD (adaptive gradient algorithm), introduced in "Adaptive subgradient methods for online learning and stochastic optimization": We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms.
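
The commonly used diagonal variant of this idea scales each coordinate's step by the inverse square root of that coordinate's accumulated squared gradients, so rarely active but predictive features retain comparatively large learning rates. The following is a minimal NumPy sketch of that update under these assumptions; the function name, the step size lr, and the least-squares test problem are illustrative choices, not taken from the paper.

```python
import numpy as np

def adagrad_update(x, grad, accum, lr=0.1, eps=1e-8):
    """One diagonal-AdaGrad step: each coordinate is scaled by the
    inverse root of its accumulated squared gradients (illustrative sketch)."""
    accum += grad ** 2                        # per-coordinate sum of squared gradients
    x -= lr * grad / (np.sqrt(accum) + eps)   # rarely updated coordinates keep larger steps
    return x, accum

# Illustrative usage on a least-squares objective 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 5)), rng.normal(size=50)
x, accum = np.zeros(5), np.zeros(5)
for _ in range(500):
    grad = A.T @ (A @ x - b)
    x, accum = adagrad_update(x, grad, accum, lr=0.5)
print("final objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```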


References in zbMATH (referenced in 113 articles, 1 standard article)

Showing results 1 to 20 of 113, sorted by year (citations).


  1. Barakat, Anas; Bianchi, Pascal: Convergence and dynamical behavior of the ADAM algorithm for nonconvex stochastic optimization (2021)
  2. Duchi, John C.; Ruan, Feng: Asymptotic optimality in stochastic optimization (2021)
  3. Liu, Yang; Roosta, Fred: Convergence of Newton-MR under inexact Hessian information (2021)
  4. Aggarwal, Charu C.: Linear algebra and optimization for machine learning. A textbook (2020)
  5. Akyildiz, Ömer Deniz; Crisan, Dan; Míguez, Joaquín: Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization (2020)
  6. Boffi, Nicholas M.; Slotine, Jean-Jacques E.: A continuous-time analysis of distributed stochastic gradient (2020)
  7. Burkhart, Michael C.; Brandman, David M.; Franco, Brian; Hochberg, Leigh R.; Harrison, Matthew T.: The discriminative Kalman filter for Bayesian filtering with nonlinear and non-Gaussian observation models (2020)
  8. Daskalakis, Emmanouil; Herrmann, Felix J.; Kuske, Rachel: Accelerating sparse recovery by reducing chatter (2020)
  9. De, Subhayan; Maute, Kurt; Doostan, Alireza: Bi-fidelity stochastic gradient descent for structural optimization under uncertainty (2020)
  10. Erway, Jennifer B.; Griffin, Joshua; Marcia, Roummel F.; Omheni, Riadh: Trust-region algorithms for training responses: machine learning methods using indefinite Hessian approximations (2020)
  11. Geng, Zhenglin; Johnson, Daniel; Fedkiw, Ronald: Coercing machine learning to output physically accurate results (2020)
  12. Hu, Jiang; Liu, Xin; Wen, Zai-Wen; Yuan, Ya-Xiang: A brief introduction to manifold optimization (2020)
  13. Jiang, Bo; Lin, Tianyi; Zhang, Shuzhong: A unified adaptive tensor approximation scheme to accelerate composite convex optimization (2020)
  14. Joulani, Pooria; György, András; Szepesvári, Csaba: A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds (2020)
  15. Kissas, Georgios; Yang, Yibo; Hwuang, Eileen; Witschey, Walter R.; Detre, John A.; Perdikaris, Paris: Machine learning in cardiovascular flows modeling: predicting arterial blood pressure from non-invasive 4D flow MRI data using physics-informed neural networks (2020)
  16. Kotłowski, Wojciech: Scale-invariant unconstrained online learning (2020)
  17. Krivorotko, Olga; Kabanikhin, Sergey; Zhang, Shuhua; Kashtanova, Victoriya: Global and local optimization in identification of parabolic systems (2020)
  18. Kylasa, Sudhir; Fang, Chih-Hao; Roosta, Fred; Grama, Ananth: Parallel optimization techniques for machine learning (2020)
  19. Lee, Kookjin; Carlberg, Kevin T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders (2020)
  20. Lei, Lihua; Jordan, Michael I.: On the adaptivity of stochastic gradient-based optimization (2020)
