RMSprop: Divide the gradient by a running average of its recent magnitude. RMSprop [tieleman2012rmsprop] is an optimizer that normalizes each parameter's gradient by a running average of that gradient's recent magnitude, giving every parameter its own adaptive step size.
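The update rule described above can be sketched as follows. This is a minimal, self-contained illustration (function and parameter names are my own; the default hyperparameters follow common practice, e.g. decay 0.9), not the reference implementation:

```python
import numpy as np

def rmsprop_update(w, grad, avg_sq, lr=0.001, decay=0.9, eps=1e-8):
    """One RMSprop step.

    avg_sq is the running average of the squared gradient; the gradient
    is divided by its root (the "recent magnitude") before the step.
    """
    # Update the exponential moving average of the squared gradient.
    avg_sq = decay * avg_sq + (1.0 - decay) * grad**2
    # Scale the step by the root-mean-square magnitude (eps avoids division by zero).
    w = w - lr * grad / (np.sqrt(avg_sq) + eps)
    return w, avg_sq

# Example: a few steps on f(w) = w^2, whose gradient is 2w.
w = np.array([1.0])
avg_sq = np.zeros_like(w)
for _ in range(100):
    w, avg_sq = rmsprop_update(w, 2.0 * w, avg_sq)
```

Because the divisor adapts per coordinate, parameters with persistently large gradients take smaller effective steps than those with small gradients.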
References in zbMATH (referenced in 10 articles)
- Baydin, Atılım Güneş; Pearlmutter, Barak A.; Radul, Alexey Andreyevich; Siskind, Jeffrey Mark: Automatic differentiation in machine learning: a survey (2018)
- Bottou, Léon; Curtis, Frank E.; Nocedal, Jorge: Optimization methods for large-scale machine learning (2018)
- Chan, Shing; Elsheikh, Ahmed H.: A machine learning approach for efficient uncertainty quantification using multiscale methods (2018)
- Fischer, Thomas; Krauss, Christopher: Deep learning with long short-term memory networks for financial market predictions (2018)
- Lee, Seunghye; Ha, Jingwan; Zokhirova, Mehriniso; Moon, Hyeonjoon; Lee, Jaehong: Background information of deep learning for structural engineering (2018)
- Mandt, Stephan; Hoffman, Matthew D.; Blei, David M.: Stochastic gradient descent as approximate Bayesian inference (2017)
- Khan, Mohammad Emtiyaz; Liu, Zuozhu; Tangkaratt, Voot; Gal, Yarin: Vprop: variational inference using RMSprop (2017) arXiv
- Sukhbaatar, Sainbayar; Szlam, Arthur; Synnaeve, Gabriel; Chintala, Soumith; Fergus, Rob: MazeBase: a sandbox for learning from games (2015) arXiv
- Schmidhuber, Jürgen: Deep learning in neural networks: an overview (2015) ioport
- Kingma, Diederik P.; Ba, Jimmy: Adam: a method for stochastic optimization (2014) arXiv