SAGA

SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives.

In this work we introduce a new optimisation method called SAGA, in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and supports composite objectives where a proximal operator is applied to the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and it adapts to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method.
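To make the abstract concrete, here is a minimal sketch of a SAGA-style iteration for a composite objective min_x (1/n) Σ_i f_i(x) + h(x): it keeps a table of the most recently evaluated component gradients, forms an unbiased variance-reduced gradient estimate, and applies the proximal operator of the regulariser. The function names (`saga`, `grad_i`, `prox_h`) and their interfaces are illustrative assumptions, not taken from the paper or any library.

```python
import numpy as np

def saga(grad_i, prox_h, x0, n, step, iters, rng=None):
    """Sketch of a SAGA-style loop for min_x (1/n) sum_i f_i(x) + h(x).

    grad_i(i, x): gradient of the i-th smooth component f_i at x (assumed callable).
    prox_h(v, step): proximal operator of the regulariser h (assumed callable).
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.array(x0, dtype=float)

    # Table of stored component gradients and their running mean.
    table = np.stack([grad_i(i, x) for i in range(n)])
    table_mean = table.mean(axis=0)

    for _ in range(iters):
        j = rng.integers(n)
        g_new = grad_i(j, x)
        # Variance-reduced estimate: unbiased for the full gradient of the smooth part.
        v = g_new - table[j] + table_mean
        # Proximal (composite) step handles the regulariser h.
        x = prox_h(x - step * v, step)
        # Refresh the stored gradient for component j and the running mean in O(d).
        table_mean += (g_new - table[j]) / n
        table[j] = g_new
    return x
```

For a plain smooth problem one can pass `prox_h = lambda v, s: v`; for an ℓ1 regulariser with weight `lam`, the proximal operator is soft-thresholding, e.g. `lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s * lam, 0.0)`. This is only a sketch under the stated assumptions; step-size choices and convergence guarantees are as analysed in the paper.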


References in zbMATH (referenced in 104 articles)

Showing results 1 to 20 of 104, sorted by year (citations).


  1. Jin, Bangti; Zhou, Zehui; Zou, Jun: An analysis of stochastic variance reduced gradient for linear inverse problems (2022)
  2. Sha, Xingyu; Zhang, Jiaqi; You, Keyou; Zhang, Kaiqing; Başar, Tamer: Fully asynchronous policy evaluation in distributed reinforcement learning over networks (2022)
  3. Xin, Ran; Khan, Usman A.; Kar, Soummya: Fast decentralized nonconvex finite-sum optimization with recursive variance reduction (2022)
  4. Alacaoglu, Ahmet; Malitsky, Yura; Cevher, Volkan: Forward-reflected-backward method with variance reduction (2021)
  5. Belomestny, Denis; Iosipoi, Leonid; Moulines, Eric; Naumov, Alexey; Samsonov, Sergey: Variance reduction for dependent sequences with applications to stochastic gradient MCMC (2021)
  6. Benning, Martin; Betcke, Marta M.; Ehrhardt, Matthias J.; Schönlieb, Carola-Bibiane: Choose your path wisely: gradient descent in a Bregman distance framework (2021)
  7. Bian, Fengmiao; Liang, Jingwei; Zhang, Xiaoqun: A stochastic alternating direction method of multipliers for non-smooth and non-convex optimization (2021)
  8. Borovykh, A.; Kantas, N.; Parpas, P.; Pavliotis, G. A.: On stochastic mirror descent with interacting particles: convergence properties and variance reduction (2021)
  9. Chen, Chenxi; Chen, Yunmei; Ye, Xiaojing: A randomized incremental primal-dual method for decentralized consensus optimization (2021)
  10. Cui, Shisheng; Shanbhag, Uday V.: On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems (2021)
  11. Driggs, Derek; Tang, Junqi; Liang, Jingwei; Davies, Mike; Schönlieb, Carola-Bibiane: A stochastic proximal alternating minimization for nonsmooth and nonconvex optimization (2021)
  12. Duchi, John C.; Glynn, Peter W.; Namkoong, Hongseok: Statistics of robust optimization: a generalized empirical likelihood approach (2021)
  13. Duchi, John C.; Ruan, Feng: Asymptotic optimality in stochastic optimization (2021)
  14. Gower, Robert M.; Richtárik, Peter; Bach, Francis: Stochastic quasi-gradient methods: variance reduction via Jacobian sketching (2021)
  15. Gu, Bin; Wei, Xiyuan; Gao, Shangqian; Xiong, Ziran; Deng, Cheng; Huang, Heng: Black-box reductions for zeroth-order gradient algorithms to achieve lower query complexity (2021)
  16. Gürbüzbalaban, M.; Ozdaglar, A.; Parrilo, P. A.: Why random reshuffling beats stochastic gradient descent (2021)
  17. Hanzely, Filip; Richtárik, Peter: Fastest rates for stochastic mirror descent methods (2021)
  18. Hendrikx, Hadrien; Bach, Francis; Massoulié, Laurent: An optimal algorithm for decentralized finite-sum optimization (2021)
  19. Hu, Bin; Seiler, Peter; Lessard, Laurent: Analysis of biased stochastic gradient descent using sequential semidefinite programs (2021)
  20. Jalilzadeh, Afrooz: Primal-dual incremental gradient method for nonsmooth and convex optimization problems (2021)
