Parallel stochastic gradient algorithms for large-scale matrix completion.

This paper develops Jellyfish, an algorithm for solving data-processing problems with matrix-valued decision variables regularized to have low rank. Particular examples of problems solvable by Jellyfish include matrix completion problems and least-squares problems regularized by the nuclear norm or $\gamma_2$-norm. Jellyfish implements a projected incremental gradient method with a biased, random ordering of the increments. This biased ordering allows for a parallel implementation that achieves a speed-up nearly proportional to the number of processors. On large-scale matrix completion tasks, Jellyfish is orders of magnitude more efficient than existing codes. For example, on the Netflix Prize data set, prior art computes rating predictions in approximately 4 hours, while Jellyfish solves the same problem in under 3 minutes on a 12-core workstation.
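The core idea behind the biased ordering can be sketched as follows: after randomly permuting rows and columns, the matrix is partitioned into a p x p grid of blocks, and each "round" of the epoch processes p blocks that touch disjoint rows and columns, so their SGD updates never conflict and can run on separate cores. The sketch below is a minimal serial illustration of that schedule on a factored model $X = LR^T$, not the authors' implementation; the function name `jellyfish_epoch` and all parameter choices are assumptions for illustration.

```python
import numpy as np

def jellyfish_epoch(obs, L, R, lr, lam, p, rng):
    """One epoch in the spirit of Jellyfish: permute rows/columns at
    random, bucket observations into a p x p grid of blocks, and sweep
    p rounds of p row/column-disjoint blocks.  Blocks within a round
    are independent and could be dispatched to separate cores; here
    they are executed serially for clarity.

    obs: list of (i, j, value) observed entries; L, R: factor matrices.
    """
    m, n = L.shape[0], R.shape[0]
    rb = rng.permutation(m) % p          # random row -> block assignment
    cb = rng.permutation(n) % p          # random column -> block assignment
    cells = {}                           # bucket obs by (row-block, col-block)
    for i, j, v in obs:
        cells.setdefault((rb[i], cb[j]), []).append((i, j, v))
    for d in range(p):                   # p rounds per epoch
        # Round d processes blocks (k, (k+d) mod p): disjoint rows/cols.
        for k in range(p):
            for i, j, v in cells.get((k, (k + d) % p), []):
                e = L[i] @ R[j] - v      # residual on this entry
                gL = e * R[j] + lam * L[i]
                R[j] -= lr * (e * L[i] + lam * R[j])
                L[i] -= lr * gL
```

Because each round's blocks share no rows or columns of `L` and `R`, replacing the inner `for k` loop with a thread or process pool requires no locking, which is what yields the near-linear speed-up reported in the abstract.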

References in zbMATH (referenced in 25 articles)

Showing results 1 to 20 of 25, sorted by year (citations).


  1. Boffi, Nicholas M.; Slotine, Jean-Jacques E.: A continuous-time analysis of distributed stochastic gradient (2020)
  2. Godichon-Baggioni, Antoine; Saadane, Sofiane: On the rates of convergence of parallelized averaged stochastic gradient algorithms (2020)
  3. Sun, Ruoyu; Luo, Zhi-Quan; Ye, Yinyu: On the efficiency of random permutation for ADMM and coordinate descent (2020)
  4. Yu, Ming; Gupta, Varun; Kolar, Mladen: Recovery of simultaneous low rank and two-way sparse coefficient matrices, a nonconvex approach (2020)
  5. Gürbüzbalaban, M.; Ozdaglar, A.; Parrilo, P. A.: Convergence rate of incremental gradient and incremental Newton methods (2019)
  6. Kumar, Rajiv; Willemsen, Bram; Herrmann, Felix J.; Malcolm, Alison: Enabling numerically exact local solver for waveform inversion -- a low-rank approach (2019)
  7. Mishra, Bamdev; Kasai, Hiroyuki; Jawanpuria, Pratik; Saroop, Atul: A Riemannian gossip approach to subspace learning on Grassmann manifold (2019)
  8. Yang, Shuoguang; Wang, Mengdi; Fang, Ethan X.: Multilevel stochastic gradient methods for nested composition optimization (2019)
  9. Cottet, Vincent; Alquier, Pierre: 1-bit matrix completion: PAC-Bayesian analysis of a variational approximation (2018)
  10. Yin, Penghang; Xin, Jack; Qi, Yingyong: Linear feature transform and enhancement of classification on deep neural network (2018)
  11. Ashraphijuo, Morteza; Wang, Xiaodong; Aggarwal, Vaneet: Rank determination for low-rank data completion (2017)
  12. Boyd, Nicholas; Schiebinger, Geoffrey; Recht, Benjamin: The alternating descent conditional gradient method for sparse inverse problems (2017)
  13. Bhaskar, Sonia A.: Probabilistic low-rank matrix completion from quantized measurements (2016)
  14. Tappenden, Rachael; Richtárik, Peter; Gondzio, Jacek: Inexact coordinate descent: complexity and preconditioning (2016)
  15. Boumal, Nicolas; Absil, P.-A.: Low-rank matrix completion via preconditioned optimization on the Grassmann manifold (2015)
  16. Harchaoui, Zaid; Juditsky, Anatoli; Nemirovski, Arkadi: Conditional gradient algorithms for norm-regularized smooth convex optimization (2015)
  17. Lin, An-Ya; Ling, Qing: Decentralized and privacy-preserving low-rank matrix completion (2015)
  18. Mackey, Lester; Talwalkar, Ameet; Jordan, Michael I.: Distributed matrix completion and robust factorization (2015)
  19. Mai, The Tien; Alquier, Pierre: A Bayesian approach for noisy matrix completion: optimal rate under general sampling distribution (2015)
  20. Mankad, Shawn; Michailidis, George: Analysis of multiview legislative networks with structured matrix factorization: does Twitter influence translate to the real world? (2015)
