RandNLA

RandNLA: Randomized Numerical Linear Algebra. Matrices are ubiquitous in computer science, statistics, and applied mathematics. An m × n matrix can encode information about m objects (each described by n features), or the behavior of a discretized differential operator on a finite element mesh; an n × n positive-definite matrix can encode the correlations between all pairs of n objects, or the edge-connectivity between all pairs of nodes in a social network; and so on. Motivated largely by technological developments that generate extremely large scientific and Internet datasets, the theory and practice of matrix algorithms have seen exciting developments in recent years. Particularly remarkable is the use of randomization (typically assumed to be a property of the input data due to, for example, noise in the data generation mechanisms) as an algorithmic or computational resource for the development of improved algorithms for fundamental matrix problems such as matrix multiplication, least-squares (LS) approximation, low-rank matrix approximation, and Laplacian-based linear equation solvers.
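To make the low-rank approximation case concrete, the following is a minimal NumPy sketch of one standard RandNLA technique: a randomized rank-k factorization via Gaussian sketching, in the spirit of Halko, Martinsson, and Tropp (2011). It is an illustration only, not code from this entry or the cited articles; the function name, the oversampling parameter p, and the test sizes are assumptions for the example.

```python
import numpy as np

def randomized_low_rank(A, k, p=10, seed=None):
    """Illustrative rank-k approximation of A via Gaussian sketching.

    p is an oversampling parameter; k + p random samples of range(A)
    are taken to stabilize the approximation.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sketch: compress the range of A with a random Gaussian test matrix.
    Omega = rng.standard_normal((n, k + p))
    Y = A @ Omega                       # m x (k+p) sample of range(A)
    Q, _ = np.linalg.qr(Y)              # orthonormal basis for the sample
    # Solve the small problem: project A onto the basis, SVD the result.
    B = Q.T @ A                         # (k+p) x n
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_hat
    return U[:, :k], s[:k], Vt[:k, :]   # truncated factors

# Usage: approximate a numerically rank-40 matrix of size 2000 x 500.
rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 40)) @ rng.standard_normal((40, 500))
U, s, Vt = randomized_low_rank(A, k=40, seed=0)
print(np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A))
```

The point of the sketch is the RandNLA pattern itself: a cheap randomized compression (one pass over A) reduces the problem to a small deterministic computation, trading a controlled amount of accuracy for substantial savings on large matrices.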


References in zbMATH (referenced in 22 articles)

Showing results 1 to 20 of 22, sorted by year (citations).


  1. Ma, Anna; Molitor, Denali: Randomized Kaczmarz for tensor linear systems (2022)
  2. Che, Maolin; Wei, Yimin; Yan, Hong: An efficient randomized algorithm for computing the approximate Tucker decomposition (2021)
  3. Che, Maolin; Wei, Yimin; Yan, Hong: Randomized algorithms for the low multilinear rank approximations of tensors (2021)
  4. Kalantzis, Vassilis; Xi, Yuanzhe; Horesh, Lior: Fast randomized non-Hermitian eigensolvers based on rational filtering and matrix partitioning (2021)
  5. Xiao, Chuanfu; Yang, Chao; Li, Min: Efficient alternating least squares algorithms for low multilinear rank approximation of tensors (2021)
  6. Zhang, Tao; Ning, Yang; Ruppert, David: Optimal sampling for generalized linear models under measurement constraints (2021)
  7. Casey, Michael P.: Linear dimension reduction approximately preserving a function of the $1$-norm (2020)
  8. Che, Maolin; Wei, Yimin; Yan, Hong: The computation of low multilinear rank approximations of tensors via power scheme and random projection (2020)
  9. Chen, Ting-Li; Huang, Su-Yun; Wang, Weichung: A consistency theorem for randomized singular value decomposition (2020)
  10. Erichson, N. Benjamin; Zheng, Peng; Manohar, Krithika; Brunton, Steven L.; Kutz, J. Nathan; Aravkin, Aleksandr Y.: Sparse principal component analysis via variable projection (2020)
  11. Minster, Rachel; Saibaba, Arvind K.; Kilmer, Misha E.: Randomized algorithms for low-rank tensor decompositions in the Tucker format (2020)
  12. Saibaba, Arvind K.: Randomized discrete empirical interpolation method for nonlinear model reduction (2020)
  13. Xu, Peng; Roosta, Fred; Mahoney, Michael W.: Newton-type methods for non-convex optimization under inexact Hessian information (2020)
  14. Alla, Alessandro; Kutz, J. Nathan: Randomized model order reduction (2019)
  15. Chowdhury, Agniva; Yang, Jiasen; Drineas, Petros: Structural conditions for projection-cost preservation via randomized matrix multiplication (2019)
  16. Drineas, Petros; Ipsen, Ilse C. F.: Low-rank matrix approximations do not need a singular value gap (2019)
  17. Erichson, N. Benjamin; Mathelin, Lionel; Kutz, J. Nathan; Brunton, Steven L.: Randomized dynamic mode decomposition (2019)
  18. Renaut, Rosemary A.; Helmstetter, Anthony W.; Vatankhah, Saeed: Unbiased predictive risk estimation of the Tikhonov regularization parameter: convergence with increasing rank approximations of the singular value decomposition (2019)
  19. Buhr, Andreas; Smetana, Kathrin: Randomized local model order reduction (2018)
  20. Drineas, Petros; Ipsen, Ilse C. F.; Kontopoulou, Eugenia-Maria; Magdon-Ismail, Malik: Structural convergence results for approximation of dominant subspaces from block Krylov spaces (2018)
