Poblano is a Matlab toolbox of large-scale algorithms for unconstrained nonlinear optimization. Its algorithms require only first-order derivative information (i.e., the gradient of a scalar-valued objective function taking vector inputs) and can therefore scale to very large problems. The driving application for Poblano's development has been tensor decompositions in data analysis (bibliometric analysis, social network analysis, chemometrics, etc.).

Poblano's optimizers find local minimizers, converging to a stationary point where the gradient is approximately zero; a line search satisfying the strong Wolfe conditions is used to guarantee global convergence. The optimization methods include several nonlinear conjugate gradient methods (Fletcher-Reeves, Polak-Ribière, Hestenes-Stiefel), a limited-memory quasi-Newton method using BFGS updates to approximate second-order derivative information, and a truncated Newton method using finite differences to approximate second-order derivative information.
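As an illustration of the same three gradient-based method families (nonlinear conjugate gradient, limited-memory BFGS, and truncated Newton), here is a minimal sketch in Python using SciPy — not Poblano's Matlab API — minimizing the standard Rosenbrock test function from first-order information only:

```python
# Sketch using SciPy analogues of Poblano's method families:
# nonlinear CG, limited-memory BFGS, and truncated Newton (Newton-CG).
# Only function values and gradients are supplied by the user.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Rosenbrock function: a standard smooth, unconstrained test objective
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    # Analytic gradient: the only derivative information these methods need
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

x0 = np.array([-1.2, 1.0])
results = {}
for method in ("CG", "L-BFGS-B", "Newton-CG"):
    res = minimize(f, x0, jac=grad, method=method)
    results[method] = res.x
    # Each optimizer converges to a stationary point where the gradient ~ 0
    print(method, res.x, np.linalg.norm(grad(res.x)))
```

Each run terminates near the minimizer (1, 1) with a gradient norm close to zero, mirroring the stationary-point convergence criterion described above.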

References in zbMATH (referenced in 12 articles)

Showing results 1 to 12 of 12.
Sorted by year (citations)

  1. De Sterck, Hans; He, Yunhui: On the asymptotic linear convergence speed of Anderson acceleration, Nesterov acceleration, and nonlinear GMRES (2021)
  2. Cipolla, S.; Di Fiore, C.; Zellini, P.: A variation of Broyden class methods using Householder adaptive transforms (2020)
  3. Mitchell, Drew; Ye, Nan; De Sterck, Hans: Nesterov acceleration of alternating least squares for canonical tensor decomposition: momentum step size selection and restart mechanisms (2020)
  4. Cipolla, Stefano; Durastante, Fabio: Fractional PDE constrained optimization: an optimize-then-discretize approach with L-BFGS and approximate inverse preconditioning (2018)
  5. Wang, Yuepeng; Cheng, Yue; Navon, I. Michael; Guan, Yuanhong: Parameter identification techniques applied to an environmental pollution model (2018)
  6. De Sterck, Hans; Howse, Alexander: Nonlinearly preconditioned optimization on Grassmann manifolds for computing approximate Tucker tensor decompositions (2016)
  7. Rao, Vishwas; Sandu, Adrian: A time-parallel approach to strong-constraint four-dimensional variational data assimilation (2016)
  8. Carlberg, Kevin; Tuminaro, Ray; Boggs, Paul: Preserving Lagrangian structure in nonlinear model reduction with application to structural dynamics (2015)
  9. De Sterck, Hans; Winlaw, Manda: A nonlinearly preconditioned conjugate gradient algorithm for rank-(R) canonical tensor approximation (2015)
  10. Filipović, Marko; Jukić, Ante: Tucker factorization with missing data with application to low-(n)-rank tensor completion (2015)
  11. De Sterck, Hans: Steepest descent preconditioning for nonlinear GMRES optimization (2013)
  12. De Sterck, H.: A nonlinear GMRES optimization algorithm for canonical tensor decomposition (2012)