UOBYQA
UOBYQA: unconstrained optimization by quadratic approximation. A new algorithm for general unconstrained optimization calculations is described. It takes account of the curvature of the objective function by forming quadratic models by interpolation; consequently, no first derivatives are required. A typical iteration of the algorithm generates a new vector of variables either by minimizing the quadratic model subject to a trust region bound, or by a procedure that should improve the accuracy of the model. The paper addresses the initial positions of the interpolation points and the adjustment of trust region radii.

The algorithm works explicitly with the Lagrange functions of the interpolation equations; their coefficients are therefore updated whenever an interpolation point is moved. The Lagrange functions assist the model-improvement procedure and also provide an estimate of the error of the quadratic approximation to the function being minimized. Results are reported to be very promising for functions with fewer than twenty variables.
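The central idea above, building a quadratic model of the objective purely from function values at interpolation points, can be illustrated with a minimal sketch. This is not Powell's implementation (UOBYQA maintains the Lagrange functions and updates coefficients incrementally); it simply solves the (n+1)(n+2)/2 interpolation equations directly for a two-variable example, with a hypothetical test function:

```python
import numpy as np

def quadratic_basis(x):
    # Monomial basis of a 2-variable quadratic: 1, x1, x2, x1^2, x1*x2, x2^2.
    # A full quadratic in n variables has (n+1)(n+2)/2 coefficients (here 6).
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2])

def fit_quadratic_model(points, fvals):
    # Solve the interpolation equations: one linear equation per point,
    # forcing the model to match the objective at every interpolation point.
    A = np.array([quadratic_basis(p) for p in points])
    return np.linalg.solve(A, fvals)

def model_value(coeffs, x):
    return float(coeffs @ quadratic_basis(x))

# Hypothetical objective used only for this illustration.
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * x[1] ** 2

# Six points whose basis matrix is nonsingular (a "poised" set).
pts = [np.array(p, dtype=float)
       for p in [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]]
coeffs = fit_quadratic_model(pts, np.array([f(p) for p in pts]))

# Since f is itself quadratic, the interpolation model reproduces it
# exactly, even at a point not used in the fit.
print(abs(model_value(coeffs, np.array([0.5, 0.5])) - f([0.5, 0.5])) < 1e-10)
```

In the actual algorithm the model is then minimized inside a trust region to propose the next point, and the Lagrange functions of this linear system guide which interpolation point to replace when the geometry of the set degrades.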
References in zbMATH (referenced in 65 articles, 1 standard article)
Showing results 1 to 20 of 65.
Sorted by year:
- Braun, Phillip; Hare, Warren; Jarry-Bolduc, Gabriel: Limiting behavior of derivative approximation techniques as the number of points tends to infinity on a fixed interval in (\mathbb{R}) (2021)
- Amos, Brandon D.; Easterling, David R.; Watson, Layne T.; Thacker, William I.; Castle, Brent S.; Trosset, Michael W.: Algorithm 1007: QNSTOP -- quasi-Newton algorithm for stochastic optimization (2020)
- Dai, Yu-Hong; Jarre, Florian; Lieder, Felix: On the existence of affine invariant descent directions (2020)
- Hare, Warren: A discussion on variational analysis in derivative-free optimization (2020)
- Manno, Andrea; Amaldi, Edoardo; Casella, Francesco; Martelli, Emanuele: A local search method for costly black-box problems and its application to CSP plant start-up optimization refinement (2020)
- Sauk, Benjamin; Ploskas, Nikolaos; Sahinidis, Nikolaos: GPU parameter tuning for tall and skinny dense linear least squares problems (2020)
- Ahmadvand, Mohammad; Esmaeilbeigi, Mohsen; Kamandi, Ahmad; Yaghoobi, Farajollah Mohammadi: An improved hybrid-ORBIT algorithm based on point sorting and MLE technique (2019)
- Berahas, Albert S.; Byrd, Richard H.; Nocedal, Jorge: Derivative-free optimization of noisy functions via quasi-Newton methods (2019)
- Cartis, Coralia; Roberts, Lindon: A derivative-free Gauss-Newton method (2019)
- Larson, Jeffrey; Menickelly, Matt; Wild, Stefan M.: Derivative-free optimization methods (2019)
- Audet, Charles; Ihaddadene, Amina; Le Digabel, Sébastien; Tribes, Christophe: Robust optimization of noisy blackbox problems using the mesh adaptive direct search algorithm (2018)
- Chen, R.; Menickelly, M.; Scheinberg, K.: Stochastic optimization using a trust-region method and random models (2018)
- Gobbi, Paula E.: Childcare and commitment within households (2018)
- He, Xinyu; Hu, Yangzhou; Powell, Warren B.: Optimal learning for nonlinear parametric belief models over multidimensional continuous spaces (2018)
- Maggiar, Alvaro; Wächter, Andreas; Dolinskaya, Irina S.; Staum, Jeremy: A derivative-free trust-region algorithm for the optimization of functions smoothed via Gaussian convolution using adaptive multiple importance sampling (2018)
- Shashaani, Sara; Hashemi, Fatemeh S.; Pasupathy, Raghu: ASTRO-DF: a class of adaptive sampling trust-region algorithms for derivative-free stochastic optimization (2018)
- Zhou, Zhe; Bai, Fusheng: An adaptive framework for costly black-box global optimization based on radial basis function interpolation (2018)
- C. Cartis; L. Roberts: A Derivative-Free Gauss-Newton Method (2017) arXiv
- Hare, W.: Compositions of convex functions and fully linear models (2017)
- Huang, Yunqing; Jiang, Kai: Hill-climbing algorithm with a stick for unconstrained optimization problems (2017)