UOBYQA: unconstrained optimization by quadratic approximation

A new algorithm for general unconstrained optimization calculations is described. It takes account of the curvature of the objective function by forming quadratic models by interpolation; therefore no first derivatives are required. A typical iteration of the algorithm generates a new vector of variables either by minimizing the quadratic model subject to a trust region bound, or by a procedure that should improve the accuracy of the model. The paper addresses the initial positions of the interpolation points and the adjustment of trust region radii.

The algorithm works explicitly with the Lagrange functions of the interpolation equations, so their coefficients are updated whenever an interpolation point is moved. The Lagrange functions assist the procedure that improves the model, and they also provide an estimate of the error of the quadratic approximation of the function being minimized. Results are reported to be very promising for functions of fewer than twenty variables.
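The core idea of the abstract can be illustrated by a minimal one-dimensional sketch: fit a quadratic model through sample points of the objective (no derivatives needed), then minimize that model within a trust region around the best point. This is only a toy version under simplifying assumptions; UOBYQA itself works in n dimensions and maintains the Lagrange functions of the interpolation system, which this sketch omits. The function and parameter names below are illustrative, not from the paper.

```python
import numpy as np

def quad_trust_region_step(f, points, delta):
    """One derivative-free step: interpolate f at the sample points with
    a quadratic model, then minimize the model within a trust region of
    radius delta around the current best point. No derivatives of f used."""
    xs = np.asarray(points, dtype=float)
    fs = np.array([f(x) for x in xs])
    a, b, c = np.polyfit(xs, fs, 2)            # model m(x) = a x^2 + b x + c
    x_best = xs[np.argmin(fs)]
    lo, hi = x_best - delta, x_best + delta    # trust-region interval
    candidates = [lo, hi]
    if a > 0:                                  # interior minimizer of the model
        x_star = -b / (2.0 * a)
        if lo <= x_star <= hi:
            candidates.append(x_star)
    m = lambda x: (a * x + b) * x + c          # evaluate the quadratic model
    return min(candidates, key=m)

# Usage: minimize f(x) = (x - 3)^2 + 1 without derivatives.
# Since f is itself quadratic, the model is exact and one step
# lands on the true minimizer x = 3.
f = lambda x: (x - 3.0) ** 2 + 1.0
x_new = quad_trust_region_step(f, [0.0, 1.0, 2.0], delta=5.0)
```

In the full algorithm the interpolation set has (n+1)(n+2)/2 points, the trust-region subproblem is solved in n dimensions, and a separate model-improvement step (guided by the Lagrange functions) replaces a poorly placed interpolation point rather than taking a trust-region step.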

References in zbMATH (referenced in 52 articles, 1 standard article)

Showing results 1 to 20 of 52.
Sorted by year (citations)


  1. Audet, Charles; Ihaddadene, Amina; Le Digabel, Sébastien; Tribes, Christophe: Robust optimization of noisy blackbox problems using the mesh adaptive direct search algorithm (2018)
  2. Chen, R.; Menickelly, M.; Scheinberg, K.: Stochastic optimization using a trust-region method and random models (2018)
  3. Gobbi, Paula E.: Childcare and commitment within households (2018)
  4. He, Xinyu; Hu, Yangzhou; Powell, Warren B.: Optimal learning for nonlinear parametric belief models over multidimensional continuous spaces (2018)
  5. Maggiar, Alvaro; Wächter, Andreas; Dolinskaya, Irina S.; Staum, Jeremy: A derivative-free trust-region algorithm for the optimization of functions smoothed via Gaussian convolution using adaptive multiple importance sampling (2018)
  6. Shashaani, Sara; Hashemi, Fatemeh S.; Pasupathy, Raghu: ASTRO-DF: a class of adaptive sampling trust-region algorithms for derivative-free stochastic optimization (2018)
  7. Zhou, Zhe; Bai, Fusheng: An adaptive framework for costly black-box global optimization based on radial basis function interpolation (2018)
  8. Hare, W.: Compositions of convex functions and fully linear models (2017)
  9. Regis, Rommel G.; Wild, Stefan M.: CONORBIT: constrained optimization by radial basis function interpolation in trust regions (2017)
  10. Verdério, Adriano; Karas, Elizabeth W.; Pedroso, Lucas G.; Scheinberg, Katya: On the construction of quadratic models for derivative-free trust-region algorithms (2017)
  11. Chen, Xiaojun; Kelley, C. T.: Optimization with hidden constraints and embedded Monte Carlo computations (2016)
  12. Larson, Jeffrey; Billups, Stephen C.: Stochastic derivative-free optimization using a trust region framework (2016)
  13. Ni, Qin; Jiang, Cui; Liu, Hao: A new direct search method based on separable fractional interpolation model (2016)
  14. Wang, Jueyu; Zhu, Detong: Conjugate gradient path method without line search technique for derivative-free unconstrained optimization (2016)
  15. Ferreira, Priscila S.; Karas, Elizabeth W.; Sachine, Mael: A globally convergent trust-region algorithm for unconstrained derivative-free optimization (2015)
  16. Yuan, Jinyun; Sampaio, Raimundo; Sun, Wenyu; Zhang, Liang: A wedge trust region method with self-correcting geometry for derivative-free optimization (2015)
  17. Yuan, Ya-xiang: Recent advances in trust region algorithms (2015)
  18. Gumma, E. A. E.; Hashim, M. H. A.; Ali, M. Montaz: A derivative-free algorithm for linearly constrained optimization problems (2014)
  19. Zhang, Zaikun: Sobolev seminorm of quadratic functions with applications to derivative-free optimization (2014)
  20. Zhou, Qinghua; Geng, Yan: Revising two trust region subproblems for unconstrained derivative free methods (2014)
