DFO

DFO is a Fortran package for solving general nonlinear optimization problems with the following characteristics: they are relatively small scale (fewer than 100 variables), their objective function is relatively expensive to compute, and derivatives of the objective are not available and cannot be estimated efficiently. There may also be noise in the function evaluations. Such optimization problems arise, for example, in engineering design, where evaluating the objective function means running a simulation package that is treated as a black box.
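To make the "no derivatives, only function values" setting concrete, here is a toy sketch of a derivative-free minimizer in Python. It uses simple compass (coordinate) search rather than the interpolation-based trust-region algorithm that DFO actually implements, and the function and tolerances are illustrative assumptions, not part of the package:

```python
# Minimal compass (coordinate) search: illustrates derivative-free
# optimization in general; this is NOT the DFO algorithm itself.

def compass_search(f, x0, step=1.0, tol=1e-6, max_evals=10_000):
    """Minimize f using only function values (no derivatives)."""
    x = list(x0)
    fx = f(x)
    evals = 1
    n = len(x)
    while step > tol and evals < max_evals:
        improved = False
        for i in range(n):
            for s in (+step, -step):
                y = x[:]          # trial point along coordinate i
                y[i] += s
                fy = f(y)
                evals += 1
                if fy < fx:       # accept the first improving point
                    x, fx = y, fy
                    improved = True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5           # no direction improved: shrink the step
    return x, fx

# Treat a smooth quadratic as a "black box": only values of f are used.
xmin, fmin = compass_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                            [0.0, 0.0])
```

Methods like DFO improve on such direct searches by building a quadratic interpolation model of the objective from the points already evaluated, which matters when each evaluation is an expensive simulation run.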


References in zbMATH (referenced in 127 articles, 1 standard article)

Showing results 41 to 60 of 127.
Sorted by year (citations)
  1. Xue, Dan; Sun, Wenyu: On convergence analysis of a derivative-free trust region algorithm for constrained optimization with separable structure (2014)
  2. Zhang, Zaikun: Sobolev seminorm of quadratic functions with applications to derivative-free optimization (2014)
  3. Zhou, Qinghua; Geng, Yan: Revising two trust region subproblems for unconstrained derivative free methods (2014)
  4. Conejo, P. D.; Karas, E. W.; Pedroso, L. G.; Ribeiro, A. A.; Sachine, M.: Global convergence of trust-region algorithms for convex constrained minimization without derivatives (2013)
  5. Martínez, J. M.; Sobral, F. N. C.: Constrained derivative-free optimization on thin domains (2013)
  6. Rios, Luis Miguel; Sahinidis, Nikolaos V.: Derivative-free optimization: a review of algorithms and comparison of software implementations (2013)
  7. Zhao, Hui; Li, Gaoming; Reynolds, Albert C.; Yao, Jun: Large-scale history matching with quadratic interpolation models (2013)
  8. Bandeira, A. S.; Scheinberg, K.; Vicente, L. N.: Computation of sparse low degree interpolating polynomials and their application to derivative-free optimization (2012)
  9. Powell, M. J. D.: On the convergence of trust region algorithms for unconstrained minimization without derivatives (2012)
  10. Zhang, Hongchao; Conn, Andrew R.: On the local convergence of a derivative-free algorithm for least-squares minimization (2012)
  11. Zhang, Lipu; Xu, Yinghong; Liu, Yousong: An elite decision making harmony search algorithm for optimization problem (2012)
  12. Arouxét, Ma. Belén; Echebest, Nélida; Pilotta, Elvio A.: Active-set strategy in Powell’s method for optimization without derivatives (2011)
  13. Gratton, Serge; Toint, Philippe L.; Tröltzsch, Anke: An active-set trust-region method for derivative-free nonlinear bound-constrained optimization (2011)
  14. Jaberipour, Majid; Khorram, Esmaile; Karimi, Behrooz: Particle swarm algorithm for solving systems of nonlinear equations (2011)
  15. Liu, Qunfeng: Two minimal positive bases based direct search conjugate gradient methods for computationally expensive functions (2011)
  16. Regis, Rommel G.: Stochastic radial basis function algorithms for large-scale optimization involving expensive black-box objective and constraint functions (2011)
  17. Scott, Warren; Frazier, Peter; Powell, Warren: The correlated knowledge gradient for simulation optimization of continuous parameters using Gaussian process regression (2011)
  18. Wild, Stefan M.; Shoemaker, Christine: Global convergence of radial basis function trust region derivative-free algorithms (2011)
  19. Auger, Anne; Teytaud, Olivier: Continuous lunches are free plus the design of optimal optimization algorithms (2010)
  20. Colson, Benoît; Bruyneel, Michaël; Grihon, Stéphane; Raick, Caroline; Remouchamps, Alain: Optimization methods for advanced design of aircraft panels: a comparison (2010)