KELLEY

Iterative Methods for Optimization

This book gives an introduction to optimization methods for unconstrained and bound constrained minimization problems. The style of the book is probably best described by the following quote from the book's preface: "… we treat a small number of methods in depth, giving a less detailed description of only a few […]. We aim for clarity and brevity rather than complete generality and confine our scope to algorithms that are easy to implement (by the reader!) and understand."

The book is partitioned into two parts. The first part, occupying approximately 100 pages, is devoted to the optimization of smooth functions. The methods studied in this part rely on the availability and accuracy of first-order, and sometimes also second-order, derivatives of the objective function. The first part contains five chapters. Chapter 1 presents basic concepts; it also introduces a parameter identification problem and a discretized optimal control problem, both of which are used to demonstrate all methods discussed in the first part. Chapter 2 studies the local convergence of Newton's method, inexact Newton methods, and the Gauss-Newton method for the solution of nonlinear least squares problems; both overdetermined and underdetermined nonlinear least squares problems are considered. Chapter 3 is devoted to line-search and trust-region methods, which are used to globalize convergence, i.e., to remove the restriction that the starting point of the optimization iteration be sufficiently close to a solution.

The BFGS method is studied in Chapter 4: a local convergence analysis is provided, implementation details are discussed, and other quasi-Newton methods are sketched. The last chapter of the first part, Chapter 5, studies projection methods for the solution of bound constrained problems. All chapters conclude with a demonstration of the methods discussed in the respective chapter, using the parameter identification problem and the discretized optimal control problem introduced in Chapter 1, and with a set of exercises.

The second part of the book, which is approximately 50 pages long, deals with the optimization of noisy functions. Such optimization problems arise, e.g., when the evaluation of the objective function involves computer simulations. In such cases the noise often introduces artificial minimizers, and gradient information, even if available, cannot be expected to be reliable. This second part contains three chapters. Chapter 6 provides a discussion of noisy functions, basic concepts, and three simple examples that are later used to demonstrate the behavior of the optimization algorithms. Chapter 7 introduces implicit filtering, a technique due to the author and his group. Implicit filtering methods use finite difference approximations of the gradient that are adjusted to the noise level in the function. Direct search algorithms, including the Nelder-Mead, multidirectional search, and Hooke-Jeeves algorithms, are discussed in Chapter 8. Again, the latter two chapters conclude with a numerical demonstration of the methods discussed in the respective chapter and with a set of exercises.

The treatment of both optimization methods for smooth functions and methods for noisy functions is a unique feature of this book. MATLAB implementations of all algorithms discussed in the book are easily accessible from the author's or the publisher's web page.
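To make the globalization idea from the first part concrete, the following is a minimal sketch (in Python, not the book's MATLAB codes) of a Newton iteration safeguarded by an Armijo backtracking line search. The function names, tolerances, and the fallback to steepest descent are illustrative choices for this sketch, not taken from the book.

```python
# Minimal sketch: Newton's method globalized by an Armijo backtracking line search,
# for a smooth objective with available gradient and Hessian. Illustrative only.
import numpy as np

def newton_armijo(f, grad, hess, x0, tol=1e-8, max_iter=100, alpha=1e-4):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:            # first-order optimality test
            break
        try:
            d = np.linalg.solve(hess(x), -g)    # Newton direction
        except np.linalg.LinAlgError:
            d = -g                              # singular Hessian: fall back to steepest descent
        if g @ d >= 0:                          # make sure we have a descent direction
            d = -g
        t = 1.0
        while t > 1e-12 and f(x + t * d) > f(x) + alpha * t * (g @ d):
            t *= 0.5                            # Armijo sufficient-decrease backtracking
        x = x + t * d
    return x

# Usage on the Rosenbrock test function; should converge to approximately [1, 1].
rosen   = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
rosen_g = lambda x: np.array([-400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
                              200 * (x[1] - x[0]**2)])
rosen_h = lambda x: np.array([[1200 * x[0]**2 - 400 * x[1] + 2, -400 * x[0]],
                              [-400 * x[0], 200.0]])
print(newton_armijo(rosen, rosen_g, rosen_h, [-1.2, 1.0]))
```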
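In the same spirit, the sketch below imitates the flavor of the second part: steepest descent driven by central-difference gradients whose stencil size is kept above the noise level and reduced only when the current scale yields no further progress. This is only a rough illustration of the idea behind implicit filtering, not Kelley's algorithm; the scale schedule and all parameters are invented for the example.

```python
# Minimal sketch, in the spirit of (but not identical to) implicit filtering:
# difference-gradient descent with a decreasing list of stencil sizes. Illustrative only.
import numpy as np

def diff_grad(f, x, h):
    """Central-difference gradient with increment h (kept well above the noise level)."""
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x)); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def noisy_descent(f, x0, scales=(1.0, 0.5, 0.25, 0.125), max_iter=200, alpha=1e-4):
    x = np.asarray(x0, dtype=float)
    for h in scales:                          # work through decreasing stencil sizes
        for _ in range(max_iter):
            g = diff_grad(f, x, h)
            d = -g
            t, ok = 1.0, False
            while t > 1e-8:                   # backtracking on the difference-gradient direction
                if f(x + t * d) < f(x) - alpha * t * (g @ g):
                    x, ok = x + t * d, True
                    break
                t *= 0.5
            if not ok:                        # no decrease at this scale: shrink the stencil
                break
    return x

# Usage: a smooth bowl plus a small high-frequency term that creates artificial
# local minimizers; the iteration should end up near the origin.
noisy = lambda x: np.sum(x**2) + 0.01 * np.sum(np.cos(50 * x))
print(noisy_descent(noisy, np.array([2.0, -3.0])))
```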


References in zbMATH (referenced in 556 articles)

Showing results 521 to 540 of 556, sorted by year (citations).


  1. Jay, Laurent O.: Inexact simplified Newton iterations for implicit Runge-Kutta methods (2000)
  2. Karátson, J.: Gradient method in Sobolev spaces for nonlocal boundary-value problems (2000)
  3. Martínez, José Mario: Practical quasi-Newton methods for solving nonlinear systems (2000)
  4. Ng, Michael K.; Plemmons, Robert J.; Pimentel, Felipe: A new approach to constrained total least squares image restoration (2000)
  5. Nievergelt, Yves: A tutorial history of least squares with applications to astronomy and geodesy (2000)
  6. Shih, Yin-Tzer; Elman, Howard C.: Iterative methods for stabilized discrete convection-diffusion problems (2000)
  7. Smith, I. M.: A general purpose system for finite element analyses in parallel (2000)
  8. Sommariva, Alvise; Vianello, Marco: Computing positive fixed-points of decreasing Hammerstein operators by relaxed iterations (2000)
  9. Wheeler, Mary F.; Yotov, Ivan: Multigrid on the interface for mortar mixed finite element methods for elliptic problems (2000)
  10. Wiegmann, Andreas; Bube, Kenneth P.: The explicit-jump immersed interface method: Finite difference methods for PDEs with piecewise smooth solutions (2000)
  11. Wright, Margaret H.: What, if anything, is new in optimization? (2000)
  12. Yamamoto, Tetsuro: Historical developments in convergence analysis for Newton’s and Newton-like methods (2000)
  13. Banoczi, J. M.; Kelley, C. T.: A fast multilevel algorithm for the solution of nonlinear systems of conductive-radiative heat transfer equations in two space dimensions (1999)
  14. Booth, Michael J.; Schlijper, A. G.; Scales, L. E.; Haymet, A. D. J.: Efficient solution of liquid state integral equations using the Newton-GMRES algorithm (1999)
  15. Chan, Tony F.; Golub, Gene H.; Mulet, Pep: A nonlinear primal-dual method for total variation-based image restoration (1999)
  16. Frauendiener, Jörg: Calculating initial data for the conformal Einstein equations by pseudo-spectral methods (1999)
  17. Ganesh, M.; Steinbach, O.: Nonlinear boundary integral equations for harmonic problems (1999)
  18. Kelley, C. T.: Iterative methods for optimization (1999)
  19. Kelley, C. T.: Detection and remediation of stagnation in the Nelder-Mead algorithm using a sufficient decrease condition (1999)
  20. Kelley, C. T.; Sachs, E. W.: A trust region method for parabolic boundary control problems (1999)
