test_opt


test_opt, a MATLAB code which defines test problems for the scalar function optimization problem.

The scalar function optimization problem is to find a value of the N-dimensional vector X which minimizes the value of a given scalar function F(X). In general, F(X) is not defined as a sum of squares of other functions, and the minimum function value is not guaranteed to be zero.

Any system of M nonlinear equations in N unknowns can be turned into a scalar optimization problem. One way to do this is to define the function F(X) to be the sum of the squares of the original nonlinear functions; a minimizer of F then minimizes the sum of the squared residuals. Since this process involves squaring, it can be less accurate than dealing directly with the original nonlinear functions: that is, the derived optimization problem may be more convenient to solve, but may provide less accurate results than applying a nonlinear solver to the original system.
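
As a minimal sketch of this conversion (using MATLAB's built-in fminsearch, and a made-up two-equation system that is not one of this library's test problems), the derived objective is just the sum of squared residuals:

    % Residuals of a small nonlinear system r(x) = 0 (illustrative only).
    r = @(x) [ x(1)^2 + x(2)^2 - 4.0;   % circle of radius 2
               x(2) - x(1)^2 ];         % parabola
    % Derived scalar objective: the sum of squared residuals.
    f = @(x) sum ( r(x).^2 );
    % Minimize with a derivative-free method.
    x0 = [ 1.0; 1.0 ];
    [ x, fval ] = fminsearch ( f, x0 );

A zero of the original system is a minimizer of f with fval = 0; if fval is merely small but nonzero, the system may have no exact solution.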

If a function F(X) is differentiable, then at an optimum the gradient vector must vanish. Thus it is also possible to start with an optimization problem involving F(X) and turn it into the problem of seeking a zero of the nonlinear system formed by the gradient of F. Of course, the gradient must be zero at a minimum, but the converse does not hold: the gradient also vanishes at maxima and saddle points. Thus, unless we know more about F, it is not safe to replace the optimization problem by a nonlinear equation solution.
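
For instance, here is a minimal sketch (not part of this library) of a pure Newton iteration that seeks a zero of the gradient of the two-dimensional Rosenbrock function, with the gradient and Hessian written out analytically. Note that the iteration only finds a stationary point; the Hessian must be inspected to confirm a minimum:

    % Gradient and Hessian of f(x) = 100*(x2-x1^2)^2 + (1-x1)^2.
    g = @(x) [ -400*x(1)*( x(2) - x(1)^2 ) - 2*( 1 - x(1) );
                200*( x(2) - x(1)^2 ) ];
    h = @(x) [ 1200*x(1)^2 - 400*x(2) + 2.0, -400*x(1);
               -400*x(1),                     200.0 ];
    x = [ -1.2; 1.0 ];           % standard starting point
    for it = 1 : 20
      x = x - h(x) \ g(x);       % Newton step toward g(x) = 0
    end
    % x should now be near [1;1]; confirm a minimum by checking
    % that the eigenvalues of h(x) are positive.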

For each test problem, routines are provided to evaluate the function, gradient vector, and Hessian matrix. Routines are also provided to report the number of variables, the problem title, a suitable starting point, and a minimizing solution, if known.
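
A hedged usage sketch follows; the p00_* names below follow the calling convention used by this family of test libraries, but the exact signatures should be checked against the source code:

    problem = 14;                           % e.g., the extended Rosenbrock function
    n = 2;                                  % number of variables
    title = p00_title ( problem );          % problem title
    x0 = p00_start ( problem, n );          % suggested starting point
    f = p00_f ( problem, n, x0 );           % function value at x0
    g = p00_g ( problem, n, x0 );           % gradient vector at x0
    h = p00_h ( problem, n, x0 );           % Hessian matrix at x0
    [ know, x ] = p00_sol ( problem, n );   % minimizing solution, if known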

The functions defined include:

  1. The Fletcher-Powell helical valley function,
    N = 3.
  2. The Biggs EXP6 function,
    N = 6.
  3. The Gaussian function,
    N = 3.
  4. The Powell badly scaled function,
    N = 2.
  5. The Box 3-dimensional function,
    N = 3.
  6. The variably dimensioned function,
    1 <= N.
  7. The Watson function,
    2 <= N.
  8. The penalty function #1,
    1 <= N.
  9. The penalty function #2,
    1 <= N.
  10. The Brown badly scaled function,
    N = 2.
  11. The Brown and Dennis function,
    N = 4.
  12. The Gulf R&D function,
    N = 3.
  13. The trigonometric function,
    1 <= N.
  14. The extended Rosenbrock parabolic valley function,
    1 <= N.
  15. The extended Powell singular quartic function,
    4 <= N.
  16. The Beale function,
    N = 2.
  17. The Wood function,
    N = 4.
  18. The Chebyquad function,
    1 <= N.
  19. Leon's cubic valley function,
    N = 2.
  20. Gregory and Karney's Tridiagonal Matrix Function,
    1 <= N.
  21. The Hilbert function,
    1 <= N.
  22. The De Jong Function F1,
    N = 3.
  23. The De Jong Function F2,
    N = 2.
  24. The De Jong Function F3 (discontinuous),
    N = 5.
  25. The De Jong Function F4 (Gaussian noise),
    N = 30.
  26. The De Jong Function F5,
    N = 2.
  27. The Schaffer Function F6,
    N = 2.
  28. The Schaffer Function F7,
    N = 2.
  29. The Goldstein Price Polynomial,
    N = 2.
  30. The Branin RCOS Function,
    N = 2.
  31. The Shekel SQRN5 Function,
    N = 4.
  32. The Shekel SQRN7 Function,
    N = 4.
  33. The Shekel SQRN10 Function,
    N = 4.
  34. The Six-Hump Camel-Back Polynomial,
    N = 2.
  35. The Shubert Function,
    N = 2.
  36. The Stuckman Function,
    N = 2.
  37. The Easom Function,
    N = 2.
  38. The Bohachevsky Function #1,
    N = 2.
  39. The Bohachevsky Function #2,
    N = 2.
  40. The Bohachevsky Function #3,
    N = 2.
  41. The Colville Polynomial,
    N = 4.
  42. The Powell 3D function,
    N = 3.
  43. The Himmelblau function (sketched below),
    N = 2.
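
As an indication of what a single entry provides, here is a minimal self-contained sketch of problem 43, the Himmelblau function, with its gradient; the formulas are the standard ones, not a copy of the library source. The function has four global minima with value 0, one of them at (3,2):

    % The Himmelblau function and its gradient.
    f = @(x) ( x(1)^2 + x(2) - 11 )^2 + ( x(1) + x(2)^2 - 7 )^2;
    g = @(x) [ 4*x(1)*( x(1)^2 + x(2) - 11 ) + 2*( x(1) + x(2)^2 - 7 );
               2*( x(1)^2 + x(2) - 11 ) + 4*x(2)*( x(1) + x(2)^2 - 7 ) ];
    % Quick check at the known minimizer (3,2): f = 0 and the gradient vanishes.
    x = [ 3.0; 2.0 ];
    fprintf ( '  f = %g, norm(g) = %g\n', f ( x ), norm ( g ( x ) ) );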

Licensing:

The computer code and data files described and made available on this web page are distributed under the MIT license.

Languages:

test_opt is available in a FORTRAN90 version and a MATLAB version.

Related Data and Programs:

asa047, a MATLAB code which minimizes a scalar function of several variables using the Nelder-Mead algorithm.

compass_search, a MATLAB code which seeks the minimizer of a scalar function of several variables using compass search, a direct search algorithm that does not use derivatives.

nelder_mead, a MATLAB code which minimizes a scalar function of several variables using the Nelder-Mead algorithm.

polynomials, a MATLAB code which defines multivariate polynomials over rectangular domains, for which certain information is to be determined, such as the maximum and minimum values.

praxis, a MATLAB code which minimizes a scalar function of several variables, without requiring derivative information, by Richard Brent.

test_opt_test, a MATLAB code which calls test_opt().

test_opt_con, a MATLAB code which defines test problems for the minimization of a scalar function of several variables, with the search constrained to lie within a specified hyper-rectangle.

test_optimization, a MATLAB code which defines test problems for the minimization of a scalar function of several variables, as described by Molga and Smutnicki.

Reference:

  1. Evelyn Beale,
    On an Iterative Method for Finding a Local Minimum of a Function of More than One Variable,
    Technical Report 25,
    Statistical Techniques Research Group,
    Princeton University, 1958.
  2. Richard Brent,
    Algorithms for Minimization without Derivatives,
    Dover, 2002,
    ISBN: 0-486-41998-3,
    LC: QA402.5.B74.
  3. John Dennis, David Gay, Phuong Vu,
    A new nonlinear equations test problem,
    Technical Report 83-16,
    Mathematical Sciences Department,
    Rice University, 1983.
  4. John Dennis, Robert Schnabel,
    Numerical Methods for Unconstrained Optimization and Nonlinear Equations,
    SIAM, 1996,
    ISBN13: 978-0-898713-64-0,
    LC: QA402.5.D44.
  5. Noel deVilliers, David Glasser,
    A continuation method for nonlinear regression,
    SIAM Journal on Numerical Analysis,
    Volume 18, 1981, pages 1139-1154.
  6. Chris Fraley,
    Solution of nonlinear least-squares problems,
    Technical Report STAN-CS-1165,
    Computer Science Department,
    Stanford University, 1987.
  7. Chris Fraley,
    Software performance on nonlinear least-squares problems,
    Technical Report SOL 88-17,
    Systems Optimization Laboratory,
    Department of Operations Research,
    Stanford University, 1988.
  8. David Himmelblau,
    Applied Nonlinear Programming,
    McGraw Hill, 1972,
    ISBN13: 978-0070289215,
    LC: T57.8.H55.
  9. A Leon,
    A Comparison of Eight Known Optimizing Procedures,
    in Recent Advances in Optimization Techniques,
    edited by Abraham Lavi, Thomas Vogl,
    Wiley, 1966.
  10. JJ McKeown,
    Specialized versus general-purpose algorithms for functions that are sums of squared terms,
    Mathematical Programming,
    Volume 9, 1975, pages 57-68.
  11. JJ McKeown,
    On algorithms for sums of squares problems,
    in Towards Global Optimization,
    edited by L Dixon, Gabor Szego,
    North-Holland, 1975, pages 229-257.
  12. Zbigniew Michalewicz,
    Genetic Algorithms + Data Structures = Evolution Programs,
    Third Edition,
    Springer, 1996,
    ISBN: 3-540-60676-9,
    LC: QA76.618.M53.
  13. Jorge More, Burton Garbow, Kenneth Hillstrom,
    Testing unconstrained optimization software,
    ACM Transactions on Mathematical Software,
    Volume 7, Number 1, March 1981, pages 17-41.
  14. Jorge More, Burton Garbow, Kenneth Hillstrom,
    Algorithm 566: FORTRAN subroutines for testing unconstrained optimization software,
    ACM Transactions on Mathematical Software,
    Volume 7, Number 1, March 1981, pages 136-140.
  15. Michael Powell,
    An Efficient Method for Finding the Minimum of a Function of Several Variables Without Calculating Derivatives,
    Computer Journal,
    Volume 7, Number 2, 1964, pages 155-162.
  16. Douglas Salane,
    A continuation approach for solving large residual nonlinear least squares problems,
    SIAM Journal on Scientific and Statistical Computing,
    Volume 8, 1987, pages 655-671.

Source Code:


Last revised on 30 March 2019.