test_opt
test_opt,
a Python code which
defines test problems for the scalar function optimization problem.
The scalar function optimization problem is to find a value of the
N-dimensional vector X that minimizes a given scalar function F(X).
Unlike a least squares problem, F(X) is not assumed to be a sum of
squares of other functions, and the minimum function value is not
guaranteed to be zero.
Any system of M nonlinear equations in N unknowns can be turned into
a scalar optimization problem: one way is to define F(X) as the sum of
the squares of the original nonlinear functions, so that a minimizer of
F minimizes the sum of squared residuals. Because this process involves
squaring, it can be less accurate than dealing directly with the original
functions: the derived optimization problem may be more convenient to
solve, but may give less accurate results than applying a nonlinear
solver to the original system.
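As a minimal sketch of this conversion, assuming SciPy is available: the
two residual functions below are illustrative, chosen so that their sum
of squares reproduces the Himmelblau function listed further down.

    import numpy as np
    from scipy.optimize import minimize

    # Two nonlinear equations r1(x) = 0, r2(x) = 0 in two unknowns;
    # their sum of squares is the Himmelblau function.
    def residuals(x):
        r1 = x[0]**2 + x[1] - 11.0
        r2 = x[0] + x[1]**2 - 7.0
        return np.array([r1, r2])

    # The derived scalar objective F(X).
    def F(x):
        r = residuals(x)
        return np.dot(r, r)

    result = minimize(F, x0=np.array([0.0, 0.0]))
    print(result.x, result.fun)   # a minimizer with F (nearly) zero

Here the minimizer found also has F = 0, so it is a root of the original
system; as noted above, on harder problems a dedicated nonlinear solver
applied to the residuals directly may give more accurate results.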
If a function F(X) is differentiable, then at an optimum the
gradient vector must vanish. Thus it is also possible to start with an
optimization problem involving F(X) and turn it into the problem of
seeking a zero of the nonlinear system formed by the gradient
of F. Of course, the gradient must be zero at a minimum, but
the converse does not hold; unless we know more about F, it is not
safe to replace the optimization problem by the solution of a
nonlinear system.
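A minimal sketch of this reverse direction, again assuming SciPy: the
gradient of the same F is handed to a nonlinear system solver. Any point
it returns is only a stationary point, so without further knowledge of F
it cannot be declared a minimum.

    import numpy as np
    from scipy.optimize import root

    # Gradient of F(x,y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2.
    def gradF(x):
        g0 = 4.0 * x[0] * (x[0]**2 + x[1] - 11.0) + 2.0 * (x[0] + x[1]**2 - 7.0)
        g1 = 2.0 * (x[0]**2 + x[1] - 11.0) + 4.0 * x[1] * (x[0] + x[1]**2 - 7.0)
        return np.array([g0, g1])

    sol = root(gradF, x0=np.array([1.0, 1.0]))
    print(sol.x)   # a stationary point of F: minimum, maximum, or saddle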
For each test problem, routines are provided to evaluate the function,
gradient vector, and Hessian matrix. Routines are also provided to
report the number of variables, the problem title, a suitable starting
point, and a minimizing solution, if known.
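The following self-contained miniature illustrates the shape of such an
interface, using the De Jong F1 "sphere" function from the list below;
the routine names here are invented for this sketch, and the library's
actual names and signatures may differ.

    import numpy as np

    # Illustrative per-problem routines (names are hypothetical, not the
    # library's confirmed interface), for the De Jong F1 sphere function.
    def p_title():
        return "De Jong Function F1"

    def p_n():
        return 3                        # number of variables

    def p_start(n):
        return 2.0 * np.ones(n)         # a suitable starting point

    def p_f(n, x):
        return np.sum(x**2)             # function value

    def p_g(n, x):
        return 2.0 * x                  # gradient vector

    def p_h(n, x):
        return 2.0 * np.eye(n)          # Hessian matrix

    n = p_n()
    x = p_start(n)
    print(p_title(), p_f(n, x))         # minimizer is x = 0, where F = 0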
The functions defined include:
- The Fletcher-Powell helical valley function, N = 3.
- The Biggs EXP6 function, N = 6.
- The Gaussian function, N = 3.
- The Powell badly scaled function, N = 2.
- The Box 3-dimensional function, N = 3.
- The variably dimensioned function, 1 <= N.
- The Watson function, 2 <= N.
- The penalty function #1, 1 <= N.
- The penalty function #2, 1 <= N.
- The Brown badly scaled function, N = 2.
- The Brown and Dennis function, N = 4.
- The Gulf R&D function, N = 3.
- The trigonometric function, 1 <= N.
- The extended Rosenbrock parabolic valley function, 1 <= N (see the sketch after this list).
- The extended Powell singular quartic function, 4 <= N.
- The Beale function, N = 2.
- The Wood function, N = 4.
- The Chebyquad function, 1 <= N.
- Leon's cubic valley function, N = 2.
- Gregory and Karney's Tridiagonal Matrix Function, 1 <= N.
- The Hilbert function, 1 <= N.
- The De Jong Function F1, N = 3.
- The De Jong Function F2, N = 2.
- The De Jong Function F3 (discontinuous), N = 5.
- The De Jong Function F4 (Gaussian noise), N = 30.
- The De Jong Function F5, N = 2.
- The Schaffer Function F6, N = 2.
- The Schaffer Function F7, N = 2.
- The Goldstein Price Polynomial, N = 2.
- The Branin RCOS Function, N = 2.
- The Shekel SQRN5 Function, N = 4.
- The Shekel SQRN7 Function, N = 4.
- The Shekel SQRN10 Function, N = 4.
- The Six-Hump Camel-Back Polynomial, N = 2.
- The Shubert Function, N = 2.
- The Stuckman Function, N = 2.
- The Easom Function, N = 2.
- The Bohachevsky Function #1, N = 2.
- The Bohachevsky Function #2, N = 2.
- The Bohachevsky Function #3, N = 2.
- The Colville Polynomial, N = 4.
- The Powell 3D function, N = 3.
- The Himmelblau function, N = 2.
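As a richer worked instance, here is a minimal sketch of the extended
Rosenbrock function in its widely used chained form, with its gradient;
this sketch assumes N >= 2, and the exact variant and scaling implemented
by the library may differ.

    import numpy as np

    # Chained Rosenbrock: F(x) = sum over i of
    #   100*(x[i+1] - x[i]^2)^2 + (1 - x[i])^2,
    # minimized at x = (1, 1, ..., 1) with F = 0.
    def rosenbrock_f(x):
        x = np.asarray(x, dtype=float)
        return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

    def rosenbrock_g(x):
        x = np.asarray(x, dtype=float)
        g = np.zeros_like(x)
        t = x[1:] - x[:-1]**2
        g[:-1] += -400.0 * x[:-1] * t - 2.0 * (1.0 - x[:-1])
        g[1:] += 200.0 * t
        return g

    print(rosenbrock_f([-1.2, 1.0]))    # 24.2 at the classic start point
    print(rosenbrock_g(np.ones(5)))     # zero gradient at the minimizer

The long curved valley leading to the minimizer is what makes this
function a standard stress test for descent methods.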
Licensing:
The information on this web page is distributed under the MIT license.
Languages:
test_opt is available in a Fortran90 version, a MATLAB version,
an Octave version, and a Python version.
Related Data and Programs:
asa047,
a Python code which
minimizes a scalar function of several variables using the Nelder-Mead
algorithm.
compass_search,
a Python code which
seeks the minimizer of a scalar function of several variables
using compass search, a direct search algorithm that
does not use derivatives.
polynomials,
a Python code which
defines multivariate polynomials over rectangular domains, for
which certain information is to be determined, such as the maximum
and minimum values.
praxis,
a Python code which
minimizes a scalar function of several variables, without
requiring derivative information,
by Richard Brent.
test_opt_con,
a Python code which
defines test problems for the minimization of a scalar function
of several variables, with the search constrained
to lie within a specified hyper-rectangle.
test_optimization,
a Python code which
defines test problems for the minimization of a scalar function
of several variables, as described by Molga and Smutnicki.
Reference:
- Evelyn Beale, On an Iterative Method for Finding a Local Minimum of a Function of More than One Variable, Technical Report 25, Statistical Techniques Research Group, Princeton University, 1958.
- Richard Brent, Algorithms for Minimization without Derivatives, Dover, 2002, ISBN: 0-486-41998-3, LC: QA402.5.B74.
- John Dennis, David Gay, Phuong Vu, A new nonlinear equations test problem, Technical Report 83-16, Mathematical Sciences Department, Rice University, 1983.
- John Dennis, Robert Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, SIAM, 1996, ISBN13: 978-0-898713-64-0, LC: QA402.5.D44.
- Noel deVilliers, David Glasser, A continuation method for nonlinear regression, SIAM Journal on Numerical Analysis, Volume 18, 1981, pages 1139-1154.
- Chris Fraley, Solution of nonlinear least-squares problems, Technical Report STAN-CS-1165, Computer Science Department, Stanford University, 1987.
- Chris Fraley, Software performance on nonlinear least-squares problems, Technical Report SOL 88-17, Systems Optimization Laboratory, Department of Operations Research, Stanford University, 1988.
- David Himmelblau, Applied Nonlinear Programming, McGraw Hill, 1972, ISBN13: 978-0070289215, LC: T57.8.H55.
- A Leon, A Comparison of Eight Known Optimizing Procedures, in Recent Advances in Optimization Techniques, edited by Abraham Lavi, Thomas Vogl, Wiley, 1966.
- JJ McKeown, Specialized versus general-purpose algorithms for functions that are sums of squared terms, Mathematical Programming, Volume 9, 1975, pages 57-68.
- JJ McKeown, On algorithms for sums of squares problems, in Towards Global Optimization, edited by L Dixon, Gabor Szego, North-Holland, 1975, pages 229-257.
- Zbigniew Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Third Edition, Springer, 1996, ISBN: 3-540-60676-9, LC: QA76.618.M53.
- Jorge More, Burton Garbow, Kenneth Hillstrom, Testing unconstrained optimization software, ACM Transactions on Mathematical Software, Volume 7, Number 1, March 1981, pages 17-41.
- Jorge More, Burton Garbow, Kenneth Hillstrom, Algorithm 566: Fortran Subroutines for Testing Unconstrained Optimization Software, ACM Transactions on Mathematical Software, Volume 7, Number 1, March 1981, pages 136-140.
- Michael Powell, An Efficient Method for Finding the Minimum of a Function of Several Variables Without Calculating Derivatives, Computer Journal, Volume 7, Number 2, 1964, pages 155-162.
- Douglas Salane, A continuation approach for solving large residual nonlinear least squares problems, SIAM Journal on Scientific and Statistical Computing, Volume 8, 1987, pages 655-671.
Source Code:
Last revised on 10 February 2026.