TOMS611
Unconstrained Minimization
TOMS611
is a FORTRAN90 library which
carries out the unconstrained minimization of
a scalar function, by David Gay.
TOMS611 is a FORTRAN90 implementation of
ACM TOMS algorithm 611.
TOMS611 contains routines for the general unconstrained
minimization of a scalar function of several variables. The routines
use a model/trust-region approach, and the double-dogleg technique
of Dennis and Mei. In cases where the Hessian is not supplied by the
user, the BFGS secant update is used instead.
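To illustrate the idea behind the double-dogleg step, here is a hedged Python sketch (not the library's actual DBDOG routine, which works with a factored Hessian). The bias factor eta = 0.2 + 0.8*gamma is the Dennis-Mei form; every name below is illustrative only.

```python
import numpy as np

def double_dogleg_step(g, B, delta):
    """Illustrative double-dogleg step for the quadratic model
    m(s) = g's + s'Bs/2 inside a trust region of radius delta.
    B must be symmetric positive definite."""
    s_n = -np.linalg.solve(B, g)                # Newton step
    if np.linalg.norm(s_n) <= delta:
        return s_n                              # full Newton step fits
    gBg = g @ B @ g
    s_c = -(g @ g) / gBg * g                    # Cauchy (steepest-descent) step
    gamma = (g @ g) ** 2 / (gBg * -(g @ s_n))   # note g'B^-1 g = -g's_n
    eta = 0.2 + 0.8 * gamma                     # Dennis-Mei bias toward Newton
    if eta * np.linalg.norm(s_n) <= delta:
        return (delta / np.linalg.norm(s_n)) * s_n   # along Newton direction
    if np.linalg.norm(s_c) >= delta:
        return (delta / np.linalg.norm(s_c)) * s_c   # truncated Cauchy step
    # otherwise: intersect segment s_c -> eta*s_n with the trust boundary
    d = eta * s_n - s_c
    a, b, c = d @ d, 2 * (s_c @ d), s_c @ s_c - delta ** 2
    t = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return s_c + t * d
```

Every branch except the full Newton step returns a step of length exactly delta; accepting or rejecting that step and adjusting delta is the job of the assessment logic (ASSST, in this library).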
Three different implementations of the algorithm are available,
which allow the user to supply just the function; the function
and gradient; or the function, gradient, and Hessian.
The user may also choose to supply the information about the
function through subroutines, or to use a version of the algorithm
that employs "reverse communication", allowing the user to
evaluate the function in any suitable way.
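The reverse-communication pattern can be sketched as follows. This is a hypothetical Python illustration built around a toy steepest-descent engine, not the actual interface of the library's reverse-communication routines: the engine never calls user code, it merely records what it needs next, and the caller evaluates that quantity and calls back.

```python
import numpy as np

class RCMinimizer:
    """Toy reverse-communication minimizer (plain steepest descent).
    The object states what it needs via self.request; the caller
    evaluates it at self.x, however it likes, and calls advance()."""

    def __init__(self, x0, step=0.1, tol=1e-8):
        self.x = np.asarray(x0, dtype=float)
        self.step = step
        self.tol = tol
        self.request = 'gradient'   # what the caller must supply next
        self.done = False

    def advance(self, value):
        # 'value' is the requested quantity, evaluated at self.x
        g = np.asarray(value, dtype=float)
        if np.linalg.norm(g) < self.tol:
            self.done = True        # converged: gradient nearly zero
        else:
            self.x = self.x - self.step * g

# Driver loop for f(x) = x'x: the caller owns every evaluation.
m = RCMinimizer([3.0, -2.0])
while not m.done:
    m.advance(2.0 * m.x)            # gradient of x'x, computed by the caller
```

The same loop structure works for any evaluation mechanism (reading a file, calling an external simulator), which is the point of reverse communication.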
The original version of TOMS 611 is available
through ACM:
http://www.acm.org/pubs/calgo
or NETLIB:
http://www.netlib.org/toms/index.html
Languages:
TOMS611 is available in
a FORTRAN77 version and
a FORTRAN90 version.
Related Data and Programs:
BRENT,
a FORTRAN90 library which
contains routines for finding zeroes or minima of a scalar
function of a scalar variable, without the use of derivative information,
by Richard Brent.
BVLS,
a FORTRAN90 library which
applies least squares methods to solve a linear system for which
lower and upper constraints may have been placed on every variable.
COMPASS_SEARCH,
a FORTRAN90 library which
seeks the minimizer of a scalar function of several variables
using compass search, a direct search algorithm that does not use derivatives.
TEST_OPT,
a FORTRAN90 library which
defines test problems for the minimization of a scalar function
of several variables.
TEST_OPTIMIZATION,
a FORTRAN90 library which
defines test problems for the minimization of a scalar function
of several variables, as described by Molga and Smutnicki.
Author:
David Gay
Reference:

Alan Cline, Cleve Moler, Pete Stewart, James Wilkinson,
An Estimate for the Condition Number of a Matrix,
Technical Report TM-310,
Argonne National Laboratory, 1977.

John Dennis, David Gay, Roy Welsch,
Algorithm 573:
An Adaptive Nonlinear Least-Squares Algorithm,
ACM Transactions on Mathematical Software,
Volume 7, Number 3, September 1981, pages 367-383.

John Dennis, Howell Mei,
Two New Unconstrained Optimization Algorithms Which Use
Function and Gradient Values,
Journal of Optimization Theory and Applications,
Volume 28, 1979, pages 453-482.

John Dennis, Jorge More,
Quasi-Newton Methods, Motivation and Theory,
SIAM Review,
Volume 19, Number 1, January 1977, pages 46-89.

David Gay,
Algorithm 611:
Subroutines for Unconstrained
Minimization Using a Model/Trust Region Approach,
ACM Transactions on Mathematical Software,
Volume 9, Number 4, December 1983, pages 503-524.

David Gay,
Computing Optimal Locally Constrained Steps,
SIAM Journal on Scientific and Statistical Computing,
Volume 2, Number 2, June 1981, pages 186-197.

Donald Goldfarb,
Factorized Variable Metric Methods for Unconstrained Optimization,
Mathematics of Computation,
Volume 30, Number 136, October 1976, pages 796-811.

Steven Goldfeld, Richard Quandt, Hale Trotter,
Maximization by Quadratic Hill-Climbing,
Econometrica,
Volume 34, 1966, pages 541-551.

Michael Hebden,
An Algorithm for Minimization Using Exact Second Derivatives,
Technical Report: TP 515,
Theoretical Physics Division,
AERE Harwell, 1973.

Jorge More,
The Levenberg-Marquardt Algorithm, Implementation and Theory,
in Springer Lecture Notes in Mathematics, Number 630,
edited by G Alistair Watson,
Springer, 1978,
LC: QA3.L28 Number 630.

Jorge More, Danny Sorensen,
Computing a Trust Region Step,
Technical Report ANL-81-83,
Argonne National Laboratory, 1981.

Michael Powell,
A Hybrid Method for Nonlinear Equations,
in Numerical Methods for Nonlinear Algebraic Equations,
edited by Philip Rabinowitz,
Gordon and Breach, 1970,
ISBN-13: 978-0677142302,
LC: QA218.N85.

Michael Powell,
A Fortran Subroutine for Solving Systems of Nonlinear
Algebraic Equations,
in Numerical Methods for Nonlinear Algebraic Equations,
edited by Philip Rabinowitz,
Gordon and Breach, 1970,
ISBN-13: 978-0677142302,
LC: QA218.N85.

Pete Stewart,
A Modification of Davidon's Minimization Method to Accept Difference
Approximations of Derivatives,
Journal of the ACM,
Volume 14, Number 1, January 1967, pages 72-83.

Richard Varga,
Minimal Gerschgorin Sets,
Pacific Journal of Mathematics,
Volume 15, 1965, pages 719-729.
Source Code:
Examples and Tests:
List of Routines:

ASSST assesses a candidate step.

DBDOG computes the double-dogleg step.

DEFLT supplies default values to the control vectors IV and V.

DOTPRD returns the inner product of the p-vectors X and Y.

DUPDU updates the scale vector D for HUMSL.

GQTST computes the Goldfeld-Quandt-Trotter step by the More-Hebden technique.

HUMIT carries out HUMSL (unconstrained minimization) iterations, using a Hessian matrix provided by the caller.

HUMSL minimizes a general unconstrained objective function using the analytic gradient and Hessian provided by the user.

ITSUM prints an iteration summary.

LITVMU solves (L**T)*X = Y, where L is an N by N lower triangular matrix stored compactly by rows.

LIVMUL solves L*X = Y, where L is an N by N lower triangular matrix stored compactly by rows.

LSQRT computes rows N1 through N of the Cholesky factor L of A = L*(L**T).

LSVMIN estimates the smallest singular value of a packed lower triangular matrix L.

LTVMUL computes X = (L**T)*Y, where L is an N by N lower triangular matrix stored compactly by rows.

LUPDAT computes LPLUS, the secant update of the Cholesky factor L.

LVMUL computes X = L*Y, where L is an N by N lower triangular matrix stored compactly by rows.

PARCK checks parameters, and prints any changed values.

RELDST computes and returns the relative difference between X and X0.

RMDCON returns machine-dependent constants used by NL2SOL.

SGRAD2 computes a finite-difference gradient by Stewart's scheme.

SLVMUL sets Y = S * X, where S is a P by P symmetric matrix.

SMSNO minimizes a general unconstrained objective function using finite-difference gradients and secant Hessian approximations.

SNOIT is the iteration driver for SMSNO.

STOPX is called to check whether the user has requested an interruption; this default version always returns FALSE.

SUMIT carries out SUMSL (unconstrained minimization) iterations, using double-dogleg/BFGS steps.

SUMSL minimizes a general unconstrained objective function using the analytic gradient and a Hessian approximation from a secant update.

TIMESTAMP prints the current YMDHMS date as a time stamp.

V2NORM returns the 2-norm of the p-vector X, taking care to avoid underflow and overflow.

VAXPY sets W = A*X + Y, where X and Y are p-vectors and A is a scalar.

VCOPY sets Y = X, where X and Y are p-vectors.

VDFLT supplies default values to V.

VSCOPY sets the p-vector Y to the scalar S.

VVMULP sets X(I) = Y(I) * Z(I)**K, 1 <= I <= N, for K = 1 or -1.

WZBFGS computes Y and Z for LUPDAT, corresponding to the BFGS update.
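LUPDAT and WZBFGS maintain the BFGS update in factored (Cholesky) form. For reference, the underlying rank-two formula, sketched here in Python in unfactored form purely as an illustration (not the library's factored algorithm), is:

```python
import numpy as np

def bfgs_update(B, s, y):
    """Rank-two BFGS update of a Hessian approximation B.
    s = x_new - x_old, y = grad_new - grad_old; requires s'y > 0."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
```

By construction the updated matrix maps s to y (the secant condition) and remains symmetric, and it stays positive definite whenever s'y > 0.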
Last revised on 01 January 2008.