compass_search
compass_search,
an Octave code which
seeks the minimizer of a scalar function of several variables
using compass search, a direct search algorithm that does not
use derivatives.
The algorithm, which goes back to Fermi and Metropolis, is easy to describe.
The algorithm begins with a starting point X, and a step size DELTA.
For each dimension I, the algorithm considers perturbing X(I) by adding
or subtracting DELTA.
If a perturbation is found which decreases the function, this becomes the
new X. Otherwise DELTA is halved.
The iteration halts when DELTA reaches a minimal value.
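The following is a minimal Octave sketch of this iteration, written only to illustrate the description above; the actual compass_search function may differ in details such as argument checking and the order in which the 2*M perturbations are examined.

      function [ x, fx, k ] = compass_search_sketch ( f, m, x, delta_tol, delta, k_max )
      %  Illustrative sketch only; assumes F has the documented form value = f ( m, x ).
        k = 0;
        fx = f ( m, x );
        while ( delta_tol <= delta && k < k_max )
          k = k + 1;
          decrease = false;
      %  Consider perturbing each coordinate by +DELTA and -DELTA.
          for i = 1 : m
            for s = [ +1.0, -1.0 ]
              xd = x;
              xd(i) = xd(i) + s * delta;
              fxd = f ( m, xd );
              if ( fxd < fx )
                x = xd;
                fx = fxd;
                decrease = true;
                break;
              end
            end
            if ( decrease )
              break;
            end
          end
      %  No perturbation decreased the function: halve the step size.
          if ( ~decrease )
            delta = delta / 2.0;
          end
        end
      end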
The algorithm is not guaranteed to find a global minimum; it can easily be attracted
to a local minimum. Moreover, the iteration can diverge if the function decreases
without bound as the argument goes to infinity.
Usage:
[ x, fx, k ] = compass_search ( @f, m, x, delta_tol, delta, k_max )
where
- @f is the "function handle"; that is, either a quoted
expression for the function, or the name of an M-file that defines
the function, preceded by an "@" sign. The function should have the
form function value = f ( m, x ).
- m is the number of variables.
- x is an M-vector containing a starting point for the iteration.
- delta_tol is the minimum allowed size of DELTA, and must be positive.
- delta is the starting stepsize.
- k_max is the maximum number of steps allowed. This limit is the only
way to avoid an infinite loop if the function decreases without bound as the argument goes to infinity.
On output:
- x is the program's estimate for the minimizer of the function;
- fx is the function value at x;
- k is the number of steps taken.
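For example, using an anonymous function handle of the documented form value = f ( m, x ), a call might look like the following; the quadratic objective, starting point, and tolerances here are purely illustrative.

      f = @( m, x ) ( x(1) - 1.0 )^2 + 10.0 * ( x(2) + 2.0 )^2;

      m = 2;
      x0 = [ 0.0; 0.0 ];
      delta_tol = 1.0e-06;
      delta = 0.3;
      k_max = 20000;

      [ x, fx, k ] = compass_search ( f, m, x0, delta_tol, delta, k_max );

      fprintf ( '  Estimated minimizer x = ( %g, %g )\n', x(1), x(2) );
      fprintf ( '  f(x) = %g after %d steps\n', fx, k );

On this separable convex quadratic the iteration should approach the minimizer ( 1, -2 ), provided delta is not chosen too large relative to the scale of the problem.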
Licensing:
The computer code and data files described and made available on this web page
are distributed under the MIT license.
Languages:
compass_search is available in
a C version and
a C++ version and
a FORTRAN90 version and
a MATLAB version and
an Octave version and
a Python version.
Related Data and Programs:
compass_search_test
Author:
John Burkardt
Reference:
- Evelyn Beale,
  On an Iterative Method for Finding a Local Minimum of a Function of More than One Variable,
  Technical Report 25,
  Statistical Techniques Research Group,
  Princeton University, 1958.
- Richard Brent,
  Algorithms for Minimization without Derivatives,
  Dover, 2002,
  ISBN: 0-486-41998-3,
  LC: QA402.5.B74.
- Charles Broyden,
  A class of methods for solving nonlinear simultaneous equations,
  Mathematics of Computation,
  Volume 19, 1965, pages 577-593.
- David Himmelblau,
  Applied Nonlinear Programming,
  McGraw Hill, 1972,
  ISBN13: 978-0070289215,
  LC: T57.8.H55.
- Tamara Kolda, Robert Michael Lewis, Virginia Torczon,
  Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods,
  SIAM Review,
  Volume 45, Number 3, 2003, pages 385-482.
- Ken McKinnon,
  Convergence of the Nelder-Mead simplex method to a nonstationary point,
  SIAM Journal on Optimization,
  Volume 9, Number 1, 1998, pages 148-158.
- Zbigniew Michalewicz,
  Genetic Algorithms + Data Structures = Evolution Programs,
  Third Edition,
  Springer, 1996,
  ISBN: 3-540-60676-9,
  LC: QA76.618.M53.
- Jorge More, Burton Garbow, Kenneth Hillstrom,
  Testing unconstrained optimization software,
  ACM Transactions on Mathematical Software,
  Volume 7, Number 1, March 1981, pages 17-41.
- Michael Powell,
  An Iterative Method for Finding Stationary Values of a Function of Several Variables,
  Computer Journal,
  Volume 5, 1962, pages 147-151.
- Howard Rosenbrock,
  An Automatic Method for Finding the Greatest or Least Value of a Function,
  Computer Journal,
  Volume 3, 1960, pages 175-184.
Source Code:
Last revised on 04 October 2022.