The Nelder-Mead Optimization Algorithm

NELDER_MEAD is a MATLAB program which seeks the minimizer of a scalar function of several variables, by Jeff Borggaard.

The algorithm is easy to visualize. The user supplies an initial set of points that represent solution estimates. The number of points supplied is one greater than the spatial dimension, so they form a "simplex" - in 2D, this is simply a triangle. The algorithm evaluates the function at each vertex of the simplex, and then considers various ways of seeking a better estimate, such as replacing the worst vertex by its reflected image through the centroid of the remaining vertices, or shrinking or expanding the simplex. An animation of the procedure looks almost like a little triangular creature trying to blindly feel its way downhill.
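The reflection, expansion, contraction, and shrink steps described above can be sketched in MATLAB as follows. This is a minimal illustration using the standard coefficients (reflection 1, expansion 2, contraction 1/2, shrink 1/2); it is not Borggaard's implementation, and the name nm_sketch is invented for this example.

        function [ x_opt, f_opt ] = nm_sketch ( simplex, f, max_iter )
        %  NM_SKETCH: one possible bare-bones Nelder-Mead loop, for illustration only.
        %  SIMPLEX is an (n+1) x n array of starting points; F is a function handle.
          [ np1, n ] = size ( simplex );
          fv = zeros ( np1, 1 );
          for i = 1 : np1
            fv(i) = f ( simplex(i,:) );
          end
          for iter = 1 : max_iter
            [ fv, idx ] = sort ( fv );          %  Order vertices best to worst.
            simplex = simplex(idx,:);
            c = mean ( simplex(1:n,:), 1 );     %  Centroid of all but the worst vertex.
            xr = c + ( c - simplex(np1,:) );    %  Reflect the worst vertex through the centroid.
            fr = f ( xr );
            if ( fr < fv(1) )                   %  Try expanding further in the same direction.
              xe = c + 2.0 * ( c - simplex(np1,:) );
              fe = f ( xe );
              if ( fe < fr )
                simplex(np1,:) = xe;  fv(np1) = fe;
              else
                simplex(np1,:) = xr;  fv(np1) = fr;
              end
            elseif ( fr < fv(n) )               %  Accept the reflected point.
              simplex(np1,:) = xr;  fv(np1) = fr;
            else                                %  Contract toward the centroid.
              xc = c + 0.5 * ( simplex(np1,:) - c );
              fc = f ( xc );
              if ( fc < fv(np1) )
                simplex(np1,:) = xc;  fv(np1) = fc;
              else                              %  Shrink every vertex toward the best one.
                for i = 2 : np1
                  simplex(i,:) = simplex(1,:) + 0.5 * ( simplex(i,:) - simplex(1,:) );
                  fv(i) = f ( simplex(i,:) );
                end
              end
            end
          end
          [ f_opt, i ] = min ( fv );
          x_opt = simplex(i,:);
        end

A production code would replace the fixed iteration count with a convergence test on the simplex size or on the spread of the function values.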

Although the user specifies an initial simplex of starting values, the algorithm is not constrained to search only within that simplex. This means that the user cannot force the algorithm to search only within a restricted region.

Usage:

        x_opt = nelder_mead ( simplex, f, flag )

Very simple functions can be input as a quoted string; thus, one could specify the f argument as '(x(1)-2*x(2)+7)^2'. For more complicated functions, however, it makes sense to prepare an M-file that defines the function. For this same example, a suitable M-file would be:

        function f = example ( x )
        f = ( x(1) - 2 * x(2) + 7 )^2;

If this information were stored in an M-file called example.m, then one might invoke the optimization program with a command like

        x_opt = nelder_mead ( x_init, @example, 0 )

MATLAB's built-in function fminsearch minimizes a scalar function of several variables using the Nelder-Mead algorithm.
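For the example above, the corresponding fminsearch call would look like this. Note that fminsearch constructs its own initial simplex from a single starting point, rather than accepting a whole simplex; the starting point [ 0, 0 ] here is arbitrary.

        f = @(x) ( x(1) - 2 * x(2) + 7 )^2;
        x_opt = fminsearch ( f, [ 0, 0 ] )

Because every point on the line x(1) - 2*x(2) + 7 = 0 minimizes this particular function, the returned minimizer depends on the starting point.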

Licensing:

The computer code and data files described and made available on this web page are distributed under the GNU LGPL license.

Languages:

NELDER_MEAD is available in a MATLAB version.

Related Data and Programs:

ASA047, a MATLAB library which minimizes a scalar function of several variables using the Nelder-Mead algorithm.

COMPASS_SEARCH, a MATLAB library which seeks the minimizer of a scalar function of several variables using compass search, a direct search algorithm that does not use derivatives.

ENTRUST, a MATLAB program which minimizes a scalar function of several variables using trust-region methods.

POLYNOMIALS, a MATLAB library which defines multivariate polynomials over rectangular domains, for which certain information is to be determined, such as the maximum and minimum values.

PRAXIS, a MATLAB library which implements the principal axis method for minimization of a function without the use of derivatives, by Richard Brent.

TEST_OPT, a MATLAB library which defines test problems requiring the minimization of a scalar function of several variables.

TOMS178, a MATLAB library which minimizes a scalar function of several variables using the Hooke-Jeeves method.

Author:

Jeff Borggaard, Mathematics Department, Virginia Tech.

Reference:

  1. Evelyn Beale,
    On an Iterative Method for Finding a Local Minimum of a Function of More than One Variable,
    Technical Report 25,
    Statistical Techniques Research Group,
    Princeton University, 1958.
  2. Richard Brent,
    Algorithms for Minimization without Derivatives,
    Dover, 2002,
    ISBN: 0-486-41998-3,
    LC: QA402.5.B74.
  3. David Himmelblau,
    Applied Nonlinear Programming,
    McGraw Hill, 1972,
    ISBN13: 978-0070289215,
    LC: T57.8.H55.
  4. Jeffrey Lagarias, James Reeds, Margaret Wright, Paul Wright,
    Convergence properties of the Nelder-Mead simplex method in low dimensions,
    SIAM Journal on Optimization,
    Volume 9, Number 1, 1998, pages 112-147.
  5. Ken McKinnon,
    Convergence of the Nelder-Mead simplex method to a nonstationary point,
    SIAM Journal on Optimization,
    Volume 9, Number 1, 1998, pages 148-158.
  6. Zbigniew Michalewicz,
    Genetic Algorithms + Data Structures = Evolution Programs,
    Third Edition,
    Springer, 1996,
    ISBN: 3-540-60676-9,
    LC: QA76.618.M53.
  7. John Nelder, Roger Mead,
    A simplex method for function minimization,
    Computer Journal,
    Volume 7, Number 4, January 1965, pages 308-313.
  8. Michael Powell,
    An Iterative Method for Finding Stationary Values of a Function of Several Variables,
    Computer Journal,
    Volume 5, 1962, pages 147-151.
  9. William Press, Brian Flannery, Saul Teukolsky, William Vetterling,
    Numerical Recipes in FORTRAN: The Art of Scientific Computing,
    Second Edition,
    Cambridge University Press, 1992,
    ISBN: 0-521-43064-X,
    LC: QA297.N866.
  10. Howard Rosenbrock,
    An Automatic Method for Finding the Greatest or Least Value of a Function,
    Computer Journal,
    Volume 3, 1960, pages 175-184.

Source Code:

Examples and Tests:

BEALE is the Beale function, for which N = 2.

BOHACH1 is the Bohachevsky function #1, for which N = 2.

BOHACH2 is the Bohachevsky function #2, for which N = 2.

EXTENDED_ROSENBROCK is the "extended" Rosenbrock function. This version of the Rosenbrock function allows the spatial dimension to be arbitrary, except that it must be even.
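A common form of the extended Rosenbrock function sums independent 2D Rosenbrock terms over consecutive pairs of variables, which is why the dimension must be even; whether this pairwise form is exactly the one used in the example file is an assumption.

        function f = extended_rosenbrock ( x )
        %  EXTENDED_ROSENBROCK: assumed pairwise form; length(x) must be even.
          f = 0.0;
          for i = 1 : 2 : length ( x ) - 1
            f = f + ( 1.0 - x(i) )^2 + 100.0 * ( x(i+1) - x(i)^2 )^2;
          end
        end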

GOLDSTEIN_PRICE is the Goldstein-Price polynomial, for which N = 2.

HIMMELBLAU is the Himmelblau function:

f(x) = (x(1)^2 + x(2) - 11)^2 + (x(1) + x(2)^2 - 7)^2
which has four global minima.

LOCAL is a badly scaled function with a local minimum, for which N = 2.

MCKINNON is the McKinnon function, for which N = 2. This function can cause problems for the Nelder-Mead optimization algorithm.

POWELL is the Powell singular quartic function, for which N = 4.

ROSENBROCK is the Rosenbrock "banana" function. The contour lines form a nested set of "bananas", which can make convergence very slow:

f(x) = ( 1 - x(1) )^2 + 100 * ( x(2) - x(1) * x(1) )^2
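As an illustration, the banana function can be written as an anonymous function and handed to fminsearch; the classical starting point ( -1.2, 1.0 ) is used here, and the iterates should approach the minimizer ( 1, 1 ).

        rosenbrock = @(x) ( 1.0 - x(1) )^2 + 100.0 * ( x(2) - x(1)^2 )^2;
        x_opt = fminsearch ( rosenbrock, [ -1.2, 1.0 ] )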


Last revised on 06 September 2010.