Often it is useful to find the minimum value of a function rather than just
the zeroes where it crosses the x-axis. fminbnd is designed for the
simpler, but very common, case of a univariate function where the interval
to search is bounded. For unbounded minimization of a function with
potentially many variables, use fminunc or fminsearch. The two
functions use different internal algorithms, so some knowledge of the
objective function is required to choose between them. For functions which
can be differentiated, fminunc is appropriate. For functions with
discontinuities, or for which a gradient search would fail, use fminsearch.
See Optimization for minimization in the presence of constraint
functions. Note that searches can be made for maxima by simply inverting the
objective function (F_to_max = -F_to_min).
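For example, a maximum can be located by minimizing the negated objective.
A minimal sketch (the choice of sin and the interval [0, pi] is only an
illustration):

    ## Maximum of sin on [0, pi] found by minimizing its negation.
    f = @sin;
    [x, fval] = fminbnd (@(x) -f (x), 0, pi);
    xmax = x;       # location of the maximum (near pi/2)
    ymax = -fval;   # maximum value (near 1)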
x = fminbnd (fcn, a, b)
x = fminbnd (fcn, a, b, options)
[x, fval, info, output] = fminbnd (…)

Find a minimum point of a univariate function.
fcn is a function handle, inline function, or string containing the name of the function to evaluate.
The starting interval is specified by a (left boundary) and b (right boundary). The endpoints must be finite.
options is a structure specifying additional parameters which
control the algorithm. Currently, fminbnd recognizes these options:
"Display", "FunValCheck", "MaxFunEvals", "MaxIter", "OutputFcn", "TolX".
"MaxFunEvals"
proscribes the maximum number of function evaluations
before optimization is halted. The default value is 500.
The value must be a positive integer.
"MaxIter"
proscribes the maximum number of algorithm iterations
before optimization is halted. The default value is 500.
The value must be a positive integer.
"TolX"
specifies the termination tolerance for the solution x.
The default is 1e-4
.
For a description of the other options, see optimset. To initialize an
options structure with default values for fminbnd use
options = optimset ("fminbnd").
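For example, the defaults can be adjusted before a search. A minimal sketch
(the quadratic objective is only an illustration):

    options = optimset ("fminbnd");
    options = optimset (options, "TolX", 1e-8, "Display", "iter");
    [x, fval] = fminbnd (@(x) (x - 2).^2, 0, 5, options);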
On exit, the function returns x, the approximate minimum point, and fval, the value of the function evaluated at x.
The third output info reports whether the algorithm succeeded and may take one of the following values:

1
  The algorithm converged.
0
  Iteration limit (MaxIter or MaxFunEvals) exceeded.
-1
  The algorithm was terminated by a user OutputFcn.
Programming Notes: The search for a minimum is restricted to the
finite interval bounded by a and b. If you have only one initial
point to begin searching from, then you will need to use an unconstrained
minimization algorithm such as fminunc or fminsearch.
fminbnd internally uses a Golden Section search strategy.
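As a simple illustration, the minimum of cos on the interval [3, 4] lies at
x = pi:

    [x, fval, info] = fminbnd (@cos, 3, 4);
    ## x is approximately pi, fval is approximately -1,
    ## and info is 1 if the algorithm converged.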
See also: fzero, fminunc, fminsearch, optimset.
x = fminunc (fcn, x0)
x = fminunc (fcn, x0, options)
[x, fval] = fminunc (fcn, …)
[x, fval, info] = fminunc (fcn, …)
[x, fval, info, output] = fminunc (fcn, …)
[x, fval, info, output, grad] = fminunc (fcn, …)
[x, fval, info, output, grad, hess] = fminunc (fcn, …)

Solve an unconstrained optimization problem defined by the function fcn.

fminunc attempts to determine a vector x such that fcn (x) is a local minimum.
fcn is a function handle, inline function, or string containing the name of the function to evaluate. fcn should accept a vector (array) defining the unknown variables, and return the objective function value, optionally along with its gradient.
x0 determines a starting guess. The shape of x0 is preserved in all calls to fcn, but otherwise is treated as a column vector.
options is a structure specifying additional parameters which
control the algorithm. Currently, fminunc recognizes these options:
"AutoScaling", "FinDiffType", "FunValCheck", "GradObj", "MaxFunEvals",
"MaxIter", "OutputFcn", "TolFun", "TolX", "TypicalX".
If "AutoScaling"
is "on"
, the variables will be
automatically scaled according to the column norms of the (estimated)
Jacobian. As a result, "TolFun"
becomes scaling-independent. By
default, this option is "off"
because it may sometimes deliver
unexpected (though mathematically correct) results.
If "GradObj"
is "on"
, it specifies that fcn—when
called with two output arguments—also returns the Jacobian matrix of
partial first derivatives at the requested point.
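For example, an objective returning both its value and gradient might be
written as follows. This is a minimal sketch; the function name myobj and
its quadratic form with minimum at [1; -2] are only illustrations:

    function [f, g] = myobj (x)
      ## Quadratic bowl with minimum at [1; -2].
      f = (x(1) - 1)^2 + (x(2) + 2)^2;
      ## Gradient returned as the second output for "GradObj".
      g = [2*(x(1) - 1); 2*(x(2) + 2)];
    endfunction

    options = optimset ("GradObj", "on");
    x = fminunc (@myobj, [0; 0], options);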
"MaxFunEvals"
proscribes the maximum number of function evaluations
before optimization is halted. The default value is
100 * number_of_variables
, i.e., 100 * length (x0)
.
The value must be a positive integer.
"MaxIter"
proscribes the maximum number of algorithm iterations
before optimization is halted. The default value is 400.
The value must be a positive integer.
"TolX"
specifies the termination tolerance for the unknown variables
x, while "TolFun"
is a tolerance for the objective function
value fval. The default is 1e-6
for both options.
For a description of the other options, see optimset.
On return, x is the location of the minimum and fval contains the value of the objective function at x.
info may be one of the following values:

1
  Converged to a solution point. Relative gradient error is less than
  specified by TolFun.
2
  Last relative step size was less than TolX.
3
  Last relative change in function value was less than TolFun.
0
  Iteration limit exceeded, either the maximum number of algorithm
  iterations MaxIter or the maximum number of function evaluations
  MaxFunEvals.
-1
  Algorithm terminated by OutputFcn.
-3
  The trust region radius became excessively small.
Optionally, fminunc can return a structure with convergence
statistics (output), the output gradient (grad) at the
solution x, and the approximate Hessian (hess) at the solution x.
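As an illustration, minimizing the classic Rosenbrock function and
inspecting the extra outputs:

    rosen = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
    [x, fval, info, output, grad, hess] = fminunc (rosen, [-1; 2]);
    ## x approaches [1; 1], fval approaches 0, and grad is near zero
    ## at the solution; output holds convergence statistics.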
Application Notes: If the objective function is a single nonlinear function
of one variable, then using fminbnd is usually a better choice.
The algorithm used by fminunc is a gradient search which depends
on the objective function being differentiable. If the function has
discontinuities it may be better to use a derivative-free algorithm such as
fminsearch.
See also: fminbnd, fminsearch, optimset.
x = fminsearch (fcn, x0)
x = fminsearch (fcn, x0, options)
x = fminsearch (problem)
[x, fval, exitflag, output] = fminsearch (…)

Find a value of x which minimizes the multi-variable function fcn.
fcn is a function handle, inline function, or string containing the name of the function to evaluate.
The search begins at the point x0 and iterates using the
Nelder & Mead Simplex algorithm (a derivative-free method). This
algorithm is better suited to functions which have discontinuities or for
which a gradient-based search such as fminunc fails.
Options for the search are provided in the parameter options using the
function optimset. Currently, fminsearch accepts the options:
"Display", "FunValCheck", "MaxFunEvals", "MaxIter", "OutputFcn",
"TolFun", "TolX".
"MaxFunEvals"
proscribes the maximum number of function evaluations
before optimization is halted. The default value is
200 * number_of_variables
, i.e., 200 * length (x0)
.
The value must be a positive integer.
"MaxIter"
proscribes the maximum number of algorithm iterations
before optimization is halted. The default value is
200 * number_of_variables
, i.e., 200 * length (x0)
.
The value must be a positive integer.
For a description of the other options, see optimset. To initialize an
options structure with default values for fminsearch use
options = optimset ("fminsearch").
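For example, to enlarge the evaluation budget and tighten the function
tolerance (a minimal sketch; the nondifferentiable objective is only an
illustration of where fminsearch is appropriate):

    options = optimset ("fminsearch");
    options = optimset (options, "MaxFunEvals", 1000, "TolFun", 1e-7);
    x = fminsearch (@(x) sum (abs (x - [1, 2, 3])), [0, 0, 0], options);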
fminsearch may also be called with a single structure argument
with the following fields:

objective
  The objective function.
x0
  The initial point.
solver
  Must be set to "fminsearch".
options
  A structure returned from optimset or an empty matrix to
  indicate that defaults should be used.

The field options is optional. All others are required.
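A minimal sketch of this calling form (the objective and starting point are
only illustrations):

    problem.objective = @(x) (x(1) - 1)^2 + (x(2) + 1)^2;
    problem.x0 = [0, 0];
    problem.solver = "fminsearch";
    problem.options = optimset ("TolX", 1e-6);
    x = fminsearch (problem);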
On exit, the function returns x, the minimum point, and fval, the function value at the minimum.
The third output exitflag reports whether the algorithm succeeded and may take one of the following values:

1
  if the algorithm converged (size of the simplex is smaller than TolX
  AND the step in function value between iterations is smaller than
  TolFun).
0
  if the maximum number of iterations or the maximum number of function
  evaluations is exceeded.
-1
  if the iteration is stopped by the "OutputFcn".
The fourth output is a structure output containing runtime
information about the algorithm. Fields in the structure are funcCount
containing the number of function calls to fcn, iterations
containing the number of iteration steps, algorithm with the name of
the search algorithm (always "Nelder-Mead simplex direct search"),
and message with the exit message.
Example:

    fminsearch (@(x) (x(1)-5).^2 + (x(2)-8).^4, [0; 0])
Note: If you need to find the minimum of a single-variable function it is
probably better to use fminbnd.
The function humps is useful for testing zero- and extrema-finding
algorithms.
y = humps (x)
[x, y] = humps (x)

Evaluate a function with multiple minima, maxima, and zero crossings.

The output y is the evaluation of the rational function:

          1200*x^4 - 2880*x^3 + 2036*x^2 - 348*x - 88
    y = - --------------------------------------------
           200*x^4 - 480*x^3 + 406*x^2 - 138*x + 17
x may be a scalar, vector or array. If x is omitted, the default range [0:0.05:1] is used.
When called with two output arguments, [x, y], x will
contain the input values, and y will contain the output from humps.
Programming Notes: humps has two local maxima located near x =
0.300 and 0.893, a local minimum near x = 0.637, and zeros near
x = -0.132 and 1.300. humps is a useful function for testing
algorithms which find zeros or local minima and maxima.
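For example, humps can be used to exercise the zero and minimum finders
described above:

    xz = fzero (@humps, 1.2);          # zero near x = 1.300
    xm = fminbnd (@humps, 0.5, 0.8);   # local minimum near x = 0.637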
Try demo humps to see a plot of the humps function.
See also: fzero, fminbnd, fminunc, fminsearch.