All Algorithms
Algorithm | Choose the fminunc algorithm. Choices are 'quasi-newton' (default) or 'trust-region'. The 'trust-region' algorithm requires you to provide the gradient (see the description of fun); otherwise, fminunc uses the 'quasi-newton' algorithm. For information on choosing the algorithm, see Choosing the Algorithm.
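For example, a minimal sketch of selecting the algorithm with optimoptions (the objective myfun and start point x0 are hypothetical placeholders; for trust-region, myfun must return the gradient as its second output):

    options = optimoptions('fminunc', ...
        'Algorithm','trust-region', ...        % requires a user-supplied gradient
        'SpecifyObjectiveGradient',true);
    [x,fval] = fminunc(@myfun,x0,options);     % myfun returns [f,g]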
CheckGradients | Compare user-supplied derivatives (gradient of the objective) to finite-differencing derivatives. Choices are false (default) or true. For optimset, the name is DerivativeCheck and the values are 'on' or 'off'. See Current and Legacy Option Names.
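A sketch of typical usage, assuming the objective supplies its own gradient (see SpecifyObjectiveGradient below):

    options = optimoptions('fminunc', ...
        'SpecifyObjectiveGradient',true, ...  % supply the analytic gradient
        'CheckGradients',true);               % compare it against finite differences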
Diagnostics | Display diagnostic information about the function to be minimized or solved. Choices are 'off' (default) or 'on'.
DiffMaxChange | Maximum change in variables for finite-difference gradients (a positive scalar). The default is Inf.
DiffMinChange | Minimum change in variables for finite-difference gradients (a positive scalar). The default is 0.
Display | Level of display (see Iterative Display):
'off' or 'none' displays no output.
'iter' displays output at each iteration, and gives the default exit message.
'iter-detailed' displays output at each iteration, and gives the technical exit message.
'notify' displays output only if the function does not converge, and gives the default exit message.
'notify-detailed' displays output only if the function does not converge, and gives the technical exit message.
'final' (default) displays only the final output, and gives the default exit message.
'final-detailed' displays only the final output, and gives the technical exit message.
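For example, to watch the solver's progress at each iteration:

    options = optimoptions('fminunc','Display','iter');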
FiniteDifferenceStepSize | Scalar or vector step size factor for finite differences. When you set FiniteDifferenceStepSize to a vector v, the forward finite differences delta are
delta = v.*sign′(x).*max(abs(x),TypicalX);
where sign′(x) = sign(x) except sign′(0) = 1. Central finite differences are
delta = v.*max(abs(x),TypicalX);
A scalar FiniteDifferenceStepSize expands to a vector. The default is sqrt(eps) for forward finite differences, and eps^(1/3) for central finite differences. The trust-region algorithm uses FiniteDifferenceStepSize only when CheckGradients is set to true. For optimset, the name is FinDiffRelStep. See Current and Legacy Option Names.
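As a worked sketch of these formulas (the values of x and TypicalX are made up for illustration):

    v = sqrt(eps);                        % default forward-difference factor
    x = [0; -2; 3];  TypicalX = ones(3,1);
    s = sign(x);  s(s==0) = 1;            % sign'(x): sign(x), with sign'(0) = 1
    deltaForward = v.*s.*max(abs(x),TypicalX)        % forward differences
    deltaCentral = eps^(1/3).*max(abs(x),TypicalX)   % central differences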
FiniteDifferenceType | Finite differences, used to estimate gradients, are either 'forward' (the default) or 'central' (centered). 'central' takes twice as many function evaluations, but should be more accurate. The trust-region algorithm uses FiniteDifferenceType only when CheckGradients is set to true. For optimset, the name is FinDiffType. See Current and Legacy Option Names.
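For example, to trade extra function evaluations for more accurate gradient estimates:

    options = optimoptions('fminunc','FiniteDifferenceType','central');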
FunValCheck | Check whether objective function values are valid. The default setting, 'off', does not perform a check. The 'on' setting displays an error when the objective function returns a value that is complex, Inf, or NaN.
MaxFunctionEvaluations | Maximum number of function evaluations allowed, a positive integer. The default value is 100*numberOfVariables. See Tolerances and Stopping Criteria and Iterations and Function Counts. For optimset, the name is MaxFunEvals. See Current and Legacy Option Names.
MaxIterations | Maximum number of iterations allowed, a positive integer. The default value is 400. See Tolerances and Stopping Criteria and Iterations and Function Counts. For optimset, the name is MaxIter. See Current and Legacy Option Names.
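For example, to raise both limits for an expensive problem (the specific values are arbitrary):

    options = optimoptions('fminunc', ...
        'MaxFunctionEvaluations',5e3, ...
        'MaxIterations',1e3);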
OptimalityTolerance | Termination tolerance on the first-order optimality (a positive scalar). The default is 1e-6. See First-Order Optimality Measure. For optimset, the name is TolFun. See Current and Legacy Option Names.
OutputFcn | Specify one or more user-defined functions that an optimization function calls at each iteration. Pass a function handle or a cell array of function handles. The default is none ([]). See Output Function and Plot Function Syntax.
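A minimal sketch of a user-defined output function (the name myoutfun is hypothetical; the three-argument syntax is the standard output function syntax):

    function stop = myoutfun(x,optimValues,state)
    % Print the objective value at each iteration; never request a halt.
    stop = false;                 % return true to stop the solver early
    if strcmp(state,'iter')
        fprintf('Iteration %d: f(x) = %g\n', ...
            optimValues.iteration, optimValues.fval);
    end
    end

    options = optimoptions('fminunc','OutputFcn',@myoutfun);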
PlotFcn | Plots various measures of progress while the algorithm executes; select from predefined plots or write your own. Pass a built-in plot function name, a function handle, or a cell array of built-in plot function names or function handles. For custom plot functions, pass function handles. The default is none ([]):
'optimplotx' plots the current point.
'optimplotfunccount' plots the function count.
'optimplotfval' plots the function value.
'optimplotstepsize' plots the step size.
'optimplotfirstorderopt' plots the first-order optimality measure.
Custom plot functions use the same syntax as output functions. See Output Functions for Optimization Toolbox™ and Output Function and Plot Function Syntax. For optimset, the name is PlotFcns. See Current and Legacy Option Names.
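For example, to track the objective value and the first-order optimality measure during a run:

    options = optimoptions('fminunc', ...
        'PlotFcn',{'optimplotfval','optimplotfirstorderopt'});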
SpecifyObjectiveGradient | Gradient for the objective function defined by the user. See the description of fun to see how to define the gradient in fun. Set to true to have fminunc use a user-defined gradient of the objective function. The default, false, causes fminunc to estimate gradients using finite differences. You must provide the gradient, and set SpecifyObjectiveGradient to true, to use the trust-region algorithm. This option is not required for the quasi-Newton algorithm. For optimset, the name is GradObj and the values are 'on' or 'off'. See Current and Legacy Option Names.
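A sketch of an objective that returns its gradient as a second output, here the Rosenbrock function:

    function [f,g] = rosenboth(x)
    f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
    if nargout > 1                        % gradient requested
        g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));
              200*(x(2) - x(1)^2)];
    end
    end

    options = optimoptions('fminunc', ...
        'Algorithm','trust-region','SpecifyObjectiveGradient',true);
    x = fminunc(@rosenboth,[-1;2],options);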
StepTolerance | Termination tolerance on x, a positive scalar. The default value is 1e-6. See Tolerances and Stopping Criteria. For optimset, the name is TolX. See Current and Legacy Option Names.
TypicalX | Typical x values. The number of elements in TypicalX is equal to the number of elements in x0, the starting point. The default value is ones(numberOfVariables,1). fminunc uses TypicalX for scaling finite differences for gradient estimation. The trust-region algorithm uses TypicalX only for the CheckGradients option.
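For example, for a problem whose variables have very different magnitudes (the values below are hypothetical):

    options = optimoptions('fminunc','TypicalX',[1e4; 1; 1e-3]);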
trust-region Algorithm
FunctionTolerance | Termination tolerance on the function value, a positive scalar. The default is 1e-6. See Tolerances and Stopping Criteria. For optimset, the name is TolFun. See Current and Legacy Option Names.
HessianFcn | If set to [] (default), fminunc approximates the Hessian using finite differences. If set to 'objective', fminunc uses a user-defined Hessian for the objective function. The Hessian is the third output of the objective function (see fun). For optimset, the name is HessFcn. See Current and Legacy Option Names.
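Extending the rosenboth sketch above with the Hessian as a third output:

    function [f,g,H] = rosenall(x)
    f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
    g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));
          200*(x(2) - x(1)^2)];
    H = [1200*x(1)^2 - 400*x(2) + 2, -400*x(1);   % analytic Hessian
         -400*x(1),                   200];
    end

    options = optimoptions('fminunc','Algorithm','trust-region', ...
        'SpecifyObjectiveGradient',true,'HessianFcn','objective');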
HessianMultiplyFcn | Hessian multiply function, specified as a function handle. For large-scale structured problems, this function computes the Hessian matrix product H*Y without actually forming H. The function is of the form
W = hmfun(Hinfo,Y)
where Hinfo contains the matrix used to compute H*Y. The first argument is the same as the third argument returned by the objective function fun, for example
[f,g,Hinfo] = fun(x)
Y is a matrix that has the same number of rows as there are dimensions in the problem. The matrix W = H*Y, although H is not formed explicitly. fminunc uses Hinfo to compute the preconditioner. For information on how to supply values for any additional parameters hmfun needs, see Passing Extra Parameters.
Note: To use the HessianMultiplyFcn option, HessianFcn must be set to []. For an example, see Minimization with Dense Structured Hessian, Linear Equalities. For optimset, the name is HessMult. See Current and Legacy Option Names.
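A minimal sketch, assuming the objective's third output Hinfo is a matrix B such that H = B'*B (a common structured form; the names and structure are illustrative):

    function W = hmfun(Hinfo,Y)
    % Compute W = H*Y with H = Hinfo'*Hinfo, without ever forming H.
    W = Hinfo'*(Hinfo*Y);
    end

    options = optimoptions('fminunc','Algorithm','trust-region', ...
        'SpecifyObjectiveGradient',true, ...
        'HessianFcn',[],'HessianMultiplyFcn',@hmfun);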
HessPattern | Sparsity pattern of the Hessian for finite differencing. Set HessPattern(i,j) = 1 when you can have ∂²fun/∂x(i)∂x(j) ≠ 0. Otherwise, set HessPattern(i,j) = 0. Use HessPattern when it is inconvenient to compute the Hessian matrix H in fun, but you can determine (say, by inspection) when the ith component of the gradient of fun depends on x(j). fminunc can approximate H via sparse finite differences (of the gradient) if you provide the sparsity structure of H as the value for HessPattern. In other words, provide the locations of the nonzeros. When the structure is unknown, do not set HessPattern. The default behavior is as if HessPattern is a dense matrix of ones. Then fminunc computes a full finite-difference approximation in each iteration. This computation can be expensive for large problems, so it is usually better to determine the sparsity structure.
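For example, if the Hessian is known to be tridiagonal (a hypothetical structure), the pattern can be built with spdiags:

    n = 1000;                                     % number of variables
    options = optimoptions('fminunc','Algorithm','trust-region', ...
        'SpecifyObjectiveGradient',true, ...
        'HessPattern',spdiags(ones(n,3),-1:1,n,n));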
MaxPCGIter | Maximum number of preconditioned conjugate gradient (PCG) iterations, a positive scalar. The default is max(1,floor(numberOfVariables/2)). For more information, see Trust Region Algorithm.
PrecondBandWidth | Upper bandwidth of the preconditioner for PCG, a nonnegative integer. By default, fminunc uses diagonal preconditioning (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations. Setting PrecondBandWidth to Inf uses a direct factorization (Cholesky) rather than conjugate gradients (CG). The direct factorization is computationally more expensive than CG, but produces a better quality step towards the solution.
SubproblemAlgorithm | Determines how the iteration step is calculated. The default, 'cg', takes a faster but less accurate step than 'factorization'. See fminunc trust-region Algorithm.
TolPCG | Termination tolerance on the PCG iteration, a positive scalar. The default is 0.1.
quasi-newton Algorithm
HessUpdate | Method for choosing the search direction in the quasi-Newton algorithm. The choices are 'bfgs' (default), 'dfp', and 'steepdesc'.
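For example, to try the DFP update instead of the default BFGS update (assuming 'dfp' is a supported value in your release):

    options = optimoptions('fminunc','Algorithm','quasi-newton', ...
        'HessUpdate','dfp');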
ObjectiveLimit | A tolerance (stopping criterion) that is a scalar. If the objective function value at an iteration is less than or equal to ObjectiveLimit, the iterations halt because the problem is presumably unbounded. The default value is -1e20.
UseParallel | When true, fminunc estimates gradients in parallel. Disable by setting to the default, false. trust-region requires a gradient in the objective, so UseParallel does not apply. See Parallel Computing.
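A sketch of enabling parallel gradient estimation (assumes the quasi-newton algorithm with finite-difference gradients, and that Parallel Computing Toolbox is available):

    options = optimoptions('fminunc','UseParallel',true);
    % Finite-difference gradient evaluations are then distributed across
    % an open parallel pool, if one is available.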