All Algorithms
Algorithm
 Choose the fminunc algorithm.
Choices are 'quasi-newton' (default) or 'trust-region' . The 'trust-region' algorithm
requires you to provide the gradient (see the description of fun ), or else fminunc uses
the 'quasi-newton' algorithm. For information on
choosing the algorithm, see Choosing the Algorithm.
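As a minimal sketch, one way to select the algorithm with optimoptions (the quadratic objective here is made up for illustration):

```matlab
% Hypothetical example: select the quasi-newton algorithm explicitly.
% The objective is a simple made-up quadratic.
options = optimoptions('fminunc','Algorithm','quasi-newton');
[x,fval] = fminunc(@(x) x(1)^2 + 3*x(2)^2, [1;1], options);
```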
CheckGradients  Compare user-supplied derivatives
(gradient of objective) to finite-differencing derivatives. Choices
are false (default) or true .
For optimset , the name is
DerivativeCheck and the values
are 'on' or 'off' .
See Current and Legacy Option Name Tables. 
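A sketch of enabling the check alongside a user-supplied gradient; objGrad is a hypothetical function (not shown) that returns the objective value and its gradient:

```matlab
% Hypothetical setup: fminunc compares the gradient returned by
% objGrad against finite-difference estimates.
options = optimoptions('fminunc', ...
    'SpecifyObjectiveGradient',true, ...
    'CheckGradients',true);
x = fminunc(@objGrad, x0, options);
```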
Diagnostics  Display diagnostic information
about the function to be minimized or solved. Choices are 'off' (default)
or 'on' . 
DiffMaxChange  Maximum change in variables for
finite-difference gradients (a positive scalar). The default is Inf .
DiffMinChange  Minimum change in variables for
finite-difference gradients (a positive scalar). The default is 0 .
Display  Level of display (see Iterative Display):
'off' or 'none' displays
no output.
'iter' displays output at each
iteration, and gives the default exit message.
'iter-detailed' displays output
at each iteration, and gives the technical exit message.
'notify' displays output only if
the function does not converge, and gives the default exit message.
'notify-detailed' displays output
only if the function does not converge, and gives the technical exit
message.
'final' (default) displays only
the final output, and gives the default exit message.
'final-detailed' displays only
the final output, and gives the technical exit message.

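For example, a sketch that shows the per-iteration display for a simple one-variable problem:

```matlab
% Made-up example: print the iterative display while minimizing (x-2)^2.
options = optimoptions('fminunc','Display','iter');
x = fminunc(@(x) (x-2)^2, 0, options);
```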
FiniteDifferenceStepSize  Scalar or vector step size factor for finite differences. When
you set FiniteDifferenceStepSize to a vector v , the
forward finite differences delta are
delta = v.*sign′(x).*max(abs(x),TypicalX);
where sign′(x) = sign(x) except sign′(0) = 1 .
Central finite differences are
delta = v.*max(abs(x),TypicalX);
Scalar FiniteDifferenceStepSize expands to a vector. The default
is sqrt(eps) for forward finite differences, and eps^(1/3)
for central finite differences. The
trust-region algorithm uses FiniteDifferenceStepSize only
when CheckGradients is set to true .
For optimset , the name is
FinDiffRelStep . See Current and Legacy Option Name Tables. 
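As a sketch of the formulas above, the forward-difference steps for a given point x could be computed by hand as follows (using the default scalar step expanded to a vector, and the default TypicalX):

```matlab
% Sketch of the forward finite-difference step formula.
x = [0; -3; 5];
v = sqrt(eps)*ones(size(x));        % default forward-difference factor
TypicalX = ones(size(x));           % default TypicalX
s = sign(x); s(s == 0) = 1;         % sign′(x): sign′(0) treated as 1
delta = v.*s.*max(abs(x),TypicalX); % forward finite differences
```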
FiniteDifferenceType  Finite differences, used to estimate
gradients, are either 'forward' (the default),
or 'central' (centered). 'central' takes
twice as many function evaluations, but should be more accurate. The
trust-region algorithm uses FiniteDifferenceType only
when CheckGradients is set to true .
For optimset , the name is
FinDiffType . See Current and Legacy Option Name Tables. 
FunValCheck  Check whether objective function
values are valid. The default setting, 'off' , does
not perform a check. The 'on' setting displays
an error when the objective function returns a value that is complex , Inf ,
or NaN . 
MaxFunctionEvaluations  Maximum number of function evaluations
allowed, a positive integer. The default value is 100*numberOfVariables .
See Tolerances and Stopping Criteria and Iterations and Function Counts.
For optimset , the name is
MaxFunEvals . See Current and Legacy Option Name Tables. 
MaxIterations  Maximum number of iterations allowed,
a positive integer. The default value is 400 .
See Tolerances and Stopping Criteria and Iterations and Function Counts.
For optimset , the name is
MaxIter . See Current and Legacy Option Name Tables. 
OptimalityTolerance  Termination tolerance on the first-order optimality (a positive
scalar). The default is 1e-6 . See First-Order Optimality Measure.
For optimset , the name is
TolFun . See Current and Legacy Option Name Tables. 
OutputFcn  Specify one or more user-defined functions that an optimization
function calls at each iteration. Pass a function handle
or a cell array of function handles. The default is none
([] ). See Output Function Syntax. 
PlotFcn  Plots various measures of progress while the algorithm executes;
select from predefined plots or write your own. Pass a
built-in plot function name, a function handle, or a
cell array of built-in plot function names or function
handles. For custom plot functions, pass function
handles. The default is none
([] ):
'optimplotx' plots the
current point.
'optimplotfunccount'
plots the function count.
'optimplotfval' plots the
function value.
'optimplotstepsize' plots
the step size.
'optimplotfirstorderopt'
plots the first-order optimality measure.
Custom plot functions use the same syntax
as output functions. See Output Functions and Output Function Syntax. For
optimset , the name is
PlotFcns . See Current and Legacy Option Name Tables. 
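For example, a sketch that selects two of the built-in plot functions listed above:

```matlab
% Plot the function value and the step size at each iteration.
options = optimoptions('fminunc', ...
    'PlotFcn',{'optimplotfval','optimplotstepsize'});
```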
SpecifyObjectiveGradient  Gradient for the objective function
defined by the user. See the description of fun to see how to define the gradient in fun .
Set to true to have fminunc use
a user-defined gradient of the objective function. The default false causes fminunc to
estimate gradients using finite differences. You must provide the
gradient, and set SpecifyObjectiveGradient to true ,
to use the trust-region algorithm. This option is not required for
the quasi-newton algorithm.
For optimset , the name is
GradObj and the values are
'on' or 'off' .
See Current and Legacy Option Name Tables. 
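A minimal sketch of an objective that returns its gradient as a second output; the Rosenbrock function is chosen here only for illustration:

```matlab
% Supply the gradient so the trust-region algorithm can run.
options = optimoptions('fminunc','Algorithm','trust-region', ...
    'SpecifyObjectiveGradient',true);
x = fminunc(@rosenboth,[-1;2],options);

function [f,g] = rosenboth(x)
% Rosenbrock function value and, when requested, its gradient
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
if nargout > 1
    g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));
          200*(x(2) - x(1)^2)];
end
end
```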
StepTolerance  Termination tolerance on x ,
a positive scalar. The default value is 1e-6 . See Tolerances and Stopping Criteria.
For optimset , the name is
TolX . See Current and Legacy Option Name Tables. 
TypicalX  Typical x values.
The number of elements in TypicalX is equal to
the number of elements in x0 , the starting point.
The default value is ones(numberOfVariables,1) . fminunc uses TypicalX for
scaling finite differences for gradient estimation. The trust-region algorithm
uses TypicalX only for the CheckGradients option.
trust-region Algorithm
FunctionTolerance  Termination tolerance on the function
value, a positive scalar. The default is 1e-6 .
See Tolerances and Stopping Criteria.
For optimset , the name is
TolFun . See Current and Legacy Option Name Tables. 
HessianFcn  If set to [] (default), fminunc approximates
the Hessian using finite differences. If set to 'objective' , fminunc uses
a user-defined Hessian for the objective function. The Hessian is
the third output of the objective function (see fun ).
For optimset , the name is
HessFcn . See Current and Legacy Option Name Tables. 
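As a sketch, a made-up quadratic whose Hessian is the third output, enabled via HessianFcn = 'objective':

```matlab
% Hypothetical quadratic objective with analytic gradient and Hessian.
options = optimoptions('fminunc','Algorithm','trust-region', ...
    'SpecifyObjectiveGradient',true,'HessianFcn','objective');
x = fminunc(@quadObj,[1;1],options);

function [f,g,H] = quadObj(x)
% f = x1^2 + 2*x2^2 + x1*x2, with its gradient and (constant) Hessian
f = x(1)^2 + 2*x(2)^2 + x(1)*x(2);
if nargout > 1
    g = [2*x(1) + x(2); 4*x(2) + x(1)];
end
if nargout > 2
    H = [2 1; 1 4];
end
end
```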
HessianMultiplyFcn  Hessian multiply function, specified as a function handle. For
large-scale structured problems, this function computes
the Hessian matrix product H*Y
without actually forming H . The
function is of the form
W = hmfun(Hinfo,Y)
where Hinfo contains the matrix used to
compute H*Y . The first
argument is the same as the third argument returned by
the objective function fun , for
example [f,g,Hinfo] = fun(x) . Y
is a matrix that has the same number of rows as there
are dimensions in the problem. The matrix W =
H*Y , although H is not
formed explicitly. fminunc uses
Hinfo to compute the
preconditioner. For information on how to supply values
for any additional parameters hmfun
needs, see Passing Extra Parameters.
Note: To use the HessianMultiplyFcn
option, HessianFcn must be set to
[] . For an example, see Minimization with Dense Structured Hessian, Linear Equalities. For optimset , the
name is HessMult . See Current and Legacy Option Name Tables.
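As a sketch, suppose the Hessian has the structured form H = B'*B, where the objective returns B as its third output ( Hinfo ); a multiply function could then form H*Y without ever building H :

```matlab
function W = hmfun(Hinfo,Y)
% Hypothetical Hessian-multiply function for H = B'*B, where Hinfo is B.
% H*Y is computed as B'*(B*Y), so the (possibly huge) H is never formed.
W = Hinfo'*(Hinfo*Y);
end
```

It would be passed as optimoptions('fminunc','Algorithm','trust-region','HessianFcn',[],'HessianMultiplyFcn',@hmfun) , with the objective also supplying its gradient.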
HessPattern  Sparsity pattern of the Hessian
for finite differencing. Set HessPattern(i,j) = 1 when
you can have ∂²fun/∂x(i)∂x(j) ≠ 0. Otherwise, set HessPattern(i,j)
= 0 . Use HessPattern when
it is inconvenient to compute the Hessian matrix H in fun ,
but you can determine (say, by inspection) when the ith
component of the gradient of fun depends on x(j) . fminunc can
approximate H via sparse finite differences (of
the gradient) if you provide the sparsity structure of H as
the value for HessPattern . In other words, provide
the locations of the nonzeros. When the structure is unknown,
do not set HessPattern . The default behavior is
as if HessPattern is a dense matrix of ones. Then fminunc computes
a full finite-difference approximation in each iteration. This computation
can be expensive for large problems, so it is usually better to determine
the sparsity structure. 
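For example, a sketch of a tridiagonal pattern for a made-up chain-structured objective in which each x(i) interacts only with its neighbors:

```matlab
% Hypothetical tridiagonal sparsity pattern for a 100-variable problem.
n = 100;
pattern = spdiags(ones(n,3), -1:1, n, n);   % nonzeros on the three diagonals
options = optimoptions('fminunc','Algorithm','trust-region', ...
    'SpecifyObjectiveGradient',true,'HessPattern',pattern);
```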
MaxPCGIter  Maximum number of preconditioned
conjugate gradient (PCG) iterations, a positive scalar. The default
is max(1,floor(numberOfVariables/2)) . For more
information, see Trust Region Algorithm. 
PrecondBandWidth  Upper bandwidth of preconditioner
for PCG, a nonnegative integer. By default, fminunc uses
diagonal preconditioning (upper bandwidth of 0). For some problems,
increasing the bandwidth reduces the number of PCG iterations. Setting PrecondBandWidth to Inf uses
a direct factorization (Cholesky) rather than the conjugate gradients
(CG). The direct factorization is computationally more expensive than
CG, but produces a better quality step towards the solution. 
SubproblemAlgorithm  Determines how the iteration step
is calculated. The default, 'cg' , takes a faster
but less accurate step than 'factorization' . See fminunc trust-region Algorithm.
TolPCG  Termination tolerance on the PCG
iteration, a positive scalar. The default is 0.1 . 
quasi-newton Algorithm
HessUpdate  Method for choosing the search
direction in the quasi-Newton algorithm. The choices are 'bfgs' (default), 'dfp' ,
and 'steepdesc' .
ObjectiveLimit  A tolerance (stopping criterion)
that is a scalar. If the objective function value at an iteration
is less than or equal to ObjectiveLimit , the iterations
halt because the problem is presumably unbounded. The default value
is -1e20 .
UseParallel  When true , fminunc estimates
gradients in parallel. Disable by setting to the default, false . trust-region requires
a gradient in the objective, so UseParallel does
not apply. See Parallel Computing.