
# lsqnonlin

Solve nonlinear least-squares (nonlinear data-fitting) problems

Nonlinear least-squares solver. Solves nonlinear least-squares curve fitting problems of the form

$\min_{x} \|f(x)\|_2^2 = \min_{x}\left(f_1(x)^2 + f_2(x)^2 + \cdots + f_n(x)^2\right)$

with optional lower and upper bounds lb and ub on the components of x.

x, lb, and ub can be vectors or matrices; see Matrix Arguments.

Rather than compute the value $\|f(x)\|_2^2$ (the sum of squares), lsqnonlin requires the user-defined function to compute the vector-valued function

$f(x)=\left[\begin{array}{c} f_1(x)\\ f_2(x)\\ \vdots\\ f_n(x) \end{array}\right].$
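For instance, to minimize $(x_1-1)^2 + (x_2-2)^2$, a minimal illustrative sketch (not one of the documented examples) returns the residual vector rather than the sum of squares:

fun = @(x)[x(1) - 1; x(2) - 2];   % residual vector; lsqnonlin squares and sums it
x = lsqnonlin(fun,[0,0])          % converges to x = [1 2]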

## Syntax

• x = lsqnonlin(fun,x0)
• x = lsqnonlin(fun,x0,lb,ub)
• x = lsqnonlin(fun,x0,lb,ub,options)
• x = lsqnonlin(problem)
• [x,resnorm] = lsqnonlin(___)
• [x,resnorm,residual,exitflag,output] = lsqnonlin(___)
• [x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqnonlin(___)

## Description


x = lsqnonlin(fun,x0) starts at the point x0 and finds a minimum of the sum of squares of the functions described in fun. The function fun should return a vector of values, not the sum of squares of the values. (The algorithm implicitly computes the sum of squares of the components of fun(x).)

Note: Passing Extra Parameters explains how to pass extra parameters to the vector function fun(x), if necessary.


x = lsqnonlin(fun,x0,lb,ub) defines a set of lower and upper bounds on the design variables in x, so that the solution is always in the range lb ≤ x ≤ ub. You can fix the solution component x(i) by specifying lb(i) = ub(i).

Note: If the specified input bounds for a problem are inconsistent, the output x is x0 and the outputs resnorm and residual are []. Components of x0 that violate the bounds lb ≤ x ≤ ub are reset to the interior of the box defined by the bounds. Components that respect the bounds are not changed.
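For instance, here is a minimal sketch (with a hypothetical three-component residual) that fixes x(2) at 0 by setting equal bounds; note that x0(2) violates the bounds and is reset:

fun = @(x)[x(1) - 1; x(2) - 2; x(1)*x(2)];   % hypothetical residual vector
x0 = [3, 3];                                 % x0(2) = 3 violates ub(2) and is reset
lb = [-5, 0];
ub = [ 5, 0];                                % lb(2) == ub(2) fixes x(2) at 0
x = lsqnonlin(fun,x0,lb,ub)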


x = lsqnonlin(fun,x0,lb,ub,options) minimizes with the optimization options specified in options. Use optimoptions to set these options. Pass empty matrices for lb and ub if no bounds exist.

x = lsqnonlin(problem) finds the minimum for problem, where problem is a structure described in Input Arguments. Create the problem structure by exporting a problem from the Optimization app, as described in Exporting Your Work.


[x,resnorm] = lsqnonlin(___), for any input arguments, returns the value of the squared 2-norm of the residual at x: sum(fun(x).^2).


[x,resnorm,residual,exitflag,output] = lsqnonlin(___) additionally returns the value of the residual fun(x) at the solution x, a value exitflag that describes the exit condition, and a structure output that contains information about the optimization process.

[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqnonlin(___) additionally returns a structure lambda whose fields contain the Lagrange multipliers at the solution x, and the Jacobian of fun at the solution x.
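For example, a sketch that requests every available output (with fun, x0, lb, and ub as defined in the examples below):

[x,resnorm,residual,exitflag,output,lambda,jacobian] = ...
    lsqnonlin(fun,x0,lb,ub);
lambda.lower   % multipliers associated with the lower bounds
lambda.upper   % multipliers associated with the upper bounds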

## Examples


Fit a simple exponential decay curve to data.

Generate data from an exponential decay model plus noise. The model is

$y = e^{-1.3t} + \varepsilon,$

with $t$ ranging from 0 through 3, and $\varepsilon$ normally distributed noise with mean 0 and standard deviation 0.05.

rng default % for reproducibility
d = linspace(0,3);
y = exp(-1.3*d) + 0.05*randn(size(d));

The problem is: given the data (d, y), find the exponential decay rate that best fits the data.

Create an anonymous function that takes a value of the exponential decay rate and returns a vector of differences from the model with that decay rate and the data.

fun = @(r)exp(-d*r)-y; 

Find the value of the optimal decay rate. Arbitrarily choose an initial guess x0 = 4.

x0 = 4; x = lsqnonlin(fun,x0) 
Local minimum possible.
lsqnonlin stopped because the final change in the sum of squares relative to its initial value is less than the default value of the function tolerance.

x =
    1.2645

Plot the data and the best-fitting exponential curve.

plot(d,y,'ko',d,exp(-x*d),'b-')
legend('Data','Best fit')
xlabel('t')
ylabel('exp(-tx)')

Find the best-fitting model when some of the fitting parameters have bounds.

Find a centering $b$ and scaling $a$ that best fit the function

$a\,e^{-t}\,e^{-e^{-(t-b)}}$

to the standard normal density,

$\frac{1}{\sqrt{2\pi}}\,e^{-t^{2}/2}.$
Create a vector t of data points, and the corresponding normal density at those points.

t = linspace(-4,4);
y = 1/sqrt(2*pi)*exp(-t.^2/2);

Create a function that evaluates the difference between the centered and scaled function and the normal density y, with x(1) as the scaling $a$ and x(2) as the centering $b$.

fun = @(x)x(1)*exp(-t).*exp(-exp(-(t-x(2)))) - y; 

Find the optimal fit starting from x0 = [1/2,0], with the scaling between 1/2 and 3/2, and the centering between -1 and 3.

lb = [1/2,-1];
ub = [3/2,3];
x0 = [1/2,0];
x = lsqnonlin(fun,x0,lb,ub)

Local minimum possible.
lsqnonlin stopped because the final change in the sum of squares relative to its initial value is less than the default value of the function tolerance.

x =
    0.8231   -0.2444

Plot the two functions to see the quality of the fit.

plot(t,y,'r-',t,fun(x)+y,'b-')
xlabel('t')
legend('Normal density','Fitted function')

Compare the results of a data-fitting problem when using different lsqnonlin algorithms.

Suppose that you have observation time data xdata and observed response data ydata, and you want to find parameters $x_1$ and $x_2$ to fit a model of the form

$\text{ydata} = x_1 e^{x_2\,\text{xdata}}.$

Input the observation times and responses.

xdata = ...
    [0.9 1.5 13.8 19.8 24.1 28.2 35.2 60.3 74.6 81.3];
ydata = ...
    [455.2 428.6 124.1 67.3 43.2 28.1 13.1 -0.4 -1.3 -1.5];

Create a simple exponential decay model. The model computes a vector of differences between predicted values and observed values.

fun = @(x)x(1)*exp(x(2)*xdata)-ydata; 

Fit the model using the starting point x0 = [100,-1]. First, use the default 'trust-region-reflective' algorithm.

x0 = [100,-1];
options = optimoptions(@lsqnonlin,'Algorithm','trust-region-reflective');
x = lsqnonlin(fun,x0,[],[],options)

Local minimum possible.
lsqnonlin stopped because the final change in the sum of squares relative to its initial value is less than the default value of the function tolerance.

x =
  498.8309   -0.1013

See if there is any difference using the 'levenberg-marquardt' algorithm.

options.Algorithm = 'levenberg-marquardt';
x = lsqnonlin(fun,x0,[],[],options)

Local minimum possible.
lsqnonlin stopped because the relative size of the current step is less than the default value of the step size tolerance.

x =
  498.8309   -0.1013

The two algorithms found the same solution. Plot the solution and the data.

plot(xdata,ydata,'ko')
hold on
tlist = linspace(xdata(1),xdata(end));
plot(tlist,x(1)*exp(x(2)*tlist),'b-')
xlabel xdata
ylabel ydata
title('Exponential Fit to Data')
legend('Data','Exponential Fit')
hold off

Find the x that minimizes

$\sum_{k=1}^{10}\left(2+2k-e^{kx_1}-e^{kx_2}\right)^2,$

and find the value of the minimal sum of squares.

Because lsqnonlin assumes that the sum of squares is not explicitly formed in the user-defined function, the function passed to lsqnonlin should instead compute the vector-valued function

$F_k(x)=2+2k-e^{kx_1}-e^{kx_2},$

for k = 1 to 10 (that is, F should have 10 components).

First, write a file to compute the 10-component vector F.

function F = myfun(x)
k = 1:10;
F = 2 + 2*k - exp(k*x(1)) - exp(k*x(2));

Find the minimizing point and the minimum value, starting at the point x0 = [0.3,0.4].

x0 = [0.3,0.4];
[x,resnorm] = lsqnonlin(@myfun,x0);

After about 24 function evaluations, this example gives the solution

x,resnorm

x =
    0.2578    0.2578

resnorm =
  124.3622

Examine the solution process both as it occurs (by setting the Display option to 'iter') and afterwards (by examining the output structure).

Suppose that you have observation time data xdata and observed response data ydata, and you want to find parameters $x_1$ and $x_2$ to fit a model of the form

$\text{ydata} = x_1 e^{x_2\,\text{xdata}}.$

Input the observation times and responses.

xdata = ...
    [0.9 1.5 13.8 19.8 24.1 28.2 35.2 60.3 74.6 81.3];
ydata = ...
    [455.2 428.6 124.1 67.3 43.2 28.1 13.1 -0.4 -1.3 -1.5];

Create a simple exponential decay model. The model computes a vector of differences between predicted values and observed values.

fun = @(x)x(1)*exp(x(2)*xdata)-ydata; 

Fit the model using the starting point x0 = [100,-1]. Examine the solution process by setting the Display option to 'iter'. Request an output structure to get more information about the solution process.

x0 = [100,-1];
options = optimoptions('lsqnonlin','Display','iter');
[x,resnorm,residual,exitflag,output] = lsqnonlin(fun,x0,[],[],options);
                                       Norm of      First-order
 Iteration  Func-count     f(x)         step         optimality
     0          3         359677                     2.88e+04
Objective function returned Inf; trying a new point...
     1          6         359677       11.6976       2.88e+04
     2          9         321395       0.5           4.97e+04
     3         12         321395       1             4.97e+04
     4         15         292253       0.25          7.06e+04
     5         18         292253       0.5           7.06e+04
     6         21         270350       0.125         1.15e+05
     7         24         270350       0.25          1.15e+05
     8         27         252777       0.0625        1.63e+05
     9         30         252777       0.125         1.63e+05
    10         33         243877       0.03125       7.48e+04
    11         36         243660       0.0625        8.7e+04
    12         39         243276       0.0625        2e+04
    13         42         243174       0.0625        1.14e+04
    14         45         242999       0.125         5.1e+03
    15         48         242661       0.25          2.04e+03
    16         51         241987       0.5           1.91e+03
    17         54         240643       1             1.04e+03
    18         57         237971       2             3.36e+03
    19         60         232686       4             6.04e+03
    20         63         222354       8             1.2e+04
    21         66         202592       16            2.25e+04
    22         69         166443       32            4.05e+04
    23         72         106320       64            6.68e+04
    24         75         28704.7      128           8.31e+04
    25         78         89.7947      140.674       2.22e+04
    26         81         9.57381      2.02599       684
    27         84         9.50489      0.0619927     2.27
    28         87         9.50489      0.000462263   0.0114

Local minimum possible.
lsqnonlin stopped because the final change in the sum of squares relative to its initial value is less than the default value of the function tolerance.

output 
output = struct with fields:
    firstorderopt: 0.0114
       iterations: 28
        funcCount: 87
     cgiterations: 0
        algorithm: 'trust-region-reflective'
         stepsize: 4.6226e-04
          message: 'Local minimum possible....'

For comparison, set the Algorithm option to 'levenberg-marquardt'.

options.Algorithm = 'levenberg-marquardt';
[x,resnorm,residual,exitflag,output] = lsqnonlin(fun,x0,[],[],options);
                                       First-order                Norm of
 Iteration  Func-count    Residual     optimality     Lambda       step
     0          3         359677       2.88e+04       0.01
Objective function returned Inf; trying a new point...
     1         13         340761       3.91e+04       100000      0.280777
     2         16         304661       5.97e+04       10000       0.373146
     3         21         297292       6.55e+04       1e+06       0.0589933
     4         24         288240       7.57e+04       100000      0.0645444
     5         28         275407       1.01e+05       1e+06       0.0741266
     6         31         249954       1.62e+05       100000      0.094571
     7         36         245896       1.35e+05       1e+07       0.0133606
     8         39         243846       7.26e+04       1e+06       0.00944311
     9         42         243568       5.66e+04       100000      0.00821621
    10         45         243424       1.61e+04       10000       0.00777935
    11         48         243322       8.8e+03        1000        0.0673933
    12         51         242408       5.1e+03        100         0.675209
    13         54         233628       1.05e+04       10          6.59804
    14         57         169089       8.51e+04       1           54.6992
    15         60         30814.7      1.54e+05       0.1         196.939
    16         63         147.496      8e+03          0.01        129.795
    17         66         9.51503      117            0.001       9.96069
    18         69         9.50489      0.0714         0.0001      0.080486
    19         72         9.50489      4.91e-05       1e-05       5.07033e-05

Local minimum possible.
lsqnonlin stopped because the relative size of the current step is less than the default value of the step size tolerance.

The 'levenberg-marquardt' algorithm converged with fewer iterations, but almost as many function evaluations:

output 
output = struct with fields:
       iterations: 19
        funcCount: 72
         stepsize: 5.0703e-05
     cgiterations: []
    firstorderopt: 4.9122e-05
        algorithm: 'levenberg-marquardt'
          message: 'Local minimum possible....'

## Input Arguments


fun

Function whose sum of squares is minimized, specified as a function handle or the name of a function. fun is a function that accepts a vector x and returns a vector F, the objective functions evaluated at x. The function fun can be specified as a function handle to a file:

x = lsqnonlin(@myfun,x0)

where myfun is a MATLAB® function such as

function F = myfun(x)
F = ...            % Compute function values at x

fun can also be a function handle for an anonymous function.

x = lsqnonlin(@(x)sin(x.*x),x0);

If the user-defined values for x and F are matrices, they are converted to a vector using linear indexing.

 Note   The sum of squares should not be formed explicitly. Instead, your function should return a vector of function values. See Examples.

If the Jacobian can also be computed and the SpecifyObjectiveGradient option is true, set by

options = optimoptions('lsqnonlin','SpecifyObjectiveGradient',true)

then the function fun must return a second output argument with the Jacobian value J (a matrix) at x. By checking the value of nargout, the function can avoid computing J when fun is called with only one output argument (in the case where the optimization algorithm only needs the value of F but not J).

function [F,J] = myfun(x)
F = ...            % Objective function values at x
if nargout > 1     % Two output arguments
    J = ...        % Jacobian of the function evaluated at x
end

If fun returns a vector (matrix) of m components and x has length n, where n is the length of x0, the Jacobian J is an m-by-n matrix where J(i,j) is the partial derivative of F(i) with respect to x(j). (The Jacobian J is the transpose of the gradient of F.)
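As a concrete sketch, consider a hypothetical two-component residual in one variable (m = 2, n = 1) with its analytic Jacobian:

function [F,J] = myfun(x)
% Hypothetical residuals: F(1) = x - 2, F(2) = x^2 - 4
F = [x - 2; x^2 - 4];
if nargout > 1        % Jacobian requested
    J = [1; 2*x];     % J(i,1) = dF(i)/dx
end

Call it with the user-supplied Jacobian enabled:

options = optimoptions('lsqnonlin','SpecifyObjectiveGradient',true);
x = lsqnonlin(@myfun,1,[],[],options)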

Example: @(x)cos(x).*exp(-x)

Data Types: char | function_handle

x0

Initial point, specified as a real vector or real array. Solvers use the number of elements in, and size of, x0 to determine the number and size of variables that fun accepts.

Example: x0 = [1,2,3,4]

Data Types: double

lb

Lower bounds, specified as a real vector or real array. If the number of elements in x0 is equal to that of lb, then lb specifies that

x(i) >= lb(i) for all i.

If numel(lb) < numel(x0), then lb specifies that

x(i) >= lb(i) for 1 <= i <= numel(lb).

In this case, solvers issue a warning.

Example: To specify that all x-components are positive, lb = zeros(size(x0))

Data Types: double

ub

Upper bounds, specified as a real vector or real array. If the number of elements in x0 is equal to that of ub, then ub specifies that

x(i) <= ub(i) for all i.

If numel(ub) < numel(x0), then ub specifies that

x(i) <= ub(i) for 1 <= i <= numel(ub).

In this case, solvers issue a warning.

Example: To specify that all x-components are less than one, ub = ones(size(x0))

Data Types: double

options

Optimization options, specified as the output of optimoptions or a structure as optimset returns.

Some options apply to all algorithms, and others are relevant for particular algorithms. See Optimization Options Reference for detailed information.

Some options are absent from the optimoptions display. These options are listed in italics. For details, see View Options.

All Algorithms

Algorithm

Choose between 'trust-region-reflective' (default) and 'levenberg-marquardt'.

The Algorithm option specifies a preference for which algorithm to use. It is only a preference, because certain conditions must be met to use each algorithm. For the trust-region-reflective algorithm, the nonlinear system of equations cannot be underdetermined; that is, the number of equations (the number of elements of F returned by fun) must be at least as many as the length of x. The Levenberg-Marquardt algorithm does not handle bound constraints. For more information on choosing the algorithm, see Choosing the Algorithm.

CheckGradients

Compare user-supplied derivatives (gradients of objective or constraints) to finite-differencing derivatives. Choices are false (default) or true.

Diagnostics

Display diagnostic information about the function to be minimized or solved. Choices are 'off' (default) or 'on'.

DiffMaxChange

Maximum change in variables for finite-difference gradients (a positive scalar). The default is Inf.

DiffMinChange

Minimum change in variables for finite-difference gradients (a positive scalar). The default is 0.

Display

Level of display (see Iterative Display):

• 'off' or 'none' displays no output.

• 'iter' displays output at each iteration, and gives the default exit message.

• 'iter-detailed' displays output at each iteration, and gives the technical exit message.

• 'final' (default) displays just the final output, and gives the default exit message.

• 'final-detailed' displays just the final output, and gives the technical exit message.

FiniteDifferenceStepSize

Scalar or vector step size factor for finite differences. When you set FiniteDifferenceStepSize to a vector v, forward finite differences steps delta are

delta = v.*sign′(x).*max(abs(x),TypicalX);

where sign′(x) = sign(x) except sign′(0) = 1. Central finite differences are

delta = v.*max(abs(x),TypicalX);

Scalar FiniteDifferenceStepSize expands to a vector. The default is sqrt(eps) for forward finite differences, and eps^(1/3) for central finite differences.
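A sketch of evaluating the forward-difference formula directly (x, v, and TypicalX are assumed illustrative values):

x = [0; -2; 3];                 % assumed current point
TypicalX = ones(size(x));
v = sqrt(eps)*ones(size(x));    % default forward-difference factor, expanded
signx = sign(x);
signx(signx == 0) = 1;          % sign'(0) = 1
delta = v.*signx.*max(abs(x),TypicalX)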

FiniteDifferenceType

Finite differences, used to estimate gradients, are either 'forward' (default), or 'central' (centered). 'central' takes twice as many function evaluations, but should be more accurate.

The algorithm is careful to obey bounds when estimating both types of finite differences. So, for example, it could take a backward, rather than a forward, difference to avoid evaluating at a point outside bounds.

FunctionTolerance

Termination tolerance on the function value, a positive scalar. The default is 1e-6. See Tolerances and Stopping Criteria.

FunValCheck

Check whether function values are valid. 'on' displays an error when the function returns a value that is complex, Inf, or NaN. The default 'off' displays no error.

MaxFunctionEvaluations

Maximum number of function evaluations allowed, a positive integer. The default is 100*numberOfVariables. See Tolerances and Stopping Criteria and Iterations and Function Counts.

MaxIterations

Maximum number of iterations allowed, a positive integer. The default is 400. See Tolerances and Stopping Criteria and Iterations and Function Counts.

OptimalityTolerance

Termination tolerance on the first-order optimality, a positive scalar. The default is 1e-6. See First-Order Optimality Measure.

OutputFcn

Specify one or more user-defined functions that an optimization function calls at each iteration, either as a function handle or as a cell array of function handles. The default is none ([]). See Output Function.

PlotFcn

Plots various measures of progress while the algorithm executes; select from predefined plots or write your own. Pass a function handle or a cell array of function handles. The default is none ([]):

• @optimplotx plots the current point.

• @optimplotfunccount plots the function count.

• @optimplotfval plots the function value.

• @optimplotresnorm plots the norm of the residuals.

• @optimplotstepsize plots the step size.

• @optimplotfirstorderopt plots the first-order optimality measure.

For information on writing a custom plot function, see Plot Functions.

SpecifyObjectiveGradient

If false (default), the solver approximates the Jacobian using finite differences. If true, the solver uses a user-defined Jacobian (defined in fun), or Jacobian information (when using JacobianMultiplyFcn), for the objective function.

StepTolerance

Termination tolerance on x, a positive scalar. The default is 1e-6. See Tolerances and Stopping Criteria.

TypicalX

Typical x values. The number of elements in TypicalX is equal to the number of elements in x0, the starting point. The default value is ones(numberofvariables,1). The solver uses TypicalX for scaling finite differences for gradient estimation.

UseParallel

When true, the solver estimates gradients in parallel. Disable by setting to the default, false. See Parallel Computing.

Trust-Region-Reflective Algorithm
JacobianMultiplyFcn

Function handle for Jacobian multiply function. For large-scale structured problems, this function computes the Jacobian matrix product J*Y, J'*Y, or J'*(J*Y) without actually forming J. The function is of the form

W = jmfun(Jinfo,Y,flag)

where Jinfo contains the matrix used to compute J*Y (or J'*Y, or J'*(J*Y)). The first argument Jinfo must be the same as the second argument returned by the objective function fun, for example, by

[F,Jinfo] = fun(x)

Y is a matrix that has the same number of rows as there are dimensions in the problem. flag determines which product to compute:

• If flag == 0 then W = J'*(J*Y).

• If flag > 0 then W = J*Y.

• If flag < 0 then W = J'*Y.

In each case, J is not formed explicitly. The solver uses Jinfo to compute the preconditioner. See Passing Extra Parameters for information on how to supply values for any additional parameters jmfun needs.

 Note   'SpecifyObjectiveGradient' must be set to true for the solver to pass Jinfo from fun to jmfun.
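A minimal sketch of a Jacobian multiply function, assuming for illustration that fun returns the Jacobian itself as Jinfo (in practice, Jinfo is usually a compact representation from which the products can be formed cheaply):

function W = jmfun(Jinfo,Y,flag)
% Jinfo is assumed to be J itself here (illustrative shortcut)
if flag == 0
    W = Jinfo'*(Jinfo*Y);   % J'*(J*Y)
elseif flag > 0
    W = Jinfo*Y;            % J*Y
else
    W = Jinfo'*Y;           % J'*Y
end

Pass it with the required option settings:

options = optimoptions('lsqnonlin', ...
    'JacobianMultiplyFcn',@jmfun,'SpecifyObjectiveGradient',true);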

JacobPattern

Sparsity pattern of the Jacobian for finite differencing. Set JacobPattern(i,j) = 1 when fun(i) depends on x(j). Otherwise, set JacobPattern(i,j) = 0. In other words, JacobPattern(i,j) = 1 when you can have ∂fun(i)/∂x(j) ≠ 0.

Use JacobPattern when it is inconvenient to compute the Jacobian matrix J in fun, though you can determine (say, by inspection) when fun(i) depends on x(j). The solver can approximate J via sparse finite differences when you give JacobPattern.

If the structure is unknown, do not set JacobPattern. The default behavior is as if JacobPattern is a dense matrix of ones. Then the solver computes a full finite-difference approximation in each iteration. This can be expensive for large problems, so it is usually better to determine the sparsity structure.
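For example, if each fun(i) depends only on x(i) and x(i+1) (a hypothetical banded structure), the pattern can be supplied as a sparse matrix:

m = 9;                                        % number of residual components (assumed)
n = 10;                                       % number of variables (assumed)
JacobPattern = spdiags(ones(m,2),[0 1],m,n);  % nonzeros where fun(i) depends on x(j)
options = optimoptions('lsqnonlin','JacobPattern',JacobPattern);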

MaxPCGIter

Maximum number of PCG (preconditioned conjugate gradient) iterations, a positive scalar. The default is max(1,numberOfVariables/2). For more information, see Large Scale Nonlinear Least Squares.

PrecondBandWidth

Upper bandwidth of preconditioner for PCG, a nonnegative integer. The default PrecondBandWidth is Inf, which means a direct factorization (Cholesky) is used rather than the conjugate gradients (CG). The direct factorization is computationally more expensive than CG, but produces a better quality step towards the solution. Set PrecondBandWidth to 0 for diagonal preconditioning (upper bandwidth of 0). For some problems, an intermediate bandwidth reduces the number of PCG iterations.

SubproblemAlgorithm

Determines how the iteration step is calculated. The default, 'factorization', takes a slower but more accurate step than 'cg'. See Trust-Region-Reflective Least Squares.

TolPCG

Termination tolerance on the PCG iteration, a positive scalar. The default is 0.1.

Levenberg-Marquardt Algorithm
InitDamping

Initial value of the Levenberg-Marquardt parameter, a positive scalar. Default is 1e-2. For details, see Levenberg-Marquardt Method.

ScaleProblem

'jacobian' can sometimes improve the convergence of a poorly scaled problem; the default is 'none'.

Example: options = optimoptions('lsqnonlin','FiniteDifferenceType','central')

problem

Problem structure, specified as a structure with the following fields:

| Field Name | Entry |
| --- | --- |
| objective | Objective function |
| x0 | Initial point for x |
| lb | Vector of lower bounds |
| ub | Vector of upper bounds |
| solver | 'lsqnonlin' |
| options | Options created with optimoptions |

You must supply at least the objective, x0, solver, and options fields in the problem structure.

The simplest way of obtaining a problem structure is to export the problem from the Optimization app.

Data Types: struct
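Alternatively, here is a sketch of assembling the structure directly (reusing fun, xdata, and ydata from the earlier exponential-decay example):

problem.objective = fun;      % e.g., @(x)x(1)*exp(x(2)*xdata)-ydata
problem.x0 = [100,-1];
problem.solver = 'lsqnonlin';
problem.options = optimoptions('lsqnonlin');
x = lsqnonlin(problem)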

## Output Arguments


x

Solution, returned as a real vector or real array. The size of x is the same as the size of x0. Typically, x is a local solution to the problem when exitflag is positive. For information on the quality of the solution, see When the Solver Succeeds.

resnorm

Squared norm of the residual, returned as a nonnegative real. resnorm is the squared 2-norm of the residual at x: sum(fun(x).^2).

residual

Value of objective function at solution, returned as a vector. In general, residual = fun(x).

exitflag

Reason the solver stopped, returned as an integer.

| exitflag | Meaning |
| --- | --- |
| 1 | Function converged to a solution x. |
| 2 | Change in x was less than the specified tolerance. |
| 3 | Change in the residual was less than the specified tolerance. |
| 4 | Magnitude of search direction was smaller than the specified tolerance. |
| 0 | Number of iterations exceeded options.MaxIterations or number of function evaluations exceeded options.MaxFunctionEvaluations. |
| -1 | Output function terminated the algorithm. |
| -2 | Problem is infeasible: the bounds lb and ub are inconsistent. |

output

Information about the optimization process, returned as a structure with fields:

| Field | Description |
| --- | --- |
| firstorderopt | Measure of first-order optimality |
| iterations | Number of iterations taken |
| funcCount | Number of function evaluations |
| cgiterations | Total number of PCG iterations (trust-region-reflective algorithm only) |
| stepsize | Final displacement in x |
| algorithm | Optimization algorithm used |
| message | Exit message |

lambda

Lagrange multipliers at the solution, returned as a structure with fields:

| Field | Description |
| --- | --- |
| lower | Multipliers associated with the lower bounds lb |
| upper | Multipliers associated with the upper bounds ub |

jacobian

Jacobian at the solution, returned as a real matrix. jacobian(i,j) is the partial derivative of fun(i) with respect to x(j) at the solution x.

## Limitations

• The Levenberg-Marquardt algorithm does not handle bound constraints.

• The trust-region-reflective algorithm does not solve underdetermined systems; it requires that the number of equations, i.e., the row dimension of F, be at least as great as the number of variables. In the underdetermined case, lsqnonlin uses the Levenberg-Marquardt algorithm.

Since the trust-region-reflective algorithm does not handle underdetermined systems and the Levenberg-Marquardt does not handle bound constraints, problems that have both of these characteristics cannot be solved by lsqnonlin.

• lsqnonlin can solve complex-valued problems directly with the levenberg-marquardt algorithm. However, this algorithm does not accept bound constraints. For a complex problem with bound constraints, split the variables into real and imaginary parts, and use the trust-region-reflective algorithm. See Fit a Model to Complex-Valued Data.

• The preconditioner computation used in the preconditioned conjugate gradient part of the trust-region-reflective method forms $J^{T}J$ (where $J$ is the Jacobian matrix) before computing the preconditioner. Therefore, a row of $J$ with many nonzeros, which results in a nearly dense product $J^{T}J$, can lead to a costly solution process for large problems.

• If components of x have no upper (or lower) bounds, lsqnonlin prefers that the corresponding components of ub (or lb) be set to inf (or -inf for lower bounds) as opposed to an arbitrary but very large positive (or negative for lower bounds) number.

• You can use the trust-region-reflective algorithm in lsqnonlin, lsqcurvefit, and fsolve with small- to medium-scale problems without computing the Jacobian in fun or providing the Jacobian sparsity pattern. (This also applies to using fmincon or fminunc without computing the Hessian or supplying the Hessian sparsity pattern.) How small is small- to medium-scale? No absolute answer is available, because it depends on the amount of virtual memory in your computer system configuration.

Suppose your problem has m equations and n unknowns. If the command J = sparse(ones(m,n)) causes an Out of memory error on your machine, then this is certainly too large a problem. If it does not result in an error, the problem might still be too large. You can find out only by running it and seeing if MATLAB runs within the amount of virtual memory available on your system.
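A quick sketch of that test for a hypothetical problem size:

m = 5000;                      % number of equations (assumed)
n = 2000;                      % number of unknowns (assumed)
try
    J = sparse(ones(m,n));     % dense pattern; this is the memory bottleneck
    clear J
    disp('A dense finite-difference Jacobian may fit in memory.')
catch
    disp('Problem is too large for a dense finite-difference Jacobian.')
end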


### Algorithms

The Levenberg-Marquardt and trust-region-reflective methods are based on the nonlinear least-squares algorithms also used in fsolve.

• The default trust-region-reflective algorithm is a subspace trust-region method and is based on the interior-reflective Newton method described in [1] and [2]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region-Reflective Least Squares.

• The Levenberg-Marquardt method is described in references [4], [5], and [6]. See Levenberg-Marquardt Method.

## References

[1] Coleman, T.F. and Y. Li. "An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds." SIAM Journal on Optimization, Vol. 6, 1996, pp. 418–445.

[2] Coleman, T.F. and Y. Li. "On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds." Mathematical Programming, Vol. 67, Number 2, 1994, pp. 189–224.

[3] Dennis, J. E. Jr. "Nonlinear Least-Squares." State of the Art in Numerical Analysis, ed. D. Jacobs, Academic Press, pp. 269–312.

[4] Levenberg, K. "A Method for the Solution of Certain Problems in Least-Squares." Quarterly Applied Mathematics 2, 1944, pp. 164–168.

[5] Marquardt, D. "An Algorithm for Least-squares Estimation of Nonlinear Parameters." SIAM Journal Applied Mathematics, Vol. 11, 1963, pp. 431–441.

[6] Moré, J. J. "The Levenberg-Marquardt Algorithm: Implementation and Theory." Numerical Analysis, ed. G. A. Watson, Lecture Notes in Mathematics 630, Springer Verlag, 1977, pp. 105–116.

[7] Moré, J. J., B. S. Garbow, and K. E. Hillstrom. User Guide for MINPACK 1. Argonne National Laboratory, Rept. ANL–80–74, 1980.

[8] Powell, M. J. D. "A Fortran Subroutine for Solving Systems of Nonlinear Algebraic Equations." Numerical Methods for Nonlinear Algebraic Equations, P. Rabinowitz, ed., Ch.7, 1970.