
This example shows how to solve a nonlinear least-squares problem in two ways. The example first solves the problem without using a Jacobian function. Then it shows how to include a Jacobian, and illustrates the resulting improved efficiency.

The problem has 10 terms with two unknowns: find *x*, a two-dimensional vector, that minimizes

$$\sum _{k=1}^{10}{\left(2+2k-{e}^{k{x}_{1}}-{e}^{k{x}_{2}}\right)}^{2},$$

starting at the point `x0 = [0.3,0.4]`.

Because `lsqnonlin` assumes that the sum of squares is not explicitly formed in the user function, the function passed to `lsqnonlin` must compute the vector-valued function

$${F}_{k}(x)=2+2k-{e}^{k{x}_{1}}-{e}^{k{x}_{2}},$$

for *k* = 1 to 10 (that is, *F* must have 10 components).
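For readers following along outside MATLAB, the residual vector can be sketched in Python with NumPy (a cross-check of this example, not part of the original code; the function name `residuals` is illustrative):

```python
import numpy as np

def residuals(x):
    """Vector-valued objective: F_k(x) = 2 + 2k - e^(k*x1) - e^(k*x2), k = 1..10."""
    k = np.arange(1, 11)
    return 2 + 2*k - np.exp(k*x[0]) - np.exp(k*x[1])

x0 = np.array([0.3, 0.4])
F = residuals(x0)
print(F.shape)   # → (10,)  one component per term of the sum of squares
```

The solver squares and sums these components internally, so the user function returns the vector *F*, not the scalar sum.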

The helper function `myfun`, defined at the end of this example, implements the vector-valued objective function with no derivative information. Solve the minimization starting from the point `x0`.

```
x0 = [0.3,0.4]; % Starting guess
[x,resnorm,res,eflag,output] = lsqnonlin(@myfun,x0); % Invoke optimizer
```

Local minimum possible. lsqnonlin stopped because the size of the current step is less than the value of the step size tolerance.

Examine the solution and number of function evaluations.

```
disp(x)
```

    0.2578    0.2578

```
disp(resnorm)
```

    124.3622

```
disp(output.funcCount)
```

    72

The objective function is simple enough that you can calculate its Jacobian. Following the definition in Jacobians of Vector Functions, a Jacobian function represents the matrix

$${J}_{kj}(x)=\frac{\partial {F}_{k}(x)}{\partial {x}_{j}}.$$

Here, $${F}_{k}(x)$$ is the *k*th component of the objective function. This example has

$${F}_{k}(x)=2+2k-{e}^{k{x}_{1}}-{e}^{k{x}_{2}},$$

so

$$\begin{array}{l}{J}_{k1}(x)=-k{e}^{k{x}_{1}}\\ {J}_{k2}(x)=-k{e}^{k{x}_{2}}.\end{array}$$
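The analytic entries above can be verified against a central finite-difference approximation. A quick numerical sketch in Python (assumed equivalents of the MATLAB expressions, not part of the original example):

```python
import numpy as np

def F(x):
    k = np.arange(1, 11)
    return 2 + 2*k - np.exp(k*x[0]) - np.exp(k*x[1])

def jacobian(x):
    """Analytic Jacobian: J_kj = dF_k/dx_j."""
    k = np.arange(1, 11)
    J = np.empty((10, 2))
    J[:, 0] = -k * np.exp(k*x[0])   # dF_k/dx_1
    J[:, 1] = -k * np.exp(k*x[1])   # dF_k/dx_2
    return J

# Central differences agree with the analytic formulas to roughly O(h^2).
x = np.array([0.3, 0.4])
h = 1e-6
J_fd = np.empty((10, 2))
for j in range(2):
    e = np.zeros(2); e[j] = h
    J_fd[:, j] = (F(x + e) - F(x - e)) / (2*h)
print(np.max(np.abs(jacobian(x) - J_fd)))   # small, on the order of 1e-7 or below
```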

The helper function `myfun2`, defined at the end of this example, implements the objective function with the Jacobian. Set options so the solver uses the Jacobian.

```
opts = optimoptions(@lsqnonlin,'SpecifyObjectiveGradient',true);
```

Run the solver.

```
lb = []; % No bounds
ub = [];
[x2,resnorm2,res2,eflag2,output2] = lsqnonlin(@myfun2,x0,lb,ub,opts);
```

Local minimum possible. lsqnonlin stopped because the size of the current step is less than the value of the step size tolerance.

The solution is the same as the previous solution.

```
disp(x2)
```

    0.2578    0.2578

```
disp(resnorm2)
```

    124.3622

The advantage of supplying a Jacobian is that the solver requires far fewer function evaluations.

```
disp(output2.funcCount)
```

    24

This code creates the `myfun` helper function.

```
function F = myfun(x)
k = 1:10;
F = 2 + 2*k - exp(k*x(1)) - exp(k*x(2));
end
```

This code creates the `myfun2` helper function.

```
function [F,J] = myfun2(x)
k = 1:10;
F = 2 + 2*k - exp(k*x(1)) - exp(k*x(2));
if nargout > 1   % Compute the Jacobian only when the solver requests it
    J = zeros(10,2);
    J(k,1) = -k.*exp(k*x(1));
    J(k,2) = -k.*exp(k*x(2));
end
end
```
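As a cross-check outside MATLAB, the same problem can be solved with SciPy's `least_squares`, which likewise accepts an optional analytic Jacobian (a sketch under the assumption that `lsqnonlin` and `least_squares` minimize the same sum of squares; the two solvers use related but not identical algorithms, so evaluation counts differ):

```python
import numpy as np
from scipy.optimize import least_squares

def F(x):
    k = np.arange(1, 11)
    return 2 + 2*k - np.exp(k*x[0]) - np.exp(k*x[1])

def J(x):
    k = np.arange(1, 11)
    return np.column_stack((-k * np.exp(k*x[0]), -k * np.exp(k*x[1])))

x0 = [0.3, 0.4]
sol_fd  = least_squares(F, x0)          # Jacobian from finite differences
sol_jac = least_squares(F, x0, jac=J)   # analytic Jacobian

# Both runs should reach x close to [0.2578, 0.2578] with a residual norm
# (2 * cost, since SciPy's cost is half the sum of squares) close to the
# MATLAB value 124.3622.
print(sol_jac.x)
print(2 * sol_jac.cost)
```

Supplying `jac=J` spares the solver the extra residual evaluations it would otherwise spend on finite-difference approximations, mirroring the reduced `funcCount` seen above.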