# Nonlinear Constraints with Gradients

This example shows how to solve a nonlinear problem with nonlinear constraints using derivative information.

Ordinarily, minimization routines use numerical gradients calculated by finite-difference approximation. This procedure systematically perturbs each variable in order to calculate function and constraint partial derivatives. Alternatively, you can provide a function to compute partial derivatives analytically. Typically, when you provide derivative information, solvers work more accurately and efficiently.
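As a rough sketch, the finite-difference approach perturbs one variable at a time. For this example's objective, a central-difference approximation looks like the following (the step size `h` is an illustrative choice here, not the step that `fmincon` actually uses):

```matlab
% Sketch: central-difference gradient approximation, the kind of
% calculation a solver performs when you do not supply gradients.
f = @(x) exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
x = [-1, 1];   % point at which to approximate the gradient
h = 1e-6;      % perturbation size (illustrative choice)
g = zeros(1, 2);
for k = 1:2
    e = zeros(1, 2);
    e(k) = h;                             % perturb one variable at a time
    g(k) = (f(x + e) - f(x - e)) / (2*h); % central difference
end
disp(g)
```

Each gradient component costs two extra objective evaluations, which is why supplying analytic derivatives reduces the total function-evaluation count.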

### Objective Function and Nonlinear Constraint

The problem is to solve

$$\underset{x}{\mathrm{min}}f(x)={e}^{{x}_{1}}\left(4{x}_{1}^{2}+2{x}_{2}^{2}+4{x}_{1}{x}_{2}+2{x}_{2}+1\right),$$

subject to the constraints

$$\begin{array}{l}{x}_{1}{x}_{2}-{x}_{1}-{x}_{2}\le -1.5\\ {x}_{1}{x}_{2}\ge -10.\end{array}$$

Because the `fmincon` solver expects the constraints to be written in the form $$c(x)\le 0$$, write your constraint function to return the following value:

$\mathit{c}\left(\mathit{x}\right)=\left[\begin{array}{c}{\mathit{x}}_{1}{\mathit{x}}_{2}-{\mathit{x}}_{1}-{\mathit{x}}_{2}+1.5\\ -10-{\mathit{x}}_{1}{\mathit{x}}_{2}\end{array}\right]$.

### Objective Function with Gradient

The objective function is

$$f(x)={e}^{{x}_{1}}\left(4{x}_{1}^{2}+2{x}_{2}^{2}+4{x}_{1}{x}_{2}+2{x}_{2}+1\right)$$.

Compute the gradient of $$f(x)$$ with respect to the variables $${x}_{1}$$ and $${x}_{2}$$.

$\nabla \mathit{f}\left(\mathit{x}\right)=\left[\begin{array}{c}\mathit{f}\left(\mathit{x}\right)+\mathrm{exp}\left({\mathit{x}}_{1}\right)\left(8{\mathit{x}}_{1}+4{\mathit{x}}_{2}\right)\\ \mathrm{exp}\left({\mathit{x}}_{1}\right)\left(4{\mathit{x}}_{1}+4{\mathit{x}}_{2}+2\right)\end{array}\right]$.

The `objfungrad` helper function at the end of this example returns both the objective function $$f(x)$$ and its gradient in the second output `gradf`. Set `@objfungrad` as the objective.

```matlab
fun = @objfungrad;
```

### Constraint Function with Gradient

The helper function `confungrad` is the nonlinear constraint function; it appears at the end of this example.

In the derivative information for the inequality constraints, each column of the gradient matrix corresponds to one constraint. In other words, the gradient of the constraints has the following format:

$$\left[\begin{array}{cc}\frac{\partial {c}_{1}}{\partial {x}_{1}}& \frac{\partial {c}_{2}}{\partial {x}_{1}}\\ \frac{\partial {c}_{1}}{\partial {x}_{2}}& \frac{\partial {c}_{2}}{\partial {x}_{2}}\end{array}\right]=\left[\begin{array}{cc}{x}_{2}-1& -{x}_{2}\\ {x}_{1}-1& -{x}_{1}\end{array}\right].$$

Set `@confungrad` as the nonlinear constraint function.

```matlab
nonlcon = @confungrad;
```
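Because this columns-correspond-to-constraints layout is the transpose of the usual Jacobian convention, it is easy to get wrong. As a quick sanity check (not part of the solution), you can compare the analytic `DC` returned by the `confungrad` helper defined at the end of this example against a finite-difference Jacobian at a test point:

```matlab
% Sanity check: DC(:,j) should hold the gradient of constraint c(j).
x = [-1, 1];                  % arbitrary test point
[c, ~, DC] = confungrad(x);   % analytic constraint gradients
h = 1e-6;                     % finite-difference step (illustrative)
DCfd = zeros(2, 2);
for j = 1:2                   % loop over constraints (columns)
    for k = 1:2               % loop over variables (rows)
        e = zeros(1, 2);
        e(k) = h;
        cp = confungrad(x + e);
        cm = confungrad(x - e);
        DCfd(k, j) = (cp(j) - cm(j)) / (2*h);
    end
end
disp(max(abs(DC(:) - DCfd(:))))   % should be near zero
```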

### Set Options to Use Derivative Information

Indicate to the `fmincon` solver that the objective and constraint functions provide derivative information. To do so, use `optimoptions` to set the `SpecifyObjectiveGradient` and `SpecifyConstraintGradient` option values to `true`.

```matlab
options = optimoptions('fmincon',...
    'SpecifyObjectiveGradient',true,'SpecifyConstraintGradient',true);
```

### Solve Problem

Set the initial point to `[-1,1]`.

```matlab
x0 = [-1,1];
```

The problem has no bounds or linear constraints, so set those argument values to `[]`.

```matlab
A = [];
b = [];
Aeq = [];
beq = [];
lb = [];
ub = [];
```

Call `fmincon` to solve the problem.

```matlab
[x,fval] = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
```

```
Local minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.

x = 1×2

   -9.5473    1.0474

fval = 0.0236
```

The solution is the same as in the example Nonlinear Inequality Constraints, which solves the problem without using derivative information. The advantage of using derivatives is that solving the problem takes fewer function evaluations and is often more robust, although the advantage is not obvious in this small example. Using even more derivative information, as in fmincon Interior-Point Algorithm with Analytic Hessian, gives further benefits, such as fewer solver iterations.
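One way to see the savings concretely is to compare the `funcCount` field of the `fmincon` output structure with and without the gradient options (a sketch, assuming the helper functions below are on the path):

```matlab
% Sketch: compare function-evaluation counts with and without
% user-supplied gradients by requesting the fourth (output) argument.
optsGrad = optimoptions('fmincon','SpecifyObjectiveGradient',true, ...
    'SpecifyConstraintGradient',true);
optsFD = optimoptions('fmincon');   % defaults: finite-difference gradients
[~,~,~,outGrad] = fmincon(@objfungrad,[-1,1],[],[],[],[],[],[], ...
    @confungrad,optsGrad);
[~,~,~,outFD] = fmincon(@objfungrad,[-1,1],[],[],[],[],[],[], ...
    @confungrad,optsFD);
fprintf('funcCount with gradients: %d, without: %d\n', ...
    outGrad.funcCount, outFD.funcCount)
```

With finite differences, each iteration spends extra evaluations perturbing the variables, so `outFD.funcCount` is typically the larger of the two.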

### Helper Functions

This code creates the `objfungrad` helper function.

```matlab
function [f,gradf] = objfungrad(x)
f = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);
% Gradient of the objective function:
if nargout > 1
    gradf = [f + exp(x(1))*(8*x(1) + 4*x(2));
             exp(x(1))*(4*x(1) + 4*x(2) + 2)];
end
end
```

This code creates the `confungrad` helper function.

```matlab
function [c,ceq,DC,DCeq] = confungrad(x)
c(1) = 1.5 + x(1)*x(2) - x(1) - x(2); % Inequality constraints
c(2) = -x(1)*x(2) - 10;
% No nonlinear equality constraints
ceq = [];
% Gradient of the constraints:
if nargout > 2
    DC = [x(2)-1, -x(2);
          x(1)-1, -x(1)];
    DCeq = [];
end
end
```