Solve linear programming problems
Linear programming solver
Finds the minimum of a problem specified by
$$\underset{x}{\mathrm{min}}\ {f}^{T}x\quad\text{such that}\quad\begin{cases}A\cdot x\le b,\\ Aeq\cdot x=beq,\\ lb\le x\le ub.\end{cases}$$
f, x, b, beq, lb, and ub are vectors, and A and Aeq are matrices.
linprog applies only to the solver-based approach. For a discussion of the two optimization approaches, see First Choose Problem-Based or Solver-Based Approach.
x = linprog(problem) finds the minimum for problem, where problem is a structure described in Input Arguments.
Create the problem structure by exporting a problem from the Optimization app, as described in Exporting Your Work. You can import a problem structure from an MPS file using mpsread. You can also create a problem structure from an OptimizationProblem object by using prob2struct.
Solve a simple linear program defined by linear inequalities.
For this example, use these linear inequality constraints:
$$x(1)+x(2)\le 2$$
$$x(1)+x(2)/4\le 1$$
$$x(1)-x(2)\le 2$$
$$-x(1)/4-x(2)\le 1$$
$$-x(1)-x(2)\le -1$$
$$-x(1)+x(2)\le 2.$$
A = [1 1
    1 1/4
    1 -1
    -1/4 -1
    -1 -1
    -1 1];

b = [2 1 2 1 -1 2];
Use the objective function $$-x(1)-x(2)/3$$.
f = [-1 -1/3];
Solve the linear program.
x = linprog(f,A,b)
Optimal solution found.
x = 2×1
0.6667
1.3333
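As a cross-check outside MATLAB, the same LP can be reproduced with SciPy's LP solver (a stand-in, not part of Optimization Toolbox). One gotcha worth noting: scipy.optimize.linprog defaults every variable to x >= 0, so free variables must be stated explicitly to match MATLAB's unbounded default.

```python
# Hypothetical cross-check of the example above using SciPy's LP solver.
from scipy.optimize import linprog

f = [-1, -1/3]                 # minimize -x1 - x2/3
A = [[ 1,    1],               #  x1 + x2     <= 2
     [ 1,  1/4],               #  x1 + x2/4   <= 1
     [ 1,   -1],               #  x1 - x2     <= 2
     [-1/4, -1],               # -x1/4 - x2   <= 1
     [-1,   -1],               # -x1 - x2     <= -1
     [-1,    1]]               # -x1 + x2     <= 2
b = [2, 1, 2, 1, -1, 2]

# SciPy bounds default to (0, None); make both variables free
# to match MATLAB's linprog, which leaves variables unbounded.
res = linprog(f, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
print(res.x)   # about [0.6667, 1.3333], matching the MATLAB output
```

The optimum is the unique vertex where the first two inequalities are active, so any LP method lands on the same point.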
Solve a simple linear program defined by linear inequalities and linear equalities.
For this example, use these linear inequality constraints:
$$x(1)+x(2)\le 2$$
$$x(1)+x(2)/4\le 1$$
$$x(1)-x(2)\le 2$$
$$-x(1)/4-x(2)\le 1$$
$$-x(1)-x(2)\le -1$$
$$-x(1)+x(2)\le 2.$$
A = [1 1
    1 1/4
    1 -1
    -1/4 -1
    -1 -1
    -1 1];

b = [2 1 2 1 -1 2];
Use the linear equality constraint $$x(1)+x(2)/4=1/2$$.
Aeq = [1 1/4]; beq = 1/2;
Use the objective function $$-x(1)-x(2)/3$$.
f = [-1 -1/3];
Solve the linear program.
x = linprog(f,A,b,Aeq,beq)
Optimal solution found.
x = 2×1
0
2
Solve a simple linear program with linear inequalities, linear equalities, and bounds.
For this example, use these linear inequality constraints:
$$x(1)+x(2)\le 2$$
$$x(1)+x(2)/4\le 1$$
$$x(1)-x(2)\le 2$$
$$-x(1)/4-x(2)\le 1$$
$$-x(1)-x(2)\le -1$$
$$-x(1)+x(2)\le 2.$$
A = [1 1
    1 1/4
    1 -1
    -1/4 -1
    -1 -1
    -1 1];

b = [2 1 2 1 -1 2];
Use the linear equality constraint $$x(1)+x(2)/4=1/2$$.
Aeq = [1 1/4]; beq = 1/2;
Set these bounds:
$$-1\le x(1)\le 1.5$$
$$-0.5\le x(2)\le 1.25.$$
lb = [-1,-0.5]; ub = [1.5,1.25];
Use the objective function $$-x(1)-x(2)/3$$.
f = [-1 -1/3];
Solve the linear program.
x = linprog(f,A,b,Aeq,beq,lb,ub)
Optimal solution found.
x = 2×1
0.1875
1.2500
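The full problem, with inequalities, the equality, and bounds together, can also be sketched as a hedged cross-check in Python with scipy.optimize.linprog (a stand-in solver, not part of the toolbox):

```python
# Hypothetical SciPy reproduction of the inequality + equality + bounds example.
from scipy.optimize import linprog

f = [-1, -1/3]
A = [[1, 1], [1, 1/4], [1, -1], [-1/4, -1], [-1, -1], [-1, 1]]
b = [2, 1, 2, 1, -1, 2]
Aeq = [[1, 1/4]]               # x1 + x2/4 == 1/2
beq = [1/2]
bounds = [(-1, 1.5),           # -1   <= x1 <= 1.5
          (-0.5, 1.25)]        # -0.5 <= x2 <= 1.25

res = linprog(f, A_ub=A, b_ub=b, A_eq=Aeq, b_eq=beq, bounds=bounds)
print(res.x, res.fun)   # about [0.1875, 1.25], objective about -0.6042
```

On the equality line the objective decreases as x2 grows, so the solution pins x2 at its upper bound 1.25, which forces x1 = 1/2 - 1.25/4 = 0.1875.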
Solve a linear program using the 'interior-point' algorithm.
For this example, use these linear inequality constraints:
$$x(1)+x(2)\le 2$$
$$x(1)+x(2)/4\le 1$$
$$x(1)-x(2)\le 2$$
$$-x(1)/4-x(2)\le 1$$
$$-x(1)-x(2)\le -1$$
$$-x(1)+x(2)\le 2.$$
A = [1 1
    1 1/4
    1 -1
    -1/4 -1
    -1 -1
    -1 1];

b = [2 1 2 1 -1 2];
Use the linear equality constraint $$x(1)+x(2)/4=1/2$$.
Aeq = [1 1/4]; beq = 1/2;
Set these bounds:
$$-1\le x(1)\le 1.5$$
$$-0.5\le x(2)\le 1.25.$$
lb = [-1,-0.5]; ub = [1.5,1.25];
Use the objective function $$-x(1)-x(2)/3$$.
f = [-1 -1/3];
Set options to use the 'interior-point' algorithm.
options = optimoptions('linprog','Algorithm','interior-point');
Solve the linear program using the 'interior-point' algorithm.
x = linprog(f,A,b,Aeq,beq,lb,ub,options)
Minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the selected value of the function tolerance, and constraints are satisfied to within the selected value of the constraint tolerance.
x = 2×1
0.1875
1.2500
This example shows how to set up a problem using the problem-based approach and then solve it using the solver-based approach. The problem is
$$\underset{x}{\mathrm{max}}(x+y/3)\quad\text{subject to}\quad\begin{cases}x+y\le 2\\ x+y/4\le 1\\ x-y\le 2\\ x/4+y\ge -1\\ x+y\ge 1\\ -x+y\le 2\\ x+y/4=1/2\\ -1\le x\le 1.5\\ -1/2\le y\le 1.25\end{cases}$$
Create an OptimizationProblem
object named prob
to represent this problem.
x = optimvar('x','LowerBound',-1,'UpperBound',1.5);
y = optimvar('y','LowerBound',-1/2,'UpperBound',1.25);
prob = optimproblem('Objective',x + y/3,'ObjectiveSense','max');
prob.Constraints.c1 = x + y <= 2;
prob.Constraints.c2 = x + y/4 <= 1;
prob.Constraints.c3 = x - y <= 2;
prob.Constraints.c4 = x/4 + y >= -1;
prob.Constraints.c5 = x + y >= 1;
prob.Constraints.c6 = -x + y <= 2;
prob.Constraints.c7 = x + y/4 == 1/2;
Convert the problem object to a problem structure.
problem = prob2struct(prob);
Solve the resulting problem structure.
[sol,fval,exitflag,output] = linprog(problem)
Optimal solution found.
sol = 2×1
0.1875
1.2500
fval = -0.6042
exitflag = 1
output = struct with fields:
iterations: 0
constrviolation: 0
message: 'Optimal solution found.'
algorithm: 'dual-simplex'
firstorderopt: 0
The returned fval
is negative, even though the solution components are positive. Internally, prob2struct
turns the maximization problem into a minimization problem of the negative of the objective function. See Maximizing an Objective.
Which component of sol
corresponds to which optimization variable? Examine the Variables
property of prob
.
prob.Variables
ans = struct with fields:
x: [1x1 optim.problemdef.OptimizationVariable]
y: [1x1 optim.problemdef.OptimizationVariable]
As you might expect, sol(1)
corresponds to x
, and sol(2)
corresponds to y
. See Algorithms.
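The sign flip that prob2struct performs can be reproduced by hand: to maximize with a minimizer, negate the coefficient vector, solve, then negate the reported optimal value. A hedged sketch in Python, using scipy.optimize.linprog as a stand-in minimizer for the same problem:

```python
# Maximize x + y/3 by minimizing its negation, then undo the sign flip.
from scipy.optimize import linprog

f_max = [1, 1/3]                    # objective to maximize
f_min = [-c for c in f_max]         # what the minimizer actually sees

A = [[1, 1], [1, 1/4], [1, -1], [-1/4, -1], [-1, -1], [-1, 1]]
b = [2, 1, 2, 1, -1, 2]
res = linprog(f_min, A_ub=A, b_ub=b,
              A_eq=[[1, 1/4]], b_eq=[1/2],
              bounds=[(-1, 1.5), (-1/2, 1.25)])

# res.fun is the negative value (about -0.6042), as linprog reports;
# negating it recovers the true maximum.
max_value = -res.fun
print(res.x, max_value)   # about [0.1875, 1.25], 0.6042
```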
Calculate the solution and objective function value for a simple linear program.
The inequality constraints are
$$x(1)+x(2)\le 2$$
$$x(1)+x(2)/4\le 1$$
$$x(1)-x(2)\le 2$$
$$-x(1)/4-x(2)\le 1$$
$$-x(1)-x(2)\le -1$$
$$-x(1)+x(2)\le 2.$$
A = [1 1
    1 1/4
    1 -1
    -1/4 -1
    -1 -1
    -1 1];

b = [2 1 2 1 -1 2];
The objective function is $$-x(1)-x(2)/3$$.
f = [-1 -1/3];
Solve the problem and return the objective function value.
[x,fval] = linprog(f,A,b)
Optimal solution found.
x = 2×1
0.6667
1.3333
fval = -1.1111
Obtain the exit flag and output structure to better understand the solution process and quality.
For this example, use these linear inequality constraints:
$$x(1)+x(2)\le 2$$
$$x(1)+x(2)/4\le 1$$
$$x(1)-x(2)\le 2$$
$$-x(1)/4-x(2)\le 1$$
$$-x(1)-x(2)\le -1$$
$$-x(1)+x(2)\le 2.$$
A = [1 1
    1 1/4
    1 -1
    -1/4 -1
    -1 -1
    -1 1];

b = [2 1 2 1 -1 2];
Use the linear equality constraint $$x(1)+x(2)/4=1/2$$.
Aeq = [1 1/4]; beq = 1/2;
Set these bounds:
$$-1\le x(1)\le 1.5$$
$$-0.5\le x(2)\le 1.25.$$
lb = [-1,-0.5]; ub = [1.5,1.25];
Use the objective function $$-x(1)-x(2)/3$$.
f = [-1 -1/3];
Set options to use the 'dual-simplex' algorithm.
options = optimoptions('linprog','Algorithm','dual-simplex');
Solve the linear program and request the function value, exit flag, and output structure.
[x,fval,exitflag,output] = linprog(f,A,b,Aeq,beq,lb,ub,options)
Optimal solution found.
x = 2×1
0.1875
1.2500
fval = -0.6042
exitflag = 1
output = struct with fields:
iterations: 0
constrviolation: 0
message: 'Optimal solution found.'
algorithm: 'dual-simplex'
firstorderopt: 0
fval, the objective function value, is larger than the value in Return the Objective Function Value, because there are more constraints.
exitflag
= 1 indicates that the solution is reliable.
output.iterations
= 0 indicates that linprog
found the solution during presolve, and did not have to iterate at all.
Solve a simple linear program and examine the solution and the Lagrange multipliers.
Use the objective function
$$f(x)=-5{x}_{1}-4{x}_{2}-6{x}_{3}.$$
f = [-5; -4; -6];
Use the linear inequality constraints
$${x}_{1}-{x}_{2}+{x}_{3}\le 20$$
$$3{x}_{1}+2{x}_{2}+4{x}_{3}\le 42$$
$$3{x}_{1}+2{x}_{2}\le 30.$$
A = [1 -1 1
    3 2 4
    3 2 0];

b = [20;42;30];
Constrain all variables to be positive:
$${x}_{1}\ge 0$$
$${x}_{2}\ge 0$$
$${x}_{3}\ge 0.$$
lb = zeros(3,1);
Set Aeq
and beq
to []
, indicating that there are no linear equality constraints.
Aeq = []; beq = [];
Call linprog
, obtaining the Lagrange multipliers.
[x,fval,exitflag,output,lambda] = linprog(f,A,b,Aeq,beq,lb);
Optimal solution found.
Examine the solution and Lagrange multipliers.
x,lambda.ineqlin,lambda.lower
x = 3×1
0
15.0000
3.0000
ans = 3×1
0
1.5000
0.5000
ans = 3×1
1.0000
0
0
lambda.ineqlin is nonzero in its second and third components. This indicates that the second and third linear inequality constraints are satisfied with equality:
$$3{x}_{1}+2{x}_{2}+4{x}_{3}=42$$
$$3{x}_{1}+2{x}_{2}=30.$$
Check that this is true:
A*x
ans = 3×1
12.0000
42.0000
30.0000
lambda.lower is nonzero in its first component. This indicates that x(1) is at its lower bound of 0.
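As a hedged cross-check, the same problem can be solved and its active constraints examined with SciPy's LP solver (a stand-in for the toolbox function), whose default bounds x >= 0 happen to match lb = zeros(3,1):

```python
# Hypothetical SciPy reproduction of the Lagrange-multiplier example.
import numpy as np
from scipy.optimize import linprog

f = [-5, -4, -6]
A = np.array([[1, -1, 1],
              [3,  2, 4],
              [3,  2, 0]])
b = [20, 42, 30]

# SciPy's default bounds are already x >= 0, matching lb = zeros(3,1).
res = linprog(f, A_ub=A, b_ub=b)

print(res.x)       # about [0, 15, 3]
print(A @ res.x)   # about [-12, 42, 30]: rows 2 and 3 hit b exactly (active)
```

The second and third rows of A @ x equal their right-hand sides, which is exactly the complementary-slackness pattern the nonzero multipliers indicate.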
f — Coefficient vector
Coefficient vector, specified as a real vector or real array. The coefficient vector represents the objective function f'*x. The notation assumes that f is a column vector, but you are free to use a row vector or array. Internally, linprog converts f to the column vector f(:).
Example: f = [1,3,5,6]
Data Types: double
A — Linear inequality constraints
Linear inequality constraints, specified as a real matrix. A is an M-by-N matrix, where M is the number of inequalities and N is the number of variables (the length of f). For large problems, pass A as a sparse matrix.
A encodes the M linear inequalities A*x <= b, where x is the column vector of N variables x(:), and b is a column vector with M elements.
For example, to specify
x_{1} + 2x_{2} ≤ 10
3x_{1} + 4x_{2} ≤ 20
5x_{1} + 6x_{2} ≤ 30,
give these constraints:
A = [1,2;3,4;5,6];
b = [10;20;30];
Example: To specify that the x components add up to 1 or less, take A = ones(1,N) and b = 1.
Data Types: double
Aeq — Linear equality constraints
Linear equality constraints, specified as a real matrix. Aeq is an Me-by-N matrix, where Me is the number of equalities and N is the number of variables (the length of f). For large problems, pass Aeq as a sparse matrix.
Aeq encodes the Me linear equalities Aeq*x = beq, where x is the column vector of N variables x(:), and beq is a column vector with Me elements.
For example, to specify
x_{1} + 2x_{2} + 3x_{3} = 10
2x_{1} + 4x_{2} + x_{3} = 20,
give these constraints:
Aeq = [1,2,3;2,4,1];
beq = [10;20];
Example: To specify that the x components sum to 1, take Aeq = ones(1,N) and beq = 1.
Data Types: double
b — Linear inequality constraints
Linear inequality constraints, specified as a real vector. b is an M-element vector related to the A matrix. If you pass b as a row vector, solvers internally convert b to the column vector b(:). For large problems, pass b as a sparse vector.
b encodes the M linear inequalities A*x <= b, where x is the column vector of N variables x(:), and A is a matrix of size M-by-N.
For example, to specify
x_{1} + 2x_{2} ≤ 10
3x_{1} + 4x_{2} ≤ 20
5x_{1} + 6x_{2} ≤ 30,
enter these constraints:
A = [1,2;3,4;5,6];
b = [10;20;30];
Example: To specify that the x components sum to 1 or less, use A = ones(1,N) and b = 1.
Data Types: double
beq — Linear equality constraints
Linear equality constraints, specified as a real vector. beq is an Me-element vector related to the Aeq matrix. If you pass beq as a row vector, solvers internally convert beq to the column vector beq(:). For large problems, pass beq as a sparse vector.
beq encodes the Me linear equalities Aeq*x = beq, where x is the column vector of N variables x(:), and Aeq is a matrix of size Me-by-N.
For example, to specify
x_{1} + 2x_{2} + 3x_{3} = 10
2x_{1} + 4x_{2} + x_{3} = 20,
enter these constraints:
Aeq = [1,2,3;2,4,1];
beq = [10;20];
Example: To specify that the x components sum to 1, use Aeq = ones(1,N) and beq = 1.
Data Types: double
lb — Lower bounds
Lower bounds, specified as a real vector or real array. If the length of f is equal to that of lb, then lb specifies that x(i) >= lb(i) for all i.
If numel(lb) < numel(f), then lb specifies that x(i) >= lb(i) for 1 <= i <= numel(lb). In this case, solvers issue a warning.
Example: To specify that all x components are positive, lb = zeros(size(f))
Data Types: double
ub — Upper bounds
Upper bounds, specified as a real vector or real array. If the length of f is equal to that of ub, then ub specifies that x(i) <= ub(i) for all i.
If numel(ub) < numel(f), then ub specifies that x(i) <= ub(i) for 1 <= i <= numel(ub). In this case, solvers issue a warning.
Example: To specify that all x components are less than one, ub = ones(size(f))
Data Types: double
options — Optimization options
Optimization options, specified as the output of optimoptions or a structure as optimset returns.
Some options apply to all algorithms, and others are relevant for particular algorithms. See Optimization Options Reference for detailed information.
Some options are absent from the optimoptions display. These options appear in italics in the following table. For details, see View Options.
All Algorithms
Algorithm — Choose the optimization algorithm: 'dual-simplex' (default), 'interior-point-legacy', or 'interior-point'. For information on choosing the algorithm, see Linear Programming Algorithms.
Diagnostics — Display diagnostic information about the function to be minimized or solved. Choose 'off' (default) or 'on'.
Display — Level of display (see Iterative Display): 'final' (default), 'off' or 'none', or 'iter'.
MaxIterations — Maximum number of iterations allowed, a positive integer. The default depends on the algorithm. See Tolerances and Stopping Criteria and Iterations and Function Counts.
OptimalityTolerance — Termination tolerance on the dual feasibility, a positive scalar. The default depends on the algorithm.
'interior-point' Algorithm
ConstraintTolerance — Feasibility tolerance for constraints, a positive scalar.
Preprocess — Level of LP preprocessing prior to algorithm iterations. Specify 'basic' (default) or 'none'.
'dual-simplex' Algorithm
ConstraintTolerance — Feasibility tolerance for constraints, a positive scalar.
MaxTime — Maximum amount of time in seconds that the algorithm runs. The default is Inf.
Preprocess — Level of LP preprocessing prior to dual simplex algorithm iterations. Specify 'basic' (default) or 'none'.
Example: options = optimoptions('linprog','Algorithm','interior-point','Display','iter')
problem — Problem structure
Problem structure, specified as a structure with the following fields.
f — Linear objective function vector f
Aineq — Matrix for linear inequality constraints
bineq — Vector for linear inequality constraints
Aeq — Matrix for linear equality constraints
beq — Vector for linear equality constraints
lb — Vector of lower bounds
ub — Vector of upper bounds
solver — 'linprog'
options — Options created with optimoptions
You must supply at least the solver field in the problem structure.
The simplest way to obtain a problem structure is to export the problem from the Optimization app.
Data Types: struct
x — Solution
Solution, returned as a real vector or real array. The size of x is the same as the size of f.
fval — Objective function value at the solution
Objective function value at the solution, returned as a real number. Generally, fval = f'*x.
exitflag — Reason linprog stopped
Reason linprog stopped, returned as an integer.
3 — The solution is feasible with respect to the relative ConstraintTolerance tolerance, but is not feasible with respect to the absolute tolerance.
1 — Function converged to a solution x.
0 — Number of iterations exceeded options.MaxIterations.
-2 — No feasible point was found.
-3 — Problem is unbounded.
-4 — NaN value was encountered during execution of the algorithm.
-5 — Both primal and dual problems are infeasible.
-7 — Search direction became too small. No further progress could be made.
-9 — Solver lost feasibility.
Exit flags 3 and -9 relate to solutions that have large infeasibilities. These usually arise from linear constraint matrices that have a large condition number, or problems that have large solution components. To correct these issues, try to scale the coefficient matrices, eliminate redundant linear constraints, or give tighter bounds on the variables.
output — Information about the optimization process
Information about the optimization process, returned as a structure with these fields.
iterations — Number of iterations
algorithm — Optimization algorithm used
cgiterations — 0 (interior-point algorithm only, included for backward compatibility)
message — Exit message
constrviolation — Maximum of constraint functions
firstorderopt — First-order optimality measure
lambda — Lagrange multipliers at the solution
Lagrange multipliers at the solution, returned as a structure with these fields.
lower — Lower bounds lb
upper — Upper bounds ub
ineqlin — Linear inequalities
eqlin — Linear equalities
The Lagrange multipliers for linear constraints satisfy this equation with length(f) components:
$$\text{f}+{\text{A}}^{T}{\lambda}_{\text{ineqlin}}+{\text{Aeq}}^{T}{\lambda}_{\text{eqlin}}+{\lambda}_{\text{upper}}-{\lambda}_{\text{lower}}=0,$$
based on the Lagrangian
$${\text{f}}^{T}x+{\lambda}_{\text{ineqlin}}^{T}\left(\text{A}x-\text{b}\right)+{\lambda}_{\text{eqlin}}^{T}\left(\text{Aeq}\,x-\text{beq}\right)+{\lambda}_{\text{upper}}^{T}\left(x-\text{ub}\right)+{\lambda}_{\text{lower}}^{T}\left(\text{lb}-x\right).$$
This sign convention matches that of nonlinear solvers
(see Constrained Optimality Theory). However, this
sign is the opposite of the sign in much linear
programming literature, so a
linprog
Lagrange multiplier
is the negative of the associated "shadow price."
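Using the numbers from the earlier Lagrange-multiplier example (where lambda.eqlin and lambda.upper are empty, so those terms drop out), the stationarity equation above can be verified numerically:

```python
# Check f + A'*lambda_ineqlin - lambda_lower = 0 for the earlier example.
import numpy as np

f = np.array([-5, -4, -6])
A = np.array([[1, -1, 1],
              [3,  2, 4],
              [3,  2, 0]])
lam_ineqlin = np.array([0, 1.5, 0.5])   # lambda.ineqlin reported by linprog
lam_lower   = np.array([1, 0, 0])       # lambda.lower reported by linprog

# Equality and upper-bound multipliers are empty here, so their terms vanish.
residual = f + A.T @ lam_ineqlin - lam_lower
print(residual)   # [0. 0. 0.]
```

Every component cancels exactly, confirming the sign convention stated above.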
For a description, see Dual-Simplex Algorithm.
The 'interior-point-legacy' method is based on LIPSOL (Linear Interior Point Solver, [3]), which is a variant of Mehrotra's predictor-corrector algorithm [2], a primal-dual interior-point method. A number of preprocessing steps occur before the algorithm begins to iterate. See Interior-Point-Legacy Linear Programming.
The first stage of the algorithm might involve some preprocessing of the constraints (see Interior-Point-Legacy Linear Programming). Several conditions might cause linprog to exit with an infeasibility message. In each case, linprog returns a negative exitflag to indicate failure.
If a row of all zeros is detected in Aeq, but the corresponding element of beq is not zero, then the exit message is
Exiting due to infeasibility: An all-zero row in the constraint matrix does not have a zero in corresponding right-hand-side entry.
If one of the elements of x
is
found not to be bounded below, then the exit message is
Exiting due to infeasibility: Objective f'*x is unbounded below.
If one of the rows of Aeq
has only
one nonzero element, then the associated value in x
is
called a singleton variable. In this case, the
value of that component of x
can be computed from Aeq
and beq
.
If the value computed violates another constraint, then the exit message
is
Exiting due to infeasibility: Singleton variables in equality constraints are not feasible.
If the singleton variable can be solved for, but the solution violates the upper or lower bounds, then the exit message is
Exiting due to infeasibility: Singleton variables in the equality constraints are not within bounds.
The preprocessing steps are cumulative. For example, even if your constraint matrix does not have a row of all zeros to begin with, other preprocessing steps can cause such a row to occur.
When the preprocessing finishes, the iterative part of the algorithm begins and continues until the stopping criteria are met. (For more information about residuals, the primal problem, the dual problem, and the related stopping criteria, see Interior-Point-Legacy Linear Programming.) If the residuals are growing instead of getting smaller, or the residuals are neither growing nor shrinking, one of the two following termination messages is displayed, respectively,
One or more of the residuals, duality gap, or total relative error has grown 100000 times greater than its minimum value so far:
or
One or more of the residuals, duality gap, or total relative error has stalled:
After one of these messages is displayed, it is followed by one of the following messages indicating that the dual, the primal, or both appear to be infeasible.
The dual appears to be infeasible (and the
primal unbounded). (The primal residual < OptimalityTolerance.)
The primal appears to be infeasible (and
the dual unbounded). (The dual residual < OptimalityTolerance.)
The dual appears to be infeasible (and the
primal unbounded) since the dual residual > sqrt(OptimalityTolerance).
(The primal residual < 10*OptimalityTolerance.)
The primal appears to be infeasible (and
the dual unbounded) since the primal residual > sqrt(OptimalityTolerance).
(The dual residual < 10*OptimalityTolerance.)
The dual appears to be infeasible and the primal unbounded since the primal objective < -1e+10 and the dual objective < 1e+6.
The primal appears to be infeasible and the dual unbounded since the dual objective > 1e+10 and the primal objective > -1e+6.
Both the primal and the dual
appear to be infeasible.
For example, the primal (objective) can be unbounded and the primal residual, which is a measure of primal constraint satisfaction, can be small.
The 'interior-point' algorithm is similar to 'interior-point-legacy', but with a more efficient factorization routine, and with different preprocessing. See Interior-Point linprog Algorithm.
[1] Dantzig, G.B., A. Orden, and P. Wolfe. “Generalized Simplex Method for Minimizing a Linear Form Under Linear Inequality Restraints.” Pacific Journal Math., Vol. 5, 1955, pp. 183–195.
[2] Mehrotra, S. “On the Implementation of a Primal-Dual Interior Point Method.” SIAM Journal on Optimization, Vol. 2, 1992, pp. 575–601.
[3] Zhang, Y. “Solving Large-Scale Linear Programs by Interior-Point Methods Under the MATLAB Environment.” Technical Report TR96-01, Department of Mathematics and Statistics, University of Maryland, Baltimore County, Baltimore, MD, July 1995.
intlinprog | mpsread | optimoptions | prob2struct | quadprog