fmincon optimization: is the first-order optimality very sensitive to changes in the step tolerance?

I use the fmincon interior-point algorithm to fit parameters to a PDE.
Here are my basic settings:
opts = optimoptions('fmincon', ...
    'StepTolerance', 1e-12, ...
    'FunctionTolerance', 1e-12, ...
    'OptimalityTolerance', 1e-12, ...
    'MaxIterations', 250, ...
    'SpecifyObjectiveGradient', true, ...
    'CheckGradients', false);
lb = zeros(9,1);
ub = 4 + zeros(9,1);
Aineq = ... ; % entries are of order 1e2
bineq = zeros(9,1);
problem = createOptimProblem( ...
    params.solverName, ...
    'objective', myFun, ...
    'x0', startVec, ...
    'lb', lb, ...
    'ub', ub, ...
    'Aineq', Aineq, ...
    'bineq', bineq, ...
    'options', opts);
% create MultiStart object
ms = MultiStart('Display', 'iter', ...
    'UseParallel', true, ...
    'StartPointsToRun', 'all', ...
    'FunctionTolerance', 0);
% run
run(ms, problem, myStartPoints)
There are nine parameters, and I have lower and upper bounds as well as linear inequality constraints.
I scaled the matrix Aineq by 1e2 manually so that fmincon pays more attention to feasibility. I am aware that this comes with poor convergence and other drawbacks, but it has worked quite well so far. The reason for choosing such tight tolerances (1e-12) is to work around flat regions of the objective function, if any.
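As an aside: instead of scaling Aineq by hand, the interior-point algorithm also has a built-in ScaleProblem option. Whether it helps here is untested; a minimal sketch would be:
% Sketch: let fmincon rescale the objective and constraints itself
% instead of multiplying Aineq by 1e2 manually (untested alternative).
optsScaled = optimoptions(opts, 'ScaleProblem', 'obj-and-constr');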
Using these options, I get the following output from multistart:
The solution of all 10 runs is
x = [0 0.00838947 0.0167789 0.0251684 0.0335579 0.0419473 0.0503368 0.0587263 0.0673571]
All solutions have exitflag = 2 (probably because of the brutal scaling) and the same objective function value. Also, the first-order optimality is small.
However, run index 4, for instance, converged to the same solution, yet its first-order optimality is rather large compared to that of the other runs.
This becomes even more apparent if I relax all my tolerances (step, function, and first-order optimality) to the default value of 1e-6:
The solution is nearly the same as before:
x = [0 0.00838945 0.0167789 0.0251684 0.0335578 0.0419473 0.0503367 0.0587262 0.067357]
However, the first-order optimality is larger by several orders of magnitude, even though the solution is nearly unchanged, as the sum of squares also indicates.
Those high optimalities make the solution less trustworthy.
How is it possible that the optimalities are so different if the sum of squares as well as the solution are practically identical?
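(For reference: the first-order optimality fmincon reports is essentially the infinity norm of the gradient of the Lagrangian. A minimal sketch of recomputing it from the multipliers of a plain fmincon call, assuming only the bounds and Aineq are present; the interior-point measure additionally includes complementarity terms:)
% Sketch: recompute the KKT stationarity residual at the solution.
[x, fval, exitflag, output, lambda, grad] = fmincon( ...
    myFun, startVec, Aineq, bineq, [], [], lb, ub, [], opts);
% Gradient of the Lagrangian, using MATLAB's multiplier sign convention:
gradL = grad + Aineq'*lambda.ineqlin - lambda.lower + lambda.upper;
kktResidual = norm(gradL, Inf)   % compare with output.firstorderopt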
  7 Comments
Torsten on 27 Nov 2023
I suggest you compute the objective function near the point that MATLAB computes as optimal by changing each parameter separately while holding the other parameters constant and see what kind of curves you get.
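A minimal sketch of such a sweep (xOpt standing in for the returned solution; feasibility of the perturbed points is ignored, since only the local shape of the objective matters here):
% Sketch: 1-D sweeps of the objective around the optimum xOpt.
figure;
delta = linspace(-1e-3, 1e-3, 101);    % local perturbation range
for i = 1:numel(xOpt)
    fvals = zeros(size(delta));
    for k = 1:numel(delta)
        xp = xOpt;
        xp(i) = xp(i) + delta(k);      % vary parameter i only
        fvals(k) = myFun(xp);
    end
    subplot(3, 3, i);                  % nine parameters -> 3x3 grid
    plot(delta, fvals);
    title(sprintf('parameter %d', i));
end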


Accepted Answer

Matt J on 27 Nov 2023 (edited 27 Nov 2023)
So given the results I show, can we qualitatively say that the objective is likely to be very flat at the solution? Or something else?
Well, it basically means that a small change in x (near the stopping point) produces a large change in the gradient. The function would seem to have very high curvature there, or possibly a discontinuous first derivative.
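One way to check this numerically is to compare the gradient at the stopping point with the gradient a small step away; since you supply the gradient yourself, this is cheap. A sketch (xOpt again standing in for the solution, myFun returning [f, g]):
% Sketch: probe how fast the gradient changes near the solution.
h = 1e-6;
[~, g0] = myFun(xOpt);                 % gradient at the stopping point
curvEst = zeros(1, numel(xOpt));
for i = 1:numel(xOpt)
    e = zeros(size(xOpt));
    e(i) = h;
    [~, gi] = myFun(xOpt + e);
    curvEst(i) = norm(gi - g0) / h;    % finite-difference curvature estimate
end
disp(curvEst)                          % huge entries suggest high curvature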
  25 Comments
SA-W on 28 Nov 2023
It must be the Hessian of the Lagrangian, not the objective function, although I guess if you only have linear constraints, they will be the same thing.
Yes, I think so too.
Do you think it makes sense to calculate correlations, etc., with a Hessian that has a condition number of ~1e7?
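(For reference, a sketch of what such a computation would look like, with H an approximate Hessian such as fmincon's seventh output; a condition number of ~1e7 costs roughly seven of the ~16 double-precision digits, so the numbers are computable but the weakly determined directions deserve caution:)
% Sketch: parameter correlation matrix from an approximate Hessian H.
C = inv(H);                  % proportional to the covariance near the optimum
d = sqrt(diag(C));
corrMat = C ./ (d * d');     % entries in [-1, 1]
disp(cond(H))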


