Constraints to a Second Order Curve Fit

Jahari on 16 May 2025
Commented: John D'Errico on 17 May 2025
Given this set of data, is it possible to perform a 2nd-order curve fit with the constraint that the leading coefficient must be positive? Using an unconstrained curve fit (both polyfit and fit were used), the data produces a curve with a rather small negative leading coefficient. For reference, the data is as follows:
x = 150, 190, 400, 330, 115, 494
y = 1537, 1784, 3438, 2943, 1175, 4203
The outputs from both methods largely agreed, as shown below:
fit_eq =
-0.0007 8.3119 255.8074
eq =
Linear model Poly2:
eq(x) = p1*x^2 + p2*x + p3
Coefficients (with 95% confidence bounds):
p1 = -0.0007088 (-0.003588, 0.00217)
p2 = 8.312 (6.51, 10.11)
p3 = 255.8 (25.01, 486.6)
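For reference, a minimal sketch of the calls that would produce output of this form (the data vectors as given above):
x = [150, 190, 400, 330, 115, 494];
y = [1537, 1784, 3438, 2943, 1175, 4203];
fit_eq = polyfit(x, y, 2)   % unconstrained quadratic via polyfit
eq = fit(x', y', 'poly2')   % same fit via the Curve Fitting Toolbox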

Answers (4)

John D'Errico on 16 May 2025
You want to constrain the quadratic coefficient to be positive. But your data wants it to be a little bit negative. Effectively, the constrained coefficient will always end up at zero. That is, the only quadratic polynomial that will satisfy your requirement is the trivial one, with a zero quadratic coefficient, so a linear polynomial.
And, yes, you COULD use a tool like lsqlin, or lots of other ways. For example, fit can do it. But polyfit can do it, even more trivially.
x = [150, 190, 400, 330, 115, 494];
y = [1537, 1784, 3438, 2943, 1175, 4203];
C1 = polyfit(x,y,1)
C1 = 1×2
7.8960 303.7561
If you really, desperately want or need a quadratic, then append a zero.
C2 = [0,C1]
C2 = 1×3
0 7.8960 303.7561
This is the best fitting "quadratic" polynomial that satisfies your goals.
Does it agree with other tools? Of course. It must.
mdl = fittype('a*x^2 + b*x + c', independent = 'x');
fit(x',y',mdl,Lower = [0 -inf -inf])
Warning: Start point not provided, choosing random start point.
ans =
     General model:
     ans(x) = a*x^2 + b*x + c
     Coefficients (with 95% confidence bounds):
       a =  4.189e-14  (fixed at bound)
       b =  7.896  (7.583, 8.209)
       c =  303.8  (206, 401.5)
Note that fit did not return an exact zero, but a value down in the floating point trash for the quadratic term.
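As a quick sanity check (a sketch reusing x, y, mdl, and C2 from above), the two answers should describe the same curve to round-off:
cf = fit(x', y', mdl, Lower = [0 -inf -inf]);  % refit, keeping the cfit object
max(abs(polyval(C2, x') - cf(x')))             % difference should be at round-off level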
  2 Comments
Matt J on 16 May 2025
Edited: Matt J on 16 May 2025
OK, but this was only possible because you had a single constraint a>=0 which had to be active at the constrained optimum, right? If there were multiple constraints, e.g., a>=0, b>=0, it might not be as easy to know which constraints are active at the solution.
For example, picture a 2D quadratic minimization scenario where the unconstrained minimum is in the third quadrant (a<0, b<0), but the constrained solution is at b>0. So, by analogy with the polynomial fitting problem, one might try to reason that because "a and b want to be negative" in the unconstrained fit, the constrained solution must have a=b=0. But that wouldn't be true here.
John D'Errico on 17 May 2025
Yes, exactly. In the case of multiple constraints, one now needs to use a constrained solver, which can resolve the issue. But the question explicitly stated one constraint only. So I wanted to make it clear that given one simple bound constraint, the answer is easy.
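A minimal sketch of that multiple-bound case, using lsqlin from the Optimization Toolbox: with both a >= 0 and b >= 0 imposed, the solver, not inspection, decides which bounds are active at the optimum.
x = [150, 190, 400, 330, 115, 494].';
y = [1537, 1784, 3438, 2943, 1175, 4203].';
C = [x.^2, x, ones(numel(x),1)];
lb = [0; 0; -Inf];                        % a >= 0 and b >= 0
p = lsqlin(C, y, [], [], [], [], lb, [])  % solver determines the active set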



Torsten on 16 May 2025
Edited: Torsten on 16 May 2025
x = [150, 190, 400, 330, 115, 494].';
y = [1537, 1784, 3438, 2943, 1175, 4203].';
C = [x.^2, x, ones(numel(x),1)];   % design matrix for a quadratic fit
d = y;
lb = [0; -Inf; -Inf];              % lower bound: quadratic coefficient >= 0
ub = [Inf; Inf; Inf];
format longg
options = optimoptions('lsqlin','ConstraintTolerance',1e-20,'OptimalityTolerance',1e-20);
p = lsqlin(C,d,[],[],[],[],lb,ub,[],options)
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
p = 3×1
    8.57603954744474e-38
    7.89604727892389
    303.756103114464
hold on
plot(x,y,'o')
plot(x,p(1)*x.^2+p(2)*x+p(3))
hold off
grid on
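The leading coefficient comes back at about 1e-37, i.e. numerically zero, so the remaining coefficients should match a plain linear fit. A quick check, reusing x and y from above:
p_lin = polyfit(x, y, 1)   % expect roughly [7.8960 303.7561], matching p(2:3)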

Sam Chak on 16 May 2025
This is merely a workaround that manually forces the leading coefficient to the smallest positive value (eps). It is not a MATLAB-esque solution.
x = [150, 190, 400, 330, 115, 494];
y = [1537, 1784, 3438, 2943, 1175, 4203];
A = [x', y'];
B = sortrows(A, 1);               % sort the data points by x, keeping x-y pairs together
p2 = polyfit(B(:,1), B(:,2), 2)   % polynomial of degree 2
p2 = 1×3
-0.0006 8.2773 259.3045
p1 = polyfit(B(:,1), B(:,2), 1) % polynomial of degree 1
p1 = 1×2
7.8960 303.7561
xn = linspace(min(x), max(x), 1001);
y2 = polyval(p2, xn);                  % unconstrained quadratic
y1 = eps*xn.^2 + p1(1)*xn + p1(2);     % quadratic with the smallest positive leading coefficient
plot(B(:,1), B(:,2), ':o'), hold on
plot(xn, y2), grid on
plot(xn, y1), hold off
legend('data', '-poly2 fit', '+poly2 fit', 'location', 'east')
xlabel('x'), ylabel('y'), title('6-point data')

Matt J on 16 May 2025
Edited: Matt J on 16 May 2025
It probably makes more sense just to do a linear fit; for this data, at least, a 2nd-order fit doesn't make much sense. Regardless, here's a way to enforce the constraint with fminsearch (which doesn't require the Optimization Toolbox, in case that matters). I do an L1-norm fit, because why not.
x = [150, 190, 400, 330, 115, 494].';
y = [1537, 1784, 3438, 2943, 1175, 4203].';
C = [x.^2, x, ones(numel(x),1)];
d = y;
% Reparameterize: the quadratic coefficient is p(1)^2, nonnegative by construction
fun = @(p) norm( C*[p(1).^2; p(2); p(3)] - d, 1 );  % L1-norm objective
p0 = C\d;                % initial guess from the unconstrained least-squares fit
p = fminsearch(fun, p0);
p(1) = p(1).^2           % map back to the actual quadratic coefficient
p = 3×1
    0.0000
    7.9569
  272.1647
plot(x,y,'o', x,p(1)*x.^2+p(2)*x+p(3))
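For comparison, a sketch of the same squared-parameter trick with the default 2-norm objective, which should approach the lsqlin result above up to fminsearch's tolerance (reusing C, d, and p0):
fun2 = @(q) norm( C*[q(1).^2; q(2); q(3)] - d );  % 2-norm objective this time
q = fminsearch(fun2, p0);
q(1) = q(1).^2   % expect roughly [0 7.8960 303.7561]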
