MATLAB Answers

How to set the constant to 1 in a power function in the Curve Fitting Toolbox

Hi,
I have fitted a power function in plain MATLAB code and also in the Curve Fitting Toolbox. The answers are very different, and I suspect the Curve Fitting Toolbox answer is the correct one. Why is there a difference between the two, and is it possible to fix the constant a at 1 in the Curve Fitting Toolbox?
Here is the code I used for the power-function fit:
x1=[6.71214E-05 0.00112676 0.047319082 0.142706219 0.273947664 0.431651447 0.548219473 0.615322614 0.707701444 0.80169938 0.859014989 0.91129514 0.942910695 0.960171921 0.983861934 0.99297485 1];
y1=[4.78684E-06 0.008166282 0.22229314 0.456984391 0.648856946 0.795065728 0.849101429 0.861065276 0.910476057 0.947345705 0.96719942 0.982912047 0.990964548 0.994569633 0.998431303 0.999498993 1];
plot(x1,y1,'+r'), hold on
p = polyfit(log(x1),log(y1),1)   % linear fit in log-log space
m = p(1)                         % estimated exponent
b = exp(p(2))                    % estimated multiplier
ezplot(@(x) b*x.^m,[x1(1) x1(end)])
I used the same x1, y1 in the Curve Fitting Toolbox. The MATLAB code gives b = 1.5, whereas the Curve Fitting Toolbox gives 0.4.
Could somebody please help with this? Please provide code so that I can get the same result as the Curve Fitting Toolbox; it would be a huge help.

Accepted Answer

Star Strider
Star Strider on 13 Sep 2021
Use fminsearch rather than a linearised polyfit for this.
x1=[6.71214E-05 0.00112676 0.047319082 0.142706219 0.273947664 0.431651447 0.548219473 0.615322614 0.707701444 0.80169938 0.859014989 0.91129514 0.942910695 0.960171921 0.983861934 0.99297485 1];
y1=[4.78684E-06 0.008166282 0.22229314 0.456984391 0.648856946 0.795065728 0.849101429 0.861065276 0.910476057 0.947345705 0.96719942 0.982912047 0.990964548 0.994569633 0.998431303 0.999498993 1];
objfcn = @(b,x) b(1).*x.^b(2);
[B,fval] = fminsearch(@(b) norm(y1 - objfcn(b,x1)), rand(2,1))
B = 2×1
    1.0281
    0.4067
fval = 0.1425
xv = linspace(min(x1), max(x1), 150);
figure
plot(x1, y1, '.b')
hold on
plot(xv, objfcn(B,xv), '-r')
hold off
grid
The fminsearch function is a core MATLAB function, and fitting the untransformed model this way is significantly superior to linearising the variables and then doing a linear fit. The problem with linearisation is that the least-squares approach assumes the errors are normally distributed and additive to the fitted line. Linearising the variables with a logarithmic transformation converts those additive errors into multiplicative errors, so the estimated parameters are not even remotely accurate.
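As a side note, if the Optimization Toolbox is available, lsqcurvefit fits the same untransformed model with a dedicated least-squares solver. A minimal sketch (assuming the toolbox is installed, and reusing x1, y1 as defined above):

```matlab
% Sketch only, requires the Optimization Toolbox; x1, y1 as defined above
objfcn = @(b,x) b(1).*x.^b(2);         % same model: y = a*x^b
B0 = [1; 0.5];                         % rough starting guess
B = lsqcurvefit(objfcn, B0, x1, y1)    % minimises sum((y1 - objfcn(B,x1)).^2)
```

The result should be close to the fminsearch estimate, since both minimise a sum-of-squares objective on the untransformed data.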
  2 Comments
Star Strider
Star Strider on 13 Sep 2021
As always, my pleasure!
Change ‘objfcn’ to:
objfcn = @(b,x) x.^b;
and the fminsearch call to:
[B,fval] = fminsearch(@(b) norm(y1 - objfcn(b,x1)), rand)
What was previously ‘b(1)’ is now 1 and only ‘b’ will be estimated. (Subscripts are now also not required, since there is only one parameter.)
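For completeness, the same constraint can also be imposed inside the Curve Fitting Toolbox itself (which is what the original question asked), by defining a custom one-parameter model instead of the built-in 'power1'. A sketch, assuming the toolbox is installed and reusing x1, y1 from above:

```matlab
% Sketch: fix a = 1 by fitting a custom one-parameter power model
% (Curve Fitting Toolbox required; x1, y1 as defined above)
mdl = fittype('x^b', 'independent', 'x', 'coefficients', 'b');
f = fit(x1', y1', mdl, 'StartPoint', 0.5)   % only b is estimated; a is fixed at 1
```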


More Answers (2)

Walter Roberson
Walter Roberson on 13 Sep 2021
x1=[6.71214E-05 0.00112676 0.047319082 0.142706219 0.273947664 0.431651447 0.548219473 0.615322614 0.707701444 0.80169938 0.859014989 0.91129514 0.942910695 0.960171921 0.983861934 0.99297485 1];
y1=[4.78684E-06 0.008166282 0.22229314 0.456984391 0.648856946 0.795065728 0.849101429 0.861065276 0.910476057 0.947345705 0.96719942 0.982912047 0.990964548 0.994569633 0.998431303 0.999498993 1];
plot(x1,y1,'+r'), hold on
syms m
residue = sum((x1.^m - y1).^2);
bestm = double(vpasolve(diff(residue,m), m))
bestm = 0.3841
ezplot(@(x) x.^bestm, [x1(1), x1(end)])

John D'Errico
John D'Errico on 13 Sep 2021
Edited: John D'Errico on 13 Sep 2021
Before you EVER fit any model to data, PLOT IT! Plot your data. Then plot it in another way, until you have drained all the information you can from those plots.
x1=[6.71214E-05 0.00112676 0.047319082 0.142706219 0.273947664 0.431651447 0.548219473 0.615322614 0.707701444 0.80169938 0.859014989 0.91129514 0.942910695 0.960171921 0.983861934 0.99297485 1];
y1=[4.78684E-06 0.008166282 0.22229314 0.456984391 0.648856946 0.795065728 0.849101429 0.861065276 0.910476057 0.947345705 0.96719942 0.982912047 0.990964548 0.994569633 0.998431303 0.999498993 1];
plot(x1,y1,'o-')
xlabel x1
ylabel y1
title 'Direct plot of data'
grid on
And that does look vaguely like a power law curve might look, with a fractional exponent. So something like sqrt(x) might be not unreasonable, just as a wild guess.
But you have decided to fit it using a transformation of your data. Thus, if the model
y = a*x^b + noise
is what you think is the model, then by logging both sides of the equation, we will see:
log(y) = log(a) + b*log(x)
And that SHOULD be a straight line if we plot log(y) vs log(x), IF the power law model is truly reasonable. Note that this transformation implies that the additive noise we saw in the first model is not appropriate. In fact, we would presume multiplicative noise. That can often be a reasonable presumption when your data spans many orders of magnitude.
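To make the noise point concrete, here is a tiny synthetic sketch (hypothetical data, not the data above): when the noise really is multiplicative and log-normal, logging turns it into additive noise, and polyfit on the logged data recovers the parameters well.

```matlab
% Hypothetical example: multiplicative noise becomes additive after logging
rng(0)                                    % reproducible
x = linspace(0.1, 10, 50);
a = 2;  b = 0.5;
y = a*x.^b .* exp(0.05*randn(size(x)));   % y = a*x^b * (log-normal noise)
% log(y) = log(a) + b*log(x) + additive noise, so a line fits:
p = polyfit(log(x), log(y), 1);
bhat = p(1)                               % close to 0.5
ahat = exp(p(2))                          % close to 2
```

With additive noise in the original domain instead, the logged errors are no longer additive with constant variance, and the same fit is biased, as the rest of this answer shows.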
But the assumptions implicit in this transformation are sometimes a big IF. So what does the log log plot look like?
loglog(x1,y1,'o-')
xlabel x1
ylabel y1
title 'Log-log plot of data'
grid on
I don't really like it. That is NOT a straight line. In fact, while things are clearly dominated by that point at the bottom end, even if we look at the rest of the curve, it seems to be clearly curved over the entire domain. And that suggests why things failed you. I'll try to explain a little more deeply.
Consider the two fits you will get:
mdl = fittype('power1')
mdl =
     General model Power1:
     mdl(a,b,x) = a*x^b
PLmdl = fit(x1',y1',mdl)
PLmdl =
     General model Power1:
     PLmdl(x) = a*x^b
     Coefficients (with 95% confidence bounds):
       a = 1.028   (1, 1.056)
       b = 0.4066  (0.3574, 0.4559)
Does that fit reasonably?
plot(PLmdl)
hold on
grid on
plot(x1,y1,'ro');
hold off
xlabel x1
ylabel y1
title 'Direct fit of power law model, in the untransformed domain'
The fit is not great, with what appears to me to be some clear lack of fit. But it looks sort of like what you may expect.
Now, contrast that to what you get from a fit in the log domain:
linmdl = fit(log(x1'),log(y1'),'poly1')
linmdl =
     Linear model Poly1:
     linmdl(x) = p1*x + p2
     Coefficients (with 95% confidence bounds):
       p1 = 1.088   (0.8988, 1.278)
       p2 = 0.4279  (-0.1438, 0.9996)
a = exp(linmdl.p2)
a = 1.5340
Just to be clear, look carefully at that bottom point. I'll overlay the linear fit on top of the plot.
plot(log(x1),log(y1),'o')
hold on
plot(linmdl)
grid on
hold off
What happens is that the bottom-left point now introduces a huge bias into the linear fit. We should see that the log transformation is not helping you. But the real problem is that the power law model is not a very good approximation to your data. And the assumption that logging the curve will let you use a linear fit is invalid if a power law model is not appropriate.
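One way to see that leverage directly (a sketch, not part of the original answer): drop the bottom-left point and refit the log-log line, and watch how far the estimates move.

```matlab
% Sketch: refit the log-log line without the smallest point to see its
% leverage (Curve Fitting Toolbox; x1, y1 as defined at the top of this answer)
keep = 2:numel(x1);                               % drop the first point
linmdl2 = fit(log(x1(keep)'), log(y1(keep)'), 'poly1');
b2 = linmdl2.p1                                   % compare with p1 above
a2 = exp(linmdl2.p2)                              % compare with a = 1.534 above
```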
Are there better models? Probably, yes. If we look at the original data, a power law with a fractional exponent (less than 1) implies the curve has an infinite slope at x==0. Remember what the sqrt curve looks like: a*x^0.4 would look very much the same.
Even a simple polynomial model would seem to fit your data much better, but perhaps a simple sum of exponentials might be a better choice.
exp2mdl = fit(x1',y1','exp2')
exp2mdl =
     General model Exp2:
     exp2mdl(x) = a*exp(b*x) + c*exp(d*x)
     Coefficients (with 95% confidence bounds):
       a = 0.7275  (0.69, 0.7651)
       b = 0.327   (0.2709, 0.3832)
       c = -0.7201 (-0.7575, -0.6828)
       d = -6.143  (-6.841, -5.446)
plot(x1,y1,'o')
hold on
plot(exp2mdl)
hold off
Now, you are the only one who knows why you chose a power law model. My guess is, it looked like it would fit. That would have been a possible choice given a cursory glance at your data. But a deeper glance tells me that power law model is not correct here.
  1 Comment
shagun chaudhary
shagun chaudhary on 13 Sep 2021
Thank you so much, sir, for helping me understand this concept so deeply. It means a lot. Thank you so much for your effort and time.

