How can I reduce the "method error" with my central-difference method, when estimating a Jacobian matrix of partial derivatives?

Hi there!
Recently I wrote a central-difference method to approximate a Jacobian matrix of partial derivatives (thanks again Matt J., Torsten, and John D'Errico!), and then I computed its numerical eigenvalues using MATLAB's eig( ) function.
Currently (before I get into stability analysis and exploration of parameter spaces), I am inspecting pairs of numerical Jacobian matrices and their corresponding sets of eigenvalues for successively smaller finite-differencing step sizes h, e.g. using step size h = 1e-8 for the first Jacobian and h = 1e-9 for the second. I am finding that:
  1. the two Jacobians' entries agree to only 2, 3, or at most 4 digits after the decimal point, and
  2. the two sets of eigenvalues likewise agree to only 2, 3, or at most 4 digits after the decimal point.
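For reference, here is a minimal sketch of the kind of central-difference Jacobian I'm computing (the system fun, the point x0, and the step size are illustrative placeholders, not my actual code):
fun = @(x) [x(1)^2 + x(2); sin(x(1))*x(2)];       % placeholder 2-by-2 system, for illustration only
x0  = [1; 2];                                     % placeholder evaluation point
h   = 1e-8;                                       % finite-differencing step size
n = numel(x0);
J = zeros(n, n);
for j = 1:n
    e = zeros(n,1);  e(j) = h;                    % perturb only the j-th component
    J(:,j) = (fun(x0 + e) - fun(x0 - e)) / (2*h); % central difference in x_j
end
lambda = eig(J);                                  % eigenvalues of the numerical Jacobian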
Does this mean that my "method error" is unacceptably large?
How can I reduce this method error, so that the Jacobians and the corresponding eigenvalues agree to more digits after the decimal point as I make the step size h successively smaller?
(I'm aware of a very popular code on the File Exchange, jacobianest( ), but for now I'd like to work on this material a bit more before using that code.)
Thanks in advance,
  3 Comments
Noob on 5 May 2025
Edited: Noob on 5 May 2025
The two Jacobians being compared are both numerical -- for step sizes h = 1e-8 and h = 1e-9. I've updated my question to make this a bit clearer. I don't have a symbolic, exact Jacobian to compare against, though.
Matt J on 5 May 2025
There isn't enough to go on. It's possible the function isn't even differentiable at the point you are testing.


Accepted Answer

Matt J on 5 May 2025
Edited: Matt J on 5 May 2025
Try making the step size adaptive to the evaluation points. E.g.
f = @(x) x.^3;                            % function
g = @(x) 3*x.^2;                          % true derivative
G = @(x,h) (f(x+h) - f(x-h))/(2*h);       % central-difference approximation
Error = @(x,h) abs(G(x,h)/g(x) - 1)*100;  % percent error
x0 = 1e4;                                 % evaluation point
Error(x0,1e-8)
ans = 0.0081
Error(x0,1e-9)
ans = 0.0570
Error(x0,1e6*eps(x0)) %adaptive step size
ans = 8.0118e-06
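Applied to a Jacobian, the same idea might look something like the sketch below (fun, x0, and the 1e6 multiplier are assumptions used to illustrate the idea; you would tune the multiplier for your own problem):
fun = @(x) [x(1)^2 + x(2); sin(x(1))*x(2)];        % placeholder system
x0  = [1e4; 2];                                    % placeholder evaluation point
n = numel(x0);
J = zeros(n, n);
for j = 1:n
    hj = 1e6*eps(max(abs(x0(j)), 1));              % step scaled to the size of x_j, with a floor for components near zero
    e = zeros(n,1);  e(j) = hj;
    J(:,j) = (fun(x0 + e) - fun(x0 - e)) / (2*hj); % central difference with the adaptive step
end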
  8 Comments
Noob on 6 May 2025
Edited: Noob on 6 May 2025
Hi @Matt J Thanks so much for your answer!! For now I've been overruled, and there's another task I'm focusing on; I'd love to hear your thoughts on it if you have a few minutes. I'm studying an error matrix (you brought up a good point about my not having a ground-truth Jacobian to compare against, and I'm not sure how to resolve that issue), and plotting the error versus the step size h. I'm not sure what the best way to do it is, but I'm guessing that computing the 2-norm of the error matrix with the norm( ) function should do the trick. The code is slightly tricky for me, mainly some confusion about indices: I have two for-loops and an if-statement. I'll post my code in a separate question so as not to clutter up this one.
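In the meantime, here's roughly the shape of what I have in mind (fun and x0 are placeholders, not my actual system):
fun = @(x) [x(1)^2 + x(2); sin(x(1))*x(2)];        % placeholder system
x0  = [1; 2];                                      % placeholder evaluation point
n = numel(x0);
hlist = 10.^(-10:-3);                              % step sizes to compare
J = zeros(n, n, numel(hlist));
for k = 1:numel(hlist)                             % central-difference Jacobian at each step size
    for j = 1:n
        e = zeros(n,1);  e(j) = hlist(k);
        J(:,j,k) = (fun(x0 + e) - fun(x0 - e)) / (2*hlist(k));
    end
end
d = zeros(1, numel(hlist)-1);
for k = 1:numel(hlist)-1                           % 2-norm of the difference between consecutive Jacobians
    d(k) = norm(J(:,:,k+1) - J(:,:,k), 2);
end
loglog(hlist(1:end-1), d, 'o-'), grid on
xlabel('step size h'), ylabel('norm of J(h_{k+1}) - J(h_k)')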
Thanks again!!


More Answers (1)

John D'Errico on 6 May 2025
Edited: John D'Errico on 6 May 2025
This is a complex question, with possibly no perfect answer.
For example, suppose you have a Newton method that employs a finite difference derivative. Does it matter that the derivative estimate is not perfect? Not really! While an imperfect estimate of the derivative may force the solver to take a few more steps, it will probably still converge. In fact, most solvers in MATLAB use finite difference approximations for the derivative, and it is often the case that no problem is seen in convergence. Anyway, no floating point approximation of a derivative will ever be truly exact. You get down into floating point trash, and that is as good as you can do.
Next, you need to consider that finite difference approximations often have problems in both directions. What do I mean by this? For example, compute an approximation to the derivative of exp(x), at x == 1. I'll use a forward difference, just to make my point.
F = @exp;                                    % function to differentiate
x0 = 1;                                      % evaluation point
dxlist = 10.^(-15:1);                        % wide range of step sizes, from tiny to large
Fprime = (F(x0 + dxlist) - F(x0))./dxlist;   % forward-difference estimate at each step size
Very simple, yes, I know. But it will make my point.
Fptrue = exp(1);                             % exact derivative of exp at x0 = 1
% I'll compute the relative error in the derivative, which is sort of
% irrelevant, since in this case relative error and absolute error are
% essentially the same.
Fperr = abs((Fprime - Fptrue)./Fptrue);      % relative error of each estimate
loglog(dxlist,Fperr,'o-')
title 'Forward difference error'
grid on
What happened? As dx gets smaller, the error in the derivative estimate gets smaller. GREAT! But below roughly 1e-8, things start to go to hell. What is happening here? We are seeing two different sources of error. For large values of the stride, this is just the approximation error of the crappy forward first difference. It is a low order method, so you get crap.
But for small values of the stride, now we have subtractive cancellation issues. That is, when the difference F(x0+dx)-F(x0) gets really small, because the two numbers are so close, now you start to see floating point trash intrude. It is simply impossible to use that forward first difference in double precision arithmetic and do any better. If you were using a quad precision tool, for example, now things would change. But MATLAB does not offer quad precision.
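For what it's worth, that crossover near 1e-8 is roughly where the truncation error and the rounding error balance. For a forward difference the balance point is on the order of sqrt(eps), and for a central difference it is on the order of eps^(1/3) (both scaled by the size of x0). These are order-of-magnitude guides, not exact optima, since the constants depend on the function:
sqrt(eps)      % about 1.5e-8 -- roughly where the forward-difference curve above turns around
eps^(1/3)      % about 6.1e-6 -- the corresponding rule of thumb for a central difference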
How can you improve this? Maybe we can use a higher order method! For example, I'll use a 4th order method. This will surely be even better than the classic central difference.
FprimeC = (F(x0 - 2*dxlist) -8*F(x0 - dxlist) + 8*F(x0 + dxlist) - F(x0 + 2*dxlist))./(12*dxlist);
FperrC = abs((FprimeC - Fptrue)./Fptrue);
loglog(dxlist,FperrC,'o-')
title 'Central difference error'
grid on
But what happened there? The error goes toward zero faster as the stride gets smaller, so the slope on the right-hand side is steeper. But there is again a point where things start to go to hell. Now we are adding and subtracting four numbers, and two of those terms have a larger coefficient out front. That actually exacerbates the subtractive cancellation problem.
Gosh. That seems to be a problem. We can't seem to win. Well, actually, you can do a little better.
format long g
[dest,derr] = derivest(@exp,1)
dest =
2.71828182845905
derr =
5.93387734771279e-14 % what derivest thinks its error might be
dest - exp(1) % derr was a little conservative
ans =
8.88178419700125e-15
So derivest (my code, as found on the file exchange) actually does pretty well. Why did it succeed? It starts with a sequence of finite difference approximations at various step sizes, then it uses a Richardson extrapolation to increase the order, based on a sequence of estimates. You can think of Richardson extrapolation as a limit process, where it effectively takes the limit of the sequence of estimates, as those strides would go asymptotically to zero. In the end, derivest actually got almost full floating point precision for the first derivative.
You might ask why does Richardson extrapolation improve things? The best answer I can offer is it avoids some of the massive subtractive cancellation issues. At least, it reduces them. And that allows it to give a higher accuracy, while staying in double precision. At the same time, if you look carefully at the derivest code, you would see I went to some lengths to achieve the highest accuracy.
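To make the idea concrete, here is a single Richardson step applied to the plain central difference (illustrative only; derivest does considerably more than this, using a whole sequence of step sizes and tracking the error along the way):
F  = @exp;  x0 = 1;  h = 1e-3;
D  = @(h) (F(x0+h) - F(x0-h))/(2*h);    % O(h^2) central difference
D1 = D(h);  D2 = D(h/2);                % two estimates, at h and h/2
Dr = (4*D2 - D1)/3                      % Richardson-extrapolated, O(h^4) estimate
abs(Dr - exp(1))                        % error versus the true derivative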
No matter what you do, however, you will have problems if you are looking for really, really good estimates of a derivative. The problem is, numerical differentiation is actually an ill-posed problem of sorts. You can describe it as a version of an integral equation of the first kind, a classic "ill-posed" problem. And that means it will be highly sensitive to noise in your data, amplifying any noise it sees, even down in the least significant digits.
And all of this above is just for a simple first derivative. You are worrying about a Jacobian, and how any noise in those derivatives will influence your computation of eigenvalues. Sigh. I did say it can get complicated.
