Analyzing errors of a numerical Jacobian matrix of partial derivatives as the finite-differencing step-size gets successively smaller

Hi there!
Recently, I learned to write a central-difference method to approximate the Jacobian matrix of partial derivatives for my equations of interest, and I am now trying to study the method's error. To do this, I use a cell array, which I populate with the numerical Jacobian corresponding to each step-size h. As h successively gets smaller, I compute the difference between two successive Jacobians and call this the error matrix. I compute the 2-norm of each error matrix with the norm() function, and finally I make a loglog plot of these matrix norms against the step-size h.
Is this a good / correct way to study the method error?
Is there a better way to do it?
Thanks in advance!
%% linear stability analysis
% zdot (the vector field) and z_0 (the evaluation point) are assumed defined
n = 6;
e = zeros(n,1); % n x 1 zero vector, reused as the unit vector e_i
NumericalJacobian = nan(n); % n x n matrix of NaNs, to be filled in with partial derivatives
J = cell(12,1); % cell array to store the Jacobian matrix for each step-size
figure(3)
for K = 1:12 % loop index for the finite-differencing step-size
    h = 10^(-K); % finite-difference step-size gets successively smaller
    for i = 1:n % loop over i, computing the n columns of partial derivatives
        e(i) = 1; % temporarily set the ith component of e to 1
        % central-difference approximation to the partial derivative of zdot
        % with respect to the ith component of z
        CDM = ( zdot(z_0 + h*e) - zdot(z_0 - h*e) ) / (2*h);
        NumericalJacobian(:,i) = CDM; % store the ith partial derivatives in the ith column
        e(i) = 0; % reset the ith component of e to 0
    end
    J{K} = NumericalJacobian; % store the Kth Jacobian matrix in the cell array
    if K > 1 % for K > 1, compute the difference between successive Jacobians
        norm_error_matrix = norm(J{K} - J{K-1}); % 2-norm of the error matrix
        loglog(h,norm_error_matrix,'b.','MarkerSize',10); % plot the matrix norm against the step-size h
        hold on
    end
end
xlabel('step-size $h$','Interpreter','latex','FontSize',12) % create an x-label
ylabel('$\mathrm{error}_n$','Interpreter','latex','FontSize',12) % create a y-label
title('Norm of error matrix vs. step-size $h$','Interpreter','latex','FontSize',12) % create a title
grid on % use grid lines
hold off % release the figure once it is finished

Accepted Answer

John D'Errico on 6 May 2025
Edited: John D'Errico on 6 May 2025
I explained EXACTLY this in my last answer to your last question. Please read what I write and think about it. If you don't, then I will decide I am wasting my time. I'm sorry, but you need to read and think. Don't just keep writing the same code and asking the same questions.
The example I used was a simple one, but it carries over into what you are doing. Remember that error has many sources. It comes from various places, but in the end, it is still error.
I'll compute the error of the first derivative of exp(x), at x==0, using a simple forward finite difference method. I could use essentially ANY finite difference method, and we would see the same sort of thing.
F = @exp;
hlist = 10.^(-15:1);
x0 = 0;
Fprime = (F(x0 + hlist) - F(x0))./hlist;
Fprimetrue = 1; % since the true derivative at x == 0 is just 1.
loglog(hlist,abs(Fprime - Fprimetrue),'o-')
grid on
xlabel 'Step size, h'
ylabel 'Error in dF/dx'
title 'Forward difference error'
Now, LOOK. Start at the right. This is a very low order method, but that does not matter really. As the step size decreases, the error decreases. GREAT. That is to be expected. In fact, as h decreases, we expect the error to decrease as O(h), right? It should decrease linearly with h. And in fact, we see that exact behavior for a long stretch. Cut h by a factor of 10, and the error drops by a factor of 10. WOW! It works!
But what happens near h around 1e-8? Things start to go to hell with that nice linear decrease in error. Why would that be? For this, we need to think about the derivative, and how we are computing it.
F'(x) ≈ (F(x + h) - F(x))/h
At x near zero, F(x) will be close to 1. TRY IT!!!!!!!!!!
format long g
F(1)
ans =
2.71828182845905
F(1 + 1e-8)
ans =
2.71828185564186
The problem is, these are DOUBLE PRECISION NUMBERS. We know them to only roughly 16 significant digits. Past that point, any digits are just floating point trash. But, now subtract those two nearly equal numbers. What happens?
F(1 + 1e-8) - F(1)
ans =
2.71828182185629e-08
Is that the true value of that difference, at least if we could use a higher precision arithmetic?
vpa(F(sym(1) + sym(10)^-8) - F(sym(1)))
ans = 
0.000000027182818420504544229602109023998
Hmm. Is this the same as what we see when I did the same computation using doubles? NO. In fact, we only have roughly 8 correct digits in the double version, once we get past the zeros. Now, divide that by a very small number, and the error also explodes, by a factor here of 1e8.
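The digit loss can even be counted directly. Each operand carries an absolute rounding error of about eps(F(1)), so a difference whose magnitude is only about 2.7e-8 can retain only the digits above that noise floor. A quick check (this line is my illustration, not part of the original code):

```matlab
% Spacing of double precision numbers near e = exp(1); each of the two
% function values is only trustworthy to about this absolute level.
eps(exp(1))   % roughly 4.4e-16, so a ~2.7e-8 difference keeps ~8 good digits
```

That ratio, about 2.7e-8 / 4.4e-16, is exactly why only roughly 8 digits of the difference survive.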
This behavior is called subtractive cancellation. It causes you to lose accuracy when you subtract two nearly equal numbers. I'll try it again, for even smaller values of h, to make this clear.
[F(1), F(1 + 1e-14), F(1 + 1e-14) - F(1)]
ans = 1×3
2.71828182845905 2.71828182845907 2.70894418008538e-14
vpa(F(sym(1) + sym(10)^-14) - F(sym(1)))
ans = 
0.000000000000027182818284590588267694297902503
Do you see? When the increment was as small as 1e-14, we got only two significant digits correct in the difference.
What is happening is that as the step gets very small, the error becomes dominated by the massive subtractive cancellation. Even though the truncation error of the derivative estimate should be going to zero, the total error does not, because past 1e-8, the subtractive cancellation term begins to dominate. And that domination grows as the step size goes to zero. This is an artifact of the use of double precision arithmetic, because these numbers are only known to around 16 digits.
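The crossover point can even be estimated with a standard back-of-the-envelope argument (my addition, assuming F and its derivatives are of order 1): the forward difference truncation error scales like h, the rounding error like eps/h, and balancing the two gives an optimal step near sqrt(eps); the analogous balance for a central difference (truncation ~h^2, rounding ~eps/h) gives roughly eps^(1/3).

```matlab
% Rule-of-thumb optimal step sizes, assuming F and its derivatives are O(1):
% forward difference:  truncation ~ h,   rounding ~ eps/h  ->  h ~ sqrt(eps)
% central difference:  truncation ~ h^2, rounding ~ eps/h  ->  h ~ eps^(1/3)
h_forward = sqrt(eps)    % about 1.5e-8, matching the kink in the first plot
h_central = eps^(1/3)    % about 6.1e-6, matching the central-difference plot below
```

Those estimates line up well with where the two loglog plots turn around.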
As I showed with some effort in my LAST answer, it does not even help if you use a higher order method. Things still go to hell for small step sizes. It does not help. Must I repeat that? This time I'll use a central difference, since that is what you are using.
F = @exp;
hlist = 10.^(-15:1);
x0 = 0;
Fprime = (F(x0 + hlist) - F(x0 - hlist))./(2*hlist); % central difference
Fprimetrue = 1; % since the true derivative at x == 0 is just 1.
loglog(hlist,abs(Fprime - Fprimetrue),'o-')
grid on
xlabel 'Step size, h'
ylabel 'Error in dF/dx'
title 'Central difference error'
Again, start at the right end. The slope of the log log plot is higher. In fact, as we decrease h by a factor of 10, we cut the error by a factor of 100 at each step. This makes sense, since the central difference is O(h^2). GREAT!
But at a certain point, here, around 1e-5 for the step, the subtractive cancellation term again starts to rear its ugly head, as it again begins to dominate. The error from the central difference approximation to the derivative is now becoming dominated by subtractive cancellation.
Can you fix this by a higher order method yet? GO BACK AND READ MY LAST ANSWER! In there, I show exactly the same behavior. In fact, because the coefficients in there are larger than 1, it actually gets worse. The initial rate of decrease of the error is faster, but once you get into the domain of subtractive cancellation, things go completely to hell.
The problem lies in the use of double precision arithmetic. Were I to use quad precision, or some other tool, perhaps syms or my own HPF toolbox, we could do well.
F = @exp;
digits 100
hlist = sym(10).^(-15:1);
x0 = sym(0);
Fprime = (F(x0 + hlist) - F(x0 - hlist))./(2*hlist); % central difference
Fprime = double(Fprime); % Convert back to doubles
Fprimetrue = 1; % since the true derivative at x == 0 is just 1.
loglog(hlist,abs(Fprime - Fprimetrue),'o-')
grid on
xlabel 'Step size, h'
ylabel 'Error in dF/dx'
title 'Central difference error, using syms'
Do you see it has now worked? I have removed the subtractive cancellation component in the error. The cost of doing so was I needed to work in high precision arithmetic. But now we see a nearly perfectly quadratic decrease. (It looks like the plot stopped at 1e-15 for the error, BECAUSE I converted the symbolic results into doubles. Below that point, it was still going down, but the loglog plot just gave up the ghost.)
So how do you fix this? Either you need to learn to accept the concept of subtractive cancellation, and how it will limit your results, or you need to work in a higher precision arithmetic. Again, please read what I have written here, and think about what I wrote, as I have no intention of explaining the same thing a third time.
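One further thought on the original question: differencing successive Jacobians J{K} - J{K-1} only estimates the error indirectly. Where a trusted reference Jacobian is available (for example a symbolic one, or a central difference at a near-optimal step), comparing against it directly is cleaner. A sketch of that variant of the asker's loop, assuming zdot, z_0, and a reference Jref already exist in the workspace (Jref is a hypothetical name, not something from the original code):

```matlab
% Sketch: compare each finite-difference Jacobian against a trusted
% reference Jref, rather than against its predecessor. zdot, z_0 and
% Jref are assumed to be defined; Jref is a placeholder name.
n   = 6;
hh  = 10.^(-(1:12));          % the same sequence of step sizes
err = zeros(size(hh));
for K = 1:numel(hh)
    h  = hh(K);
    Jh = zeros(n);
    for i = 1:n
        e = zeros(n,1); e(i) = 1;   % ith unit vector
        Jh(:,i) = (zdot(z_0 + h*e) - zdot(z_0 - h*e)) / (2*h);
    end
    err(K) = norm(Jh - Jref);   % 2-norm of the error against the reference
end
loglog(hh, err, 'o-'), grid on
```

With a good reference, the same O(h^2)-then-cancellation shape described above should appear, but measured against truth rather than against the previous iterate.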
Comments
Noob on 7 May 2025

Hi @Torsten!

I’m not sure why I abandoned that workflow; I think my thoughts were wanting to quickly change parameter values without having to use matlabFunction again and again to convert from symbolic to numerical. Maybe I should revisit that workflow? Hmm …

Torsten on 7 May 2025
You don't need matlabFunction at all. All you need is to insert the solution from "fsolve" into the symbolic Jacobian via the "subs" command.
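For anyone following along, the workflow Torsten describes might look like this. The 2-variable system here is a toy of my own, not the asker's model, and the "fsolve solution" is hard-coded for illustration:

```matlab
% Toy sketch of the subs-based workflow (this vector field is an
% assumption for illustration, not the asker's system):
syms x y
f    = [x*y - y; x^2 - y];          % toy vector field
Jsym = jacobian(f, [x y]);          % symbolic Jacobian, no matlabFunction needed
zstar = [1; 1];                     % stand-in for a root returned by fsolve
Jnum  = double(subs(Jsym, [x y], zstar.'));  % numeric Jacobian at the root
```

Changing a parameter then just means redefining f (or making the parameter symbolic too) and repeating the subs call.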



Release: R2024b
