Slow down when accessing arrays

I have some code that slows down dramatically when storing results. Code 2 runs orders of magnitude faster than Code 1, especially as k increases in size (e.g., k = 500). I need to store the results from each iteration i, but doing so seems very costly computationally. Why is this? Is there a better way to code it?
Code 1:
Q = zeros(k,k,N);
for i = 1:N
    Q(:,:,i) = Q(:,:,i) + e(i)^2*XtXt;
end
Code 2:
Q = zeros(k,k);
for i = 1:N
    Q = Q + e(i)^2*XtXt;
end
Here is the entire function, in case it is helpful. There are a lot of loops, but I have not found a clever way to eliminate any of them.
function varBhat = NeweyWest(e, X, L, invXX)
% e     = (T x N)
% X     = (T x k)
% L     = scalar
% invXX = (k x k)
% Determine the size of the matrix of regressors
[T, k] = size(X);
% Determine the number of units
N = size(e,2);
% Calculate the Newey-West autocorrelation-consistent covariance
Q = zeros(k,k,N);
for l = 0:L
    w_l = 1 - l/(L+1);
    for t = l+1:T
        if l == 0 % This calculates the S_0 portion
            XtXt = compute_XtXt(X,t); % (k x k)
            % Reuse the computed matrix for all N units
            for i = 1:N
                Q(:,:,i) = Q(:,:,i) + e(t,i)^2*XtXt;
            end
        else % This calculates the off-diagonal terms
            XtXl = compute_XtXl(X,t,l,w_l); % (k x k)
            % Reuse the computed matrix for all N units
            for i = 1:N
                Q(:,:,i) = Q(:,:,i) + e(t,i)*e(t-l,i)*XtXl;
            end
        end
    end
end
Q = 1/(T-k) * Q;
% Calculate Newey-West standard errors (loops over each unit)
varBhat = finalNW(T, X, Q, invXX, N);
end
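One way to eliminate the inner for i loops entirely, sketched below on the assumption that compute_XtXt, compute_XtXl, and finalNW behave as in the listing above (NeweyWest_fast is a hypothetical name), is to accumulate Q as a k^2 x N matrix with one column per unit, so each update becomes a single rank-one product over all units, and to reshape back to k x k x N only once at the end:

function varBhat = NeweyWest_fast(e, X, L, invXX)
    [T, k] = size(X);
    N = size(e,2);
    Q2 = zeros(k*k, N);                         % 2D accumulator: one column per unit
    for l = 0:L
        w_l = 1 - l/(L+1);
        for t = l+1:T
            if l == 0
                XtXt = compute_XtXt(X,t);           % (k x k)
                Q2 = Q2 + XtXt(:)*(e(t,:).^2);      % rank-one update for all N units
            else
                XtXl = compute_XtXl(X,t,l,w_l);     % (k x k)
                Q2 = Q2 + XtXl(:)*(e(t,:).*e(t-l,:));
            end
        end
    end
    Q = reshape(Q2, k, k, N) / (T-k);           % back to k x k x N once, at the end
    varBhat = finalNW(T, X, Q, invXX, N);
end

This also keeps each update touching one contiguous k^2 x N block rather than N separate strided slices of a 3D array.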

2 Comments

Is XtXl a scalar, or is it a k x k array?
Yeah, I wasn't specific:
XtXl is a k x k matrix.
e(i) is a scalar.
Q is a k x k matrix.


Answers (2)

Nathaniel on 25 May 2012
How much memory is in your computer, and how big is N? If you run out of available physical memory, then you will spend a lot of time waiting while your OS swaps to and from your hard disk.
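As a rough sanity check (a sketch; the sizes used here are the eventual ones mentioned later in this thread), the 3D accumulator alone holds k*k*N doubles:

k = 1000;  N = 300;                    % eventual sizes mentioned below
bytes = k*k*N*8;                       % one double = 8 bytes
fprintf('Q alone needs %.1f GB\n', bytes/2^30);    % about 2.2 GB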

1 Comment

It is not an issue with running out of memory. The slowdown occurs gradually with k.
I was testing it with k = 40 and N = 10. Importantly, this loop is nested inside another loop, so it is being executed many, many times.
Eventually k and N will be large (1000 and 300, respectively), but this shouldn't create memory problems.


Image Analyst on 26 May 2012
Well yeah! In the second case you're just overwriting a scalar N times, so you're just doing N operations. In the first case you're assigning a k x k matrix N times, and with k = 500 that means 250,000 times N memory locations are being assigned. So I don't doubt that Case 1 would run hundreds of times slower, simply because you're assigning and storing hundreds of thousands more elements. What is the value of N? If it's less than about 5, this should still finish in just a few seconds. But if N is also around 500, it could take a very long time, and you might even run out of memory.

5 Comments

In the second case, Q is still a k x k matrix ( Q = zeros(k,k) ), but the difference is that the same matrix is overwritten rather than a separate one being stored on each pass. Suppose there is enough memory available. Is there a difference between overwriting a matrix and storing it in a 3D array?
N and k will eventually be large (300 and 1000), but I am experiencing slowdowns when k = 40 and N = 10. Importantly, this loop is nested inside another loop, so it is being executed many, many times.
OK, you're right that Q is a 2D matrix, but a 2D matrix is still a lot less memory than a 3D matrix. And it matters how you store things: going down columns (incrementing rows first before moving over to the next column) is said to be faster than going across rows first. I'm not sure why: yes, the next memory location is far away rather than adjacent, but so what? We're talking about RAM here, not a hard drive. Apparently it matters anyway. So because your 3D matrix involves memory that is farther apart, it will take longer.
Fast CPUs can issue requests for chunks of memory on each read, as that reduces the average number of cycles the bus is occupied under the presumption that sequential memory will be used. The size of the chunks varies with the CPU, and the number of banks of memory can also make a difference. This kind of memory handling is generally known as "look-ahead cache".
On one of the servers I used to use (now obsolete), the optimum was 16 bytes (two double precision numbers) times 8 banks.
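For illustration, here is a minimal timing sketch (the array size is arbitrary) of the column-major effect described above; on most machines the first loop, which walks down columns through consecutive memory, is noticeably faster than the second:

A = zeros(5000);                % 5000 x 5000 array of doubles
tic
for col = 1:size(A,2)           % column-major: consecutive addresses
    for row = 1:size(A,1)
        A(row,col) = 1;
    end
end
toc
tic
for row = 1:size(A,1)           % row-wise: each step jumps a whole column
    for col = 1:size(A,2)
        A(row,col) = 2;
    end
end
toc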
Thanks for the thoughts, guys.
I compiled this to a MEX file (using MATLAB Coder) to speed up the looping. Interestingly, when it runs, total CPU usage sits at about 12% on a quad-core machine. It seems like I have a memory bottleneck rather than a CPU bottleneck.
I also defined this as a 2D array rather than a 3D array, but as expected it didn't make any difference.
Not a memory bottleneck after all: the 12% usage comes from the MEX file being serially coded and Intel's hyper-threading (HT) being enabled. If I disable HT, the MEX code uses 25% of CPU resources, which is one full core maxed out.


Asked on 25 May 2012
