How to use fetchNext correctly?

Marc Laub on 10 Jun 2020
Commented: Marc Laub on 11 Jun 2020
Hey,
I have two functions which I want to run on a parallel pool.
The input of the second function is the output of the first function.
So right now it looks like this:
seeds=NaN(num_pts,3,volume/4);
for i=1:volume/4
    pts(i)=parfeval(@partial_seeder,1,min_grain,num_pts);
    wait(pts)
    ptRads(i) = parfeval(@seeder,1,pts(i).OutputArguments{1});
    wait(ptRads)
    seeds(:,1:2,i)=pts(i).OutputArguments{1};
    seeds(:,3,i)=ptRads(i).OutputArguments{1};
end
But this way I have to wait for all partial_seeder functions to finish before I can continue with the results. How can I continue directly with results(i) in the next parfeval function, even if results(j) or results(k) aren't finished in the first parfeval yet?
I am not sure how to use fetchNext correctly in this case. I would have to call fetchNext volume/4 times, but what if I call it too fast and the jobs aren't finished yet?
Many thanks in advance,
Best regards

Accepted Answer

Edric Ellis on 11 Jun 2020
It looks like your seeder function depends only on the output of the corresponding call to partial_seeder. It might also be simpler to use parfor here. For example, you could simply do this:
seeds=NaN(num_pts,3,volume/4);
parfor i=1:volume/4
    tempResult = NaN(num_pts, 3);
    tempResult(:,1:2) = partial_seeder(min_grain,num_pts);
    tempResult(:,3) = seeder(tempResult(:, 1:2));
    seeds(:, :, i) = tempResult;
end
Another option would be to make each parfeval request call both partial_seeder and then seeder. I.e.:
% This function calls both partial_seeder and seeder in turn, and returns a matrix
function out = doBoth(min_grain, num_pts)
    ps_result = partial_seeder(min_grain, num_pts);
    s_result = seeder(ps_result);
    out = [ps_result, s_result];
end
% Launch all parfevals
for i=1:volume/4
    f(i) = parfeval(@doBoth, 1, min_grain, num_pts);
end
% Collect all results
seeds=NaN(num_pts,3,volume/4);
for i=1:volume/4
    [idx, result] = fetchNext(f);
    seeds(:, :, idx) = result;
end
Either of the above approaches should be reasonable unless volume/4 is small compared to the number of workers you have. One final approach would be to launch the seeder computations when the partial_seeder calls complete:
seeds=NaN(num_pts,3,volume/4);
% Step 1: launch partial_seeders
for i=1:volume/4
    psF(i) = parfeval(@partial_seeder, 1, min_grain, num_pts);
end
% Step 2: collect partial_seeder results when available, launch seeders
for i=1:volume/4
    [idx, result] = fetchNext(psF);
    sF(idx) = parfeval(@seeder, 1, result);
end
% Step 3: collect seeder results
fetchOutputs(sF)
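If you want the radii back in seeds alongside the points, a minimal sketch (assuming each seeder call returns a num_pts-by-1 column, as in your code) is to collect the seeder futures with fetchNext as well; the points themselves can be stored during Step 2 from the result you already have there:
% Step 3 (alternative): collect seeder results into seeds as each future finishes
% (radii is just an illustrative name for seeder's output;
%  seeds(:,1:2,idx) can be filled in Step 2 with seeds(:,1:2,idx) = result;)
for i=1:volume/4
    [idx, radii] = fetchNext(sF);
    seeds(:, 3, idx) = radii;
end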
  1 Comment
Marc Laub on 11 Jun 2020
I have 8 workers, and depending on the setup, volume/4 ranges from 4 up to 4^some_integer. But even if it is only 4, I would be calculating 4 in parallel instead of 4 one after another. Since the seeder function is quite time-intensive, it should be a win.
I wrote it now like this:
seeds=NaN(num_pts,3,volume/4);
for i=1:volume/4
    pts(i)=parfeval(@partial_seeder,1,min_grain,num_pts);
end
for i=1:volume/4
    [id,out]=fetchNext(pts);
    seeds(:,1:2,id)=out;
    ptRads(id) = parfeval(@seeder,1,out);
end
wait(ptRads)
for i=1:volume/4
    seeds(:,3,i)=ptRads(i).OutputArguments{1};
end
seeds=reshape(permute(seeds,[2,1,3]),size(seeds,2),[])';
It works, but since seeder is such a time-consuming function and each call only runs on 1 worker, I expected more than a 30% gain in time when I have 8 workers and run it 16 times, compared to only 1 worker running it 16 times.
I'm trying parfor at the moment.
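Roughly, the parfor version I am trying looks like this (just a sketch, not my exact code; pts_i is only an illustrative name):
seeds=NaN(num_pts,3,volume/4);
parfor i=1:volume/4
    pts_i = partial_seeder(min_grain, num_pts);
    seeds(:,1:2,i) = pts_i;          % these two separate assignments into seeds are
    seeds(:,3,i) = seeder(pts_i);    % where the slicing complaint comes from
end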
Could you also tell me, since you answered my other question about this as well, why it is a slicing constraint? Does MATLAB think that seeds(:,1:2,i) and seeds(:,3,i) might not be independent, or what exactly is the problem?
Best regards
