It is often necessary to scale a technical computing problem involving a small amount of data to a much larger data set. Simply looping over each section of the data can become a computational bottleneck, especially if the application has to run in real time. MATLAB® offers several approaches for accelerating algorithms, including performing computations in parallel on multicore processors and GPUs. If you have an NVIDIA GPU available, one approach is to leverage the parallel architecture and throughput of the GPU with Parallel Computing Toolbox™. Certain classes of problems, especially in computational geometry and visualization, can be solved very efficiently on a GPU.
In this submission we will modify an algorithm to run on a GPU, and then solve a geometric problem involving millions of lines and shapes in under a second. We illustrate this approach using the problem of tracing light rays as they intersect with objects. This type of problem is present in a variety of applications, including scene rendering and medical imaging.
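As a minimal sketch of the idea (not the submission's actual code), the example below intersects a large batch of rays with a single sphere entirely on the GPU: the ray data is moved to the device with gpuArray, and the vectorized quadratic solve then runs element-wise there. All variable names (rayOrigins, rayDirs, sphereCenter, sphereRadius) are illustrative, and the implicit expansion used assumes MATLAB R2016b or later.

```matlab
% Intersect N rays with one sphere on the GPU (requires Parallel Computing Toolbox).
N = 1e6;                                    % one million rays
rayOrigins = zeros(N, 3, 'gpuArray');       % all rays start at the origin
rayDirs    = rand(N, 3, 'gpuArray') - 0.5;  % random directions
rayDirs    = rayDirs ./ sqrt(sum(rayDirs.^2, 2));   % normalize each row

sphereCenter = gpuArray([0 0 5]);
sphereRadius = 1;

% Solve |O + t*D - C|^2 = r^2 for t, keeping the smaller root.
oc   = rayOrigins - sphereCenter;           % implicit expansion (R2016b+)
b    = sum(oc .* rayDirs, 2);
c    = sum(oc.^2, 2) - sphereRadius^2;
disc = b.^2 - c;                            % discriminant of the quadratic
hit  = disc >= 0;
t    = -b - sqrt(max(disc, 0));             % nearest intersection distance
t(~hit | t < 0) = Inf;                      % Inf marks "no intersection"

fprintf('%d of %d rays hit the sphere\n', gather(sum(isfinite(t))), N);
```

Because every ray is independent, the whole batch maps naturally onto the GPU's parallel architecture; gather is called only once at the end to bring the scalar result back to the CPU.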
This code accompanies the article "Solving Large Geometric and Visualization Problems with GPU Computing in MATLAB" (http://www.mathworks.co.uk/company/newsletters/articles/solving-large-geometric-and-visualization-problems-with-gpu-computing-in-matlab.html)
Paul Peeling (2020). RayShapeArticle_FEX.zip (https://www.mathworks.com/matlabcentral/fileexchange/46502-rayshapearticle_fex-zip), MATLAB Central File Exchange. Retrieved .
This upload is very much to my taste. I like the compilation.
Probably one can explain it like this: as long as the algorithm calls the loop with fewer elements than there are available cores, the execution time does not increase with the number of elements.
A question about the article: why is the curve characterizing the GPU implementation not monotonically increasing?
Corrected order of legend entries when running the benchmark scripts. Thanks to Vadim Bulitko for spotting this.
Added link to article.
Inspired by: Triangle/Ray Intersection