Spacesium Creates Deep Learning System to Segment Large Lidar Point Clouds with MATLAB
Processing very large lidar point clouds is slow and expensive. With MATLAB, Spacesium developed a deep learning solution that can label lidar point clouds with better accuracy and increased speed.
Key Outcomes
- Processing time reduced by downsampling and blocking point clouds with Lidar Toolbox
- Accuracy increased by denoising point clouds with Computer Vision Toolbox
- Object segmentation improved by replacing feature-based methods with deep learning models
Spacesium is an Australia-based company that develops geographic information system software for a range of industries. One recurring challenge for Spacesium is processing lidar point clouds quickly and accurately. These point clouds can be very large and noisy, making it difficult to extract useful information in a timely manner.
To address these challenges, Spacesium created a deep learning solution in MATLAB® that quickly processes and labels very large lidar point clouds. The team uses Lidar Toolbox™ to downsample the point clouds, reducing their size without significant information loss, and, for very large clouds, to break the data into smaller blocks that can be processed independently. The team uses Computer Vision Toolbox™ to denoise the point clouds and improve the accuracy of the results.
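The article does not detail the exact functions or parameter values Spacesium uses, but a minimal sketch of this preprocessing stage, assuming the point cloud functions that ship with Computer Vision Toolbox (pcdownsample, pcdenoise) and the blocked point cloud workflow in Lidar Toolbox, might look like this; the data, grid step, block size, and file name are illustrative placeholders:

```matlab
% Preprocessing sketch: downsample, denoise, and block a lidar point cloud.
% All values and file names below are illustrative placeholders.

% A synthetic cloud stands in for a real survey tile; in practice the data
% would come from pcread (PLY/PCD) or lasFileReader (LAS/LAZ).
xyz     = rand(1e5,3) .* [100 100 30];   % 100,000 random points in a 100 x 100 x 30 m box
ptCloud = pointCloud(single(xyz));

% Grid-average downsampling shrinks the cloud while preserving its overall
% structure (Computer Vision Toolbox).
gridStep    = 0.2;                       % grid size in meters (placeholder)
ptCloudDown = pcdownsample(ptCloud,'gridAverage',gridStep);

% Removing sparse outliers improves the accuracy of later segmentation
% (Computer Vision Toolbox).
ptCloudClean = pcdenoise(ptCloudDown);

% For surveys too large to fit in memory, Lidar Toolbox can tile the data
% into fixed-size blocks that a datastore streams one block at a time:
% bpc  = blockedPointCloud('largeSurvey.las',[50 50]);  % placeholder file and x-y block size
% bpds = blockedPointCloudDatastore(bpc);
```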
Traditional feature-based methods for processing point cloud data can be slow and imprecise. With Deep Learning Toolbox™, Spacesium instead trained a PointNet++ network, created with a single function call, to segment the objects represented in the point clouds. The segmentation results are stored in a separate file so they can be used in other applications alongside the original point cloud data.
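As a hedged illustration rather than Spacesium's exact setup, the sketch below assumes Lidar Toolbox's pointnetplusLayers function to build the PointNet++ segmentation network in one call and Deep Learning Toolbox's trainingOptions and trainNetwork for training; the network sizes, class count, training datastore, and file names are placeholders:

```matlab
% Segmentation sketch: a single Lidar Toolbox call builds the PointNet++
% network. Sizes, class count, and file names are assumed placeholders.
numPoints      = 8192;   % points per block fed to the network
pointDimension = 3;      % x, y, z coordinates
numClasses     = 5;      % e.g., ground, vegetation, building, powerline, other

lgraph = pointnetplusLayers(numPoints,pointDimension,numClasses);

% Training with Deep Learning Toolbox; dsTrain would be a datastore of
% {points, labels} pairs built from the preprocessed blocks.
options = trainingOptions('adam', ...
    'MaxEpochs',20,'MiniBatchSize',8,'InitialLearnRate',1e-3);
% net = trainNetwork(dsTrain,lgraph,options);

% The predicted per-point labels are written to their own file so other
% applications can use them alongside the original point cloud data.
% save('segmentationLabels.mat','predictedLabels');
```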
This deep learning solution is being used in various industries, including forestry, powerline and infrastructure analysis, disaster management, and mining.