Lidar Toolbox™ provides lidar-camera calibration functionality, an essential step in combining data from a lidar sensor and a camera in a system. Cameras provide rich color information, while lidar sensors provide accurate 3D structural and location information of objects. Fusing the two enhances the performance of perception and mapping algorithms for autonomous driving and robotics applications.
Lidar-camera calibration estimates the relative position and orientation between the lidar sensor and the camera in the system. In this video, I'll demonstrate the lidar-camera calibration process using a checkerboard calibration pattern.
Lidar-camera calibration involves calculating the extrinsic parameters of the lidar-camera system in the form of a rigid transformation matrix. Extrinsic parameters define the location and orientation of the sensors with respect to the world frame and to each other. Lidar Toolbox provides all the necessary functions to perform lidar-camera calibration.
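Concretely, the extrinsic parameters can be written as a single 4-by-4 homogeneous rigid transformation built from a rotation R and a translation t. This is the standard form, not specific to any one toolbox function:

```latex
T_{\text{lidar}\to\text{camera}} =
\begin{bmatrix}
R & t \\
\mathbf{0}^{\top} & 1
\end{bmatrix},
\qquad R \in SO(3),\quad t \in \mathbb{R}^{3}
```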
We load the images and their corresponding point clouds, extract checkerboard features from both, and then use these features to estimate the transformation between the camera and the lidar.
First, load the checkerboard images and the corresponding lidar data. A checkerboard is used because its regular pattern makes it easy to extract features. Here, we are using nine checkerboard images and their corresponding point clouds, collected from a Gazebo simulation environment.
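A minimal sketch of this loading step follows. The folder names are placeholders; point them at your own captured data, stored as images plus PCD files in matching order.

```matlab
% Placeholder paths to the captured calibration data.
imagePath   = fullfile(tempdir, "checkerboardImages");
ptCloudPath = fullfile(tempdir, "pointClouds");

% Gather the image and point cloud files in corresponding order.
imds = imageDatastore(imagePath);
pcds = fileDatastore(ptCloudPath, "ReadFcn", @pcread, ...
    "FileExtensions", ".pcd");

imageFileNames   = imds.Files;
ptCloudFileNames = pcds.Files;
```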
Next, we'll load the intrinsic parameters of the camera. Intrinsic parameters define the internal characteristics of the camera, such as the focal length, optical center, and lens distortion coefficients. We can use the MATLAB Camera Calibrator app to estimate the intrinsic parameters of the camera. The Camera Calibrator app provides an easy and interactive interface for camera calibration.
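A sketch of loading the intrinsics, assuming they were estimated with the Camera Calibrator app and saved to a MAT-file named cameraParams.mat (a placeholder name):

```matlab
% The app exports a cameraParameters object; its Intrinsics property is
% the cameraIntrinsics object the calibration functions expect.
ld = load("cameraParams.mat");   % placeholder MAT-file name
intrinsics = ld.cameraParams.Intrinsics;

% Alternatively, build the object directly if the focal length (pixels),
% principal point (pixels), and image size (rows, columns) are known:
% intrinsics = cameraIntrinsics(focalLength, principalPoint, imageSize);
```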
We'll now extract checkerboard features from the images using the estimateCheckerboardCorners3d function and use the detectRectangularPlanePoints function to extract features from the point cloud data.
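A sketch of the feature-extraction step. The square size is an assumption; use the measured square size of your own board.

```matlab
squareSize = 81; % checkerboard square size in mm (placeholder value)

% Estimate the 3-D corner locations of the checkerboard in each image.
[imageCorners3d, checkerboardDimension, dataUsed] = ...
    estimateCheckerboardCorners3d(imageFileNames, intrinsics, squareSize);
imageFileNames = imageFileNames(dataUsed); % keep only usable frames

% Segment the checkerboard plane from each point cloud.
[lidarCheckerboardPlanes, framesUsed] = detectRectangularPlanePoints( ...
    ptCloudFileNames, checkerboardDimension, "RemoveGround", true);
imageCorners3d = imageCorners3d(:, :, framesUsed);
```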
Now, we'll use the estimateLidarCameraTransform function to estimate the rigid transformation matrix between the camera and the lidar. We can visualize the calibration output by projecting the lidar data onto an image, or by projecting color information from the camera onto the lidar data. You can see here that the output from the camera and the lidar aligns properly, which means our calibration results are good.
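A sketch of the estimation and visualization steps. Note that the estimateLidarCameraTransform signature has changed across releases (older releases also took the camera intrinsics as an input), so check the documentation for your version.

```matlab
% Estimate the lidar-to-camera rigid transformation.
[tform, errors] = estimateLidarCameraTransform( ...
    lidarCheckerboardPlanes, imageCorners3d);

% Sanity check: project lidar points into one of the calibration images.
im      = imread(imageFileNames{1});
ptCloud = pcread(ptCloudFileNames{1});
imPts   = projectLidarPointsOnImage(ptCloud, intrinsics, tform);

figure
imshow(im)
hold on
plot(imPts(:,1), imPts(:,2), ".", "MarkerSize", 3)
hold off
```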
We can also evaluate the calibration results by plotting the calibration errors between the checkerboard images and the corresponding point clouds. Here, we got an average translation error of 3.5 millimeters, an average rotation error of 0.6 degrees, and a reprojection error of around 1 pixel.
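As a loosely hedged sketch of the plotting itself, assuming the errors output is a three-element cell array of per-pattern error vectors (its exact layout and units vary by release, so verify against the documentation for your version):

```matlab
% Assumption: errors{1}, errors{2}, errors{3} hold per-pattern translation,
% rotation, and reprojection errors. Verify this layout for your release.
figure
subplot(3,1,1), bar(errors{1}), title("Translation Error")
subplot(3,1,2), bar(errors{2}), title("Rotation Error")
subplot(3,1,3), bar(errors{3}), title("Reprojection Error")
```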
You can follow the same workflow for lidar-camera calibration on real data. The results can be extended to different applications, such as estimating 3D bounding boxes in lidar data from 2D bounding boxes in the corresponding images, or fusing color information from the camera onto the point cloud data.
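Both extensions are sketched below, under the assumption that tform maps lidar coordinates to camera coordinates, so invert(tform) is the camera-to-lidar transform these functions expect; verify the direction convention for your release.

```matlab
% Colorize the point cloud with camera pixels.
ptCloudColored = fuseCameraToLidar(im, ptCloud, intrinsics, invert(tform));
figure, pcshow(ptCloudColored)

% Lift 2D boxes from an image-space detector into 3D cuboids in the
% point cloud. bboxes2d is a placeholder for your detector output.
% bboxes2d = [x y w h];  % from any 2D object detector (assumption)
% bboxes3d = bboxCameraToLidar(bboxes2d, ptCloud, intrinsics, invert(tform));
```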
Refer to the MathWorks documentation and the Lidar Toolbox product page to learn more. If you have any questions or comments, please let us know.