Extract Vehicle Track List from Recorded Lidar Data for Scenario Generation
This example shows how to extract the vehicle track-list information required for generating scenarios from raw lidar data.
Track lists are crucial for interpreting the positions and trajectories of the vehicles in a scenario. You can use track lists to extract dynamic actor properties.
In this example, you:
- Detect the vehicles from recorded lidar data using a pretrained pointPillarsObjectDetector (Lidar Toolbox) object.
- Generate a vehicle track list from the detected vehicles using a joint probabilistic data association (JPDA) tracker.
- Format the vehicle track list for use in the scenario generation workflow.
You can use the extracted vehicle track list to generate a scenario from recorded sensor data. For more information about creating scenarios from vehicle track lists, see the Generate Scenario from Actor Track List and GPS Data example. For more information on generating scenarios for RoadRunner Scenario, see the Generate RoadRunner Scenario from Recorded Sensor Data example.
Load Lidar Data
Download a ZIP file containing a subset of sensor data from the PandaSet data set and then unzip the file. This data set contains a sequence of 400 lidar point clouds and lidar sensor information. The size of the downloaded data set is 394 MB.
dataFolder = tempdir;
lidarDataFileName = "PandasetLidarData.zip";
url = "https://ssd.mathworks.com/supportfiles/driving/data/" + lidarDataFileName;
filePath = fullfile(dataFolder,lidarDataFileName);
if ~isfile(filePath)
    websave(filePath,url)
end
ans = '/tmp/PandasetLidarData.zip'
unzip(filePath,dataFolder)
dataset = fullfile(dataFolder,"PandasetLidarData");
data = load(fullfile(dataset,"LidarSensorData.mat"));
lidarData = data.LidarData;
lidarData is a table with two columns:
- timeStamp — Time, in seconds, at which the point cloud data was captured.
- fileName — Filenames of the point clouds in the data set.
The point clouds are located in the Lidar folder in the dataset directory. Create a table that contains the file paths of these point clouds with their relative timestamps by using the helperUpdateTable function.
lidarFolder = "Lidar";
lidarData = helperUpdateTable(lidarData,dataset,lidarFolder);
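The helperUpdateTable function is attached to this example and is not listed here. As a rough sketch of the idea, assuming the raw table stores only file names, a helper like this might prepend the data set path. The function name and details below are hypothetical:

function sensorTable = helperUpdateTableSketch(sensorTable,dataset,folder)
% Sketch (assumption): build full file paths from the file names and keep
% the original timestamps. The shipped helperUpdateTable may differ.
filePath = fullfile(dataset,folder,sensorTable.fileName);
sensorTable = table(sensorTable.timeStamp,filePath, ...
    VariableNames=["timeStamp","filePath"]);
end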
Display the first five entries of lidarData.
lidarData(1:5,:)
ans=5×2 table
timeStamp filePath
__________ ___________________________________________
1.5576e+09 {["/tmp/PandasetLidarData/Lidar/0001.pcd"]}
1.5576e+09 {["/tmp/PandasetLidarData/Lidar/0002.pcd"]}
1.5576e+09 {["/tmp/PandasetLidarData/Lidar/0003.pcd"]}
1.5576e+09 {["/tmp/PandasetLidarData/Lidar/0004.pcd"]}
1.5576e+09 {["/tmp/PandasetLidarData/Lidar/0005.pcd"]}
Read and display the first point cloud from lidarData.
ptCld = pcread(lidarData.filePath{1});
ax = pcshow(ptCld);
zoom(ax,4)
Download a ZIP file containing the camera images for the same subset of the PandaSet data set, and then unzip the file. In this example, you use the camera data for visual validation of the tracked actors. The size of the downloaded data set is 104 MB.
sensorDataFilename = "PandasetSensorData.zip";
url = "https://ssd.mathworks.com/supportfiles/driving/data/" + sensorDataFilename;
filePath = fullfile(dataFolder,sensorDataFilename);
if ~isfile(filePath)
    websave(filePath,url)
end
unzip(filePath,dataFolder)
dataset = fullfile(dataFolder,"PandasetSensorData");
data = load(fullfile(dataset,"sensorData.mat"));
monocamData = data.CameraData;
monocamData is a table with two columns:
- timeStamp — Time, in seconds, at which the image data was captured.
- fileName — Filenames of the images in the data set.
The images are located in the Camera folder in the dataset directory. Create a table that contains the file paths of these images with their relative timestamps by using the helperUpdateTable function.
imageFolder = "Camera";
monocamData = helperUpdateTable(monocamData,dataset,imageFolder);
Display the first five entries of monocamData.
monocamData(1:5,:)
ans=5×2 table
timeStamp filePath
__________ _____________________________________________
1.5576e+09 {["/tmp/PandasetSensorData/Camera/0001.jpg"]}
1.5576e+09 {["/tmp/PandasetSensorData/Camera/0002.jpg"]}
1.5576e+09 {["/tmp/PandasetSensorData/Camera/0003.jpg"]}
1.5576e+09 {["/tmp/PandasetSensorData/Camera/0004.jpg"]}
1.5576e+09 {["/tmp/PandasetSensorData/Camera/0005.jpg"]}
Read and display the first image from monocamData.
img = imread(monocamData.filePath{1});
imshow(img)
Detect Vehicles from Lidar Data
In this example, you use a pretrained pointPillarsObjectDetector (Lidar Toolbox) deep neural network model to detect vehicles from lidar data. Download a ZIP file containing the pretrained model, and then unzip it. The size of the downloaded model is 9 MB.
modelFilename = "LidarVehicleDetectorPointPillars.zip";
modelUrl = "https://ssd.mathworks.com/supportfiles/driving/data/" + modelFilename;
filePath = fullfile(dataFolder,modelFilename);
if ~isfile(filePath)
    websave(filePath,modelUrl)
end
ans = '/tmp/LidarVehicleDetectorPointPillars.zip'
unzip(filePath,dataFolder)
modelFolder = fullfile(dataFolder,"LidarVehicleDetectorPointPillars");
model = load(fullfile(modelFolder,"lidarVehicleDetectorPointPillars.mat"));
lidarVehicleDetector = model.detector;
Specify these parameters for the lidar vehicle detection model.
- threshold — Confidence score threshold for vehicle detection. The model ignores detections with a confidence score less than the threshold value. If you observe false detections, try increasing this value.
- xLimits — Range of coordinates along the X-axis.
- yLimits — Range of coordinates along the Y-axis.
params.threshold = 0.4;
params.xLimits = [-12 12];
params.yLimits = [0 60];
Detect vehicles for each point cloud in lidarData by using the helperDetectVehicles function. Note that, depending on your hardware configuration, this function can take a significant amount of time to run.
vehicleDetections = helperDetectVehicles(lidarVehicleDetector,lidarData,params);
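The helperDetectVehicles function is also attached to the example. A minimal sketch of the per-frame detection it performs, assuming the standard detect method of pointPillarsObjectDetector, might look like this (the sketch name is hypothetical):

function detections = helperDetectVehiclesSketch(detector,lidarData,params)
% Sketch (assumption): run the pretrained detector on every point cloud
% and collect the boxes, scores, and labels in a table. The shipped helper
% may also crop each point cloud to params.xLimits and params.yLimits.
numFrames = size(lidarData,1);
[Bboxes,Scores,Labels] = deal(cell(numFrames,1));
for k = 1:numFrames
    ptCloud = pcread(lidarData.filePath{k});
    [bboxes,scores,labels] = detect(detector,ptCloud, ...
        Threshold=params.threshold);
    Bboxes{k} = bboxes;
    Scores{k} = scores;
    Labels{k} = labels;
end
detections = table(Bboxes,Scores,Labels);
end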
vehicleDetections is an M-by-3 table containing the detected vehicle bounding boxes for each point cloud, their detection scores, and their labels. M is the total number of point clouds.
Read a point cloud, and extract the detected vehicle bounding boxes.
fileIdx = 1;
ptCloud = pcread(lidarData.filePath{fileIdx});
bboxes = vehicleDetections.Bboxes{fileIdx};
Overlay the vehicle detections on the point cloud, and display the overlaid point cloud.
posViewCrop = find(ptCloud.Location(:,:,1) < params.xLimits(2) ...
    & ptCloud.Location(:,:,1) > params.xLimits(1) ...
    & ptCloud.Location(:,:,2) < params.yLimits(2) ...
    & ptCloud.Location(:,:,2) > params.yLimits(1));
ptCloud = select(ptCloud,posViewCrop,OutputSize="full");
ax = pcshow(ptCloud.Location);
showShape(cuboid=bboxes,Parent=ax,Opacity=0.1,Color="green",LineWidth=1)
Generate Tracks from Detections
You must assign detections to distinct tracks to determine the entry and exit times of vehicles. If your detections contain noise, you can smooth the noisy detections using a JPDA tracker. To track vehicle detections in this example, you initialize the JPDA tracker using an interacting multiple model (IMM) filter with constant velocity and constant turn motion models.
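The filter initialization function helperMultiClassInitIMMFilter is attached to the example and is not shown here. For reference only, Sensor Fusion and Tracking Toolbox provides the built-in initekfimm function, which constructs a trackingIMM filter from a detection; the attached helper follows the same pattern, but uses a custom state that also carries the cuboid yaw and dimensions.

% Reference sketch (assumption): build a built-in IMM filter from a
% hypothetical position-only detection. This is not the filter the example
% uses; it only illustrates the trackingIMM structure.
det = objectDetection(0,[0;5;0]);
imm = initekfimm(det);
disp(imm.ModelProbabilities')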
Define a JPDA tracker using a trackerJPDA (Sensor Fusion and Tracking Toolbox) System object™ with these properties:
- ConfirmationThreshold — Specified as [4 7], to confirm a track if the vehicle is detected in at least 4 of the past 7 updates.
- DeletionThreshold — Specified as [3 3], to delete a track if the track is missing from the 3 most recent consecutive updates.
- AssignmentThreshold — Specified as [50 200], to set the normalized maximum distance at which to associate a detection to a track.
vehicleTracker = trackerJPDA(FilterInitializationFcn=@helperMultiClassInitIMMFilter, ...
    TrackLogic="History", ...
    AssignmentThreshold=[50 200], ...
    ClutterDensity=1e-12, ...
    ConfirmationThreshold=[4 7], ...
    DeletionThreshold=[3 3], ...
    InitializationThreshold=0.1, ...
    HitMissThreshold=0.2);
Specify the measurement noise measNoise. The measurement noise describes the uncertainty in the measurements.
measNoise = blkdiag(0.25*eye(3),25,eye(3));
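The block-diagonal layout matches the seven-element measurement [x y z yaw length width height]' that the helperAssembleDetections function assembles later in this example.

% Assumed layout: variances for x, y, and z (0.25), yaw (25), and the
% three cuboid dimensions (1) along the diagonal of the 7-by-7 matrix.
disp(diag(measNoise)')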
Create an object to display the tracks overlaid on the point clouds, and the respective camera frames, by using the helperLidarVisualizer class.
displayObj = helperLidarVisualizer;
initializeDisplay(displayObj)
Process the vehicle detections for each frame, and assign tracks if the detection meets the assignment criteria.
% Initialize an empty actorTracklist object to store the track data.
tracklist = actorTracklist;

% Get the list of all bounding boxes for each frame from the detector output.
Bboxes = vehicleDetections.Bboxes;

for i = 1:size(lidarData,1)
    % Read the current frame.
    ptCloud = pcread(lidarData.filePath{i});

    % Get the bounding boxes for the current frame.
    bboxes = Bboxes{i};

    % Get the timestamp.
    curTimeStamp = lidarData.timeStamp(i);

    % Convert the bounding boxes to object detections.
    measurements = helperAssembleDetections(bboxes,measNoise,curTimeStamp);

    % Return the updated tracks for the current timestamp.
    confirmedTracks = vehicleTracker(measurements,curTimeStamp);

    % Extract the bounding boxes and track IDs from the confirmed tracks.
    vehicleBboxes = helperExtractBboxes(confirmedTracks);

    if ~isempty(vehicleBboxes)
        for j = 1:numel(confirmedTracks)
            % Get the track ID of the confirmed track.
            trackid = string(confirmedTracks(j).TrackID);

            % Set the class ID to 1 for the vehicle class.
            classid = 1;
            vehiclePosition = {vehicleBboxes(j,1:3)};

            % Add the data to the actorTracklist object.
            tracklist.addData(curTimeStamp,trackid,classid,vehiclePosition);
        end
    end

    % Update the display with the current point cloud.
    ptCloudUnOrg = removeInvalidPoints(ptCloud);
    updatePointCloud(displayObj,ptCloudUnOrg)

    % Update the display with the current front-view image.
    frontViewImage = imread(monocamData.filePath{i});
    updateImage(displayObj,frontViewImage)

    % Update the display, if the tracks are not empty.
    if ~isempty(confirmedTracks)
        updateTracks(displayObj,confirmedTracks)
    end

    drawnow("limitrate")
end
Display the properties of tracklist.
tracklist
tracklist = 
  actorTracklist with properties:

         TimeStamp: [397×1 double]
          TrackIDs: {397×1 cell}
          ClassIDs: {397×1 cell}
          Position: {397×1 cell}
         Dimension: []
       Orientation: []
          Velocity: []
             Speed: []
         StartTime: 1.5576e+09
           EndTime: 1.5576e+09
        NumSamples: 397
    UniqueTrackIDs: [41×1 string]
You can use the actorprops function to extract actor properties from tracklist for use in generating a driving scenario. For more information on how to generate a scenario from tracklist, see the Generate Scenario from Actor Track List and GPS Data example. For information on how to generate a scenario for RoadRunner Scenario, see the Generate RoadRunner Scenario from Recorded Sensor Data example.
Helper Functions
helperExtractBboxes — Extract the vehicle bounding boxes from the tracks.
function bboxes = helperExtractBboxes(tracks)

positionSelector = [1 0 0 0 0 0 0 0 0 0 0;
                    0 0 1 0 0 0 0 0 0 0 0;
                    0 0 0 0 0 1 0 0 0 0 0];

yawSelector = [0 0 0 0 0 0 0 1 0 0 0];

dimensionSelector = [0 0 0 0 0 0 0 0 0 1 0;
                     0 0 0 0 0 0 0 0 1 0 0;
                     0 0 0 0 0 0 0 0 0 0 1];

% Retrieve bounding boxes from the tracks.
positions = getTrackPositions(tracks,positionSelector);
yaws = getTrackPositions(tracks,yawSelector);
dimensions = getTrackPositions(tracks,dimensionSelector);
bboxes = [positions dimensions zeros(size(positions,1),2) yaws];
end
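Each row of a selector matrix picks one component of the track state, so getTrackPositions returns selector*state for every track. As a small check of the layout assumed above, where x, y, and z occupy state elements 1, 3, and 6:

% Sketch: apply the position selector to a hypothetical 11-element state.
state = (1:11)';
positionSelector = [1 0 0 0 0 0 0 0 0 0 0;
                    0 0 1 0 0 0 0 0 0 0 0;
                    0 0 0 0 0 1 0 0 0 0 0];
disp((positionSelector*state)')   % returns 1 3 6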
helperAssembleDetections — Convert the detected vehicle bounding boxes to object detections, for use in the tracker.
function mydetections = helperAssembleDetections(bboxes,measNoise,timestamp)
% Assemble bounding boxes as a cell array of objectDetection objects.
mydetections = cell(size(bboxes,1),1);
for i = 1:size(bboxes,1)
    % This example assumes that all vehicles are cars.
    classid = 1;
    lidarModel = [bboxes(i,1:3) bboxes(i,end) bboxes(i,4:6)];
    mydetections{i} = objectDetection(timestamp, ...
        lidarModel',MeasurementNoise=measNoise, ...
        ObjectAttributes=struct("ClassID",classid));
end
end
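For example, you can assemble a single hypothetical bounding box (values chosen for illustration) and inspect the resulting measurement, which is ordered as [x y z yaw length width height]':

% Hypothetical 9-element cuboid [xctr yctr zctr xlen ylen zlen xrot yrot zrot].
bboxSample = [0 10 1 4.5 1.8 1.6 0 0 15];
det = helperAssembleDetections(bboxSample,measNoise,0);
disp(det{1}.Measurement')   % 0 10 1 15 4.5 1.8 1.6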
See Also
Objects
pointPillarsObjectDetector (Lidar Toolbox) | trackerJPDA (Sensor Fusion and Tracking Toolbox)
Related Topics
- Overview of Scenario Generation from Recorded Sensor Data
- Generate Scenario from Actor Track List and GPS Data
- Generate High Definition Scene from Lane Detections and OpenStreetMap
- Extract Lane Information from Recorded Camera Data for Scene Generation
- Generate RoadRunner Scene from Recorded Lidar Data
- Smooth GPS Waypoints for Ego Localization
- Ego Vehicle Localization Using GPS and IMU Fusion for Scenario Generation