Ground-truth labeling is the process of annotating recorded sensor data with information on objects, conditions, and events in a vehicle’s surroundings. Labeled ground-truth data is then used to test the performance of perception systems by comparing the output of a perception algorithm with the labeled ground truth. Ground-truth labeling is typically performed on video data and is usually a time-intensive manual process. Automated Driving System Toolbox™ provides an app and workflow to automate the labeling of ground-truth data.
In the Ground Truth Labeler app, Automated Driving System Toolbox applies deep learning and computer vision detection and tracking algorithms to automate ground-truth labeling. The app also lets you import your own algorithms to automate the labeling of ground truth.
The system toolbox also provides tools to compare the output of a perception algorithm against labeled ground truth.
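A comparison between a perception algorithm's output and labeled ground truth is commonly scored by intersection-over-union (IoU) matching of bounding boxes. The following is a minimal, library-free Python sketch of that idea, not the toolbox's own evaluation API:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def evaluate_frame(detections, ground_truth, iou_threshold=0.5):
    """Greedily match detections to ground-truth boxes and count
    true positives, false positives, and misses for one frame."""
    unmatched_gt = list(ground_truth)
    tp = 0
    for det in detections:
        best = max(unmatched_gt, key=lambda gt: iou(det, gt), default=None)
        if best is not None and iou(det, best) >= iou_threshold:
            tp += 1
            unmatched_gt.remove(best)
    return {"tp": tp, "fp": len(detections) - tp, "fn": len(unmatched_gt)}
```

Aggregating these per-frame counts over a labeled sequence yields precision and recall for the perception algorithm under test.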
Automated driving systems use vision, radar, ultrasound, and combinations of sensor technologies to automate dynamic driving tasks. These tasks include steering, braking, and acceleration. Automated driving spans a wide range of automation levels — from advanced driver assistance systems (ADAS) to fully autonomous driving. The complexity of automated driving tasks makes it necessary for the system to use information from multiple complementary sensors, making sensor fusion a critical component of any automated driving workflow.
Automated Driving System Toolbox provides functions and tools to track outputs from various sensors over time, and to combine output from multiple sensors to perform sensor fusion. For object tracking, the system toolbox provides several Kalman filters including linear, extended, and unscented variants.
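The predict/correct cycle behind those filters can be illustrated with a minimal linear, constant-velocity Kalman filter. This is a language-agnostic Python sketch of the concept, not the toolbox's MATLAB API:

```python
import numpy as np

class LinearKalmanFilter:
    """Minimal constant-velocity Kalman filter for a 1-D track.
    A conceptual sketch only; real trackers use richer motion models."""

    def __init__(self, dt=0.1, process_noise=1.0, meas_noise=1.0):
        # State: [position, velocity]; only position is measured.
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
        self.H = np.array([[1.0, 0.0]])             # measurement model
        self.Q = process_noise * np.eye(2)          # process noise covariance
        self.R = np.array([[meas_noise]])           # measurement noise covariance
        self.x = np.zeros((2, 1))                   # state estimate
        self.P = np.eye(2)                          # estimate covariance

    def predict(self):
        # Propagate the state and covariance one time step forward.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        # Correct the prediction with a new position measurement z.
        y = np.array([[z]]) - self.H @ self.x          # innovation
        S = self.H @ self.P @ self.H.T + self.R        # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x
```

Extended and unscented variants replace the linear models F and H with nonlinear functions, which is what makes them suitable for radar measurements in polar coordinates.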
For rapid prototyping and embedded implementation, the tracking and sensor fusion algorithms in Automated Driving System Toolbox support C code generation using MATLAB Coder™.
Automated Driving System Toolbox provides a suite of computer vision algorithms that use data from cameras to detect and track objects of interest such as lane markers, vehicles, and pedestrians. Algorithms in the system toolbox are tailored to ADAS and autonomous driving applications.
Object detection is used to locate objects of interest such as pedestrians and vehicles to help perception systems automate braking and steering tasks. The system toolbox provides pretrained detectors for vehicles, pedestrians, and lane markers based on machine learning, including deep learning, as well as functionality to train custom detectors.
Automated Driving System Toolbox lets you create the output of a monocular camera sensor from raw video: a list of detected vehicles, lane boundaries, and estimated distances to objects.
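One common way a monocular sensor turns a detected vehicle into a distance estimate is the flat-road pinhole-camera model: the image row of the object's ground contact point, together with the camera's mounting height and focal length, determines the distance. Below is a simplified sketch under those assumptions; the function and parameter names are illustrative, not toolbox functions:

```python
def distance_from_bottom_row(v_bottom, camera_height_m, focal_px, horizon_row):
    """Estimate forward distance to an object's ground contact point,
    assuming a pinhole camera mounted level above a flat road.

    v_bottom:        image row (pixels) of the object's bottom edge
    camera_height_m: camera mounting height above the road surface
    focal_px:        focal length in pixels
    horizon_row:     image row where the horizon projects
    """
    dv = v_bottom - horizon_row  # rows below the horizon
    if dv <= 0:
        raise ValueError("object bottom must lie below the horizon")
    # Similar triangles: distance / camera_height = focal_px / dv
    return camera_height_m * focal_px / dv
```

Objects whose contact point sits closer to the horizon row map to larger distances, which is why small row errors translate into large range errors at long distances.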
Sensor fusion and control algorithms for automated driving systems require rigorous testing. Vehicle-based testing is not only time consuming to set up, but also difficult to reproduce. Automated Driving System Toolbox provides functionality to define road networks, actors, vehicles, and traffic scenarios, as well as statistical models for simulating synthetic radar and camera sensor detections.
The system toolbox provides a workflow for testing control or sensor fusion algorithms with synthetic data generated from a specific traffic scenario. The object lists generated for each traffic scenario can also be used to set up hardware-in-the-loop (HIL) tests with this synthetic data.
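A statistical sensor model of the kind described above can be pictured as a simple pipeline: take ground-truth actor positions from the scenario, drop targets that are out of range or randomly missed, and perturb the rest with measurement noise. A Python sketch of that idea follows; the names and noise model are illustrative assumptions, not the toolbox's radar or camera models:

```python
import random

def synthetic_detections(actor_positions, max_range=100.0,
                         sigma_xy=0.5, detect_prob=0.9, seed=None):
    """Simulate a statistical sensor model over one scenario time step.

    Each true actor position (x, y) in the sensor's range produces a
    noisy detection with probability detect_prob; positions beyond
    max_range produce none. Returns the object list for this step.
    """
    rng = random.Random(seed)
    detections = []
    for x, y in actor_positions:
        if (x * x + y * y) ** 0.5 > max_range:
            continue  # target out of sensor range: no detection
        if rng.random() > detect_prob:
            continue  # randomly missed detection
        detections.append((x + rng.gauss(0, sigma_xy),
                           y + rng.gauss(0, sigma_xy)))
    return detections
```

Feeding object lists like these, frame by frame, into a tracking or sensor fusion algorithm reproduces a scenario deterministically when the random seed is fixed, which is what makes synthetic testing repeatable in a way vehicle-based testing is not.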
The system toolbox provides visualization tools customized for ADAS and autonomous driving workflows to aid with the design, debugging, and testing of automated driving systems.