MATLAB and Simulink Training

Automated Driving with MATLAB

Course Details

This two-day course provides hands-on experience with developing and verifying automated driving perception algorithms. Examples and exercises demonstrate the use of appropriate MATLAB® and Automated Driving Toolbox™ functionality.

Topics include:

  • Labeling of ground truth data
  • Visualizing sensor data
  • Detecting lanes and vehicles
  • Processing lidar point clouds
  • Tracking and sensor fusion
  • Generating driving scenarios and modeling sensors

Day 1 of 2


Labeling of Ground Truth Data

Objective: Label ground truth data in a video or sequence of images interactively. Automate the labeling with detection and tracking algorithms.

  • Overview of the Ground Truth Labeler app
  • Label regions of interest (ROIs) and scenes
  • Automate labeling
  • View and export ground truth results
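
Although the labeling itself happens in the app, the workflow has a programmatic side. A minimal sketch, assuming a placeholder video file and export MAT-file (names below are made up, not course files):

    % Open the Ground Truth Labeler app on a video (file name is a placeholder).
    groundTruthLabeler('drivingVideo.mp4')

    % After labeling in the app and exporting the results to a MAT-file:
    S = load('labeledDrivingVideo.mat');   % assumed export file
    gTruth = S.gTruth;                     % groundTruth object
    gTruth.LabelDefinitions                % table of defined ROI and scene labels
    head(gTruth.LabelData)                 % first rows of the labeled-data timetable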

Visualizing Sensor Data

Objective: Visualize camera frames along with radar and lidar detections. Use the appropriate coordinate systems to convert between image and vehicle coordinates.

  • Create a bird’s-eye plot
  • Plot sensor coverage areas
  • Visualize detections and lanes
  • Convert from vehicle to image coordinates
  • Annotate video with detections and lane boundaries
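
A minimal sketch of these steps; the camera parameters, sensor mounting position, and detection positions below are placeholders:

    % Bird's-eye plot with a radar coverage area and two detections.
    bep = birdsEyePlot('XLim',[0 50],'YLim',[-20 20]);
    covPlotter = coverageAreaPlotter(bep,'DisplayName','Radar coverage');
    plotCoverageArea(covPlotter,[3.4 0],50,0,30);   % sensor at [3.4 0] m, 50 m range, 30 deg FOV
    detPlotter = detectionPlotter(bep,'DisplayName','Radar detections');
    plotDetection(detPlotter,[20 1; 35 -4]);        % [x y] positions in vehicle coordinates

    % Convert a point from vehicle to image coordinates (camera parameters assumed).
    intrinsics = cameraIntrinsics([800 800],[320 240],[480 640]);
    sensor = monoCamera(intrinsics,1.5);            % camera mounted 1.5 m above the road
    imagePoint = vehicleToImage(sensor,[10 0])      % point on the road 10 m ahead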

Detecting Lanes and Vehicles

Objective: Segment lane markings and model them as parabolic lane boundaries. Use pretrained object detectors to detect vehicles.

  • Perform a bird’s-eye view transform
  • Detect lane features
  • Compute a lane model
  • Validate lane detection with ground truth
  • Detect vehicles with pretrained object detectors
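
A condensed sketch of the lane and vehicle detection pipeline; the camera parameters, image file name, and marker width are placeholders:

    % Camera model and a video frame (values and file name are placeholders).
    intrinsics = cameraIntrinsics([800 800],[320 240],[480 640]);
    sensor = monoCamera(intrinsics,1.5);
    frame = imread('highwayFrame.png');

    % Bird's-eye-view transform covering 3-30 m ahead and +/-6 m to each side.
    bevConfig = birdsEyeView(sensor,[3 30 -6 6],[NaN 250]);
    bevFrame = transformImage(bevConfig,frame);

    % Segment lane marker pixels and fit parabolic lane boundary models.
    laneBW = segmentLaneMarkerRidge(im2gray(bevFrame),bevConfig,0.25);
    [r,c] = find(laneBW);
    xyPoints = imageToVehicle(bevConfig,[c r]);      % back to vehicle coordinates
    boundaries = findParabolicLaneBoundaries(xyPoints,0.25);

    % Detect vehicles in the original frame with a pretrained ACF detector.
    detector = vehicleDetectorACF();
    [bboxes,scores] = detect(detector,frame);
    annotated = insertObjectAnnotation(frame,'rectangle',bboxes,scores);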

Processing Lidar Point Clouds

Objective: Work with lidar data stored as 3-D point clouds. Import, visualize, and process point clouds by segmenting them into clusters. Register point clouds to align and build an accumulated point cloud map.

  • Import and visualize point clouds
  • Preprocess point clouds
  • Segment objects from lidar sensor data
  • Build a map from lidar sensor data
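
A minimal sketch of the point cloud workflow; the PCD file names and the grid and cluster distances are placeholders:

    % Import and visualize a point cloud (file names are placeholders).
    ptCloud = pcread('lidarScan1.pcd');
    pcshow(ptCloud)

    % Preprocess and segment: downsample, then cluster points within 0.5 m.
    ptCloudDown = pcdownsample(ptCloud,'gridAverage',0.2);
    labels = pcsegdist(ptCloudDown,0.5);

    % Register a second scan against the first and merge into an accumulated map.
    moving = pcread('lidarScan2.pcd');
    tform = pcregistericp(pcdownsample(moving,'gridAverage',0.2),ptCloudDown);
    map = pcmerge(ptCloud,pctransform(moving,tform),0.1);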

Day 2 of 2


Fusing Sensor Detections and Tracking

Objective: Create a multi-object tracker to fuse information from multiple sensors such as camera, radar, and lidar.

  • Track multiple objects
  • Preprocess detections
  • Use Kalman filters
  • Manage multiple tracks
  • Track with multi-object tracker
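
A minimal sketch of a multi-object tracker driven by position detections; the detection values and motion below are made up:

    % Tracker with a constant-velocity extended Kalman filter initializer.
    tracker = multiObjectTracker('FilterInitializationFcn',@initcvekf);

    % Feed detections over time; tracks are confirmed after enough associated hits.
    for t = 0:0.1:0.5
        det = objectDetection(t,[20 + 5*t; 3; 0]);   % object moving at ~5 m/s in x
        confirmedTracks = updateTracks(tracker,{det},t);
    end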

Tracking Extended Objects

Objective: Create a probability hypothesis density (PHD) tracker to track extended objects and estimate their spatial extent.

  • Define sensor configurations
  • Track extended objects
  • Estimate spatial extent
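
As a rough sketch, an extended object typically returns several detections per scan; these can be partitioned into candidate objects and used to initialize the Gamma Gaussian inverse Wishart PHD filter that a PHD tracker works with (positions below are made up):

    % Three closely spaced detections from one extended object at time 0.
    dets = {objectDetection(0,[10.0; 2.0; 0]); ...
            objectDetection(0,[10.4; 2.6; 0]); ...
            objectDetection(0,[10.8; 3.1; 0])};

    % Enumerate possible groupings of the detections into objects.
    partitions = partitionDetections(dets);

    % Constant-velocity GGIW-PHD filter initialized from the detections.
    phd = initcvggiwphd(dets);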

Generating Driving Scenarios and Modeling Sensors

Objective: Create driving scenarios and synthetic radar and camera sensor detections interactively to test automated driving perception algorithms.

  • Overview of the Driving Scenario Designer app
  • Create scenarios with roads, actors, and sensors
  • Simulate and visualize scenarios
  • Generate detections and export scenarios
  • Test algorithms with scenarios
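
A minimal programmatic counterpart to the app workflow; the road geometry, speeds, and sensor choice below are arbitrary:

    % Straight 100 m road with an ego vehicle and one slower lead vehicle.
    scenario = drivingScenario;
    road(scenario,[0 0; 100 0]);
    ego = vehicle(scenario,'ClassID',1);
    trajectory(ego,[1 0; 99 0],20);                 % ego travels at 20 m/s
    lead = vehicle(scenario,'ClassID',1);
    trajectory(lead,[30 0; 99.5 0],15);             % lead vehicle at 15 m/s

    % Synthetic vision sensor reporting detections in ego coordinates.
    sensor = visionDetectionGenerator('SensorIndex',1);
    while advance(scenario)
        dets = sensor(targetPoses(ego),scenario.SimulationTime);
    end

The same scenario can also be built interactively in the Driving Scenario Designer app and exported as MATLAB code.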

Level: Intermediate

Prerequisites:

Duration: 2 days

Languages: English, Korean
