Tracking and Motion Estimation

Optical flow, activity recognition, motion estimation, object re-identification, and tracking

Motion estimation and tracking are key activities in many computer vision applications, including activity recognition, traffic monitoring, automotive safety, and surveillance.

Computer Vision Toolbox™ provides video tracking algorithms, such as continuously adaptive mean shift (CAMShift) and Kanade-Lucas-Tomasi (KLT). You can use these algorithms to track a single object or as building blocks in a more complex tracking system. The toolbox also provides a framework for multiple object tracking that includes a Kalman filter and the Hungarian algorithm for assigning object detections to tracks.
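The following is a minimal sketch of single-object tracking with vision.PointTracker (the KLT algorithm), not a complete example from the documentation. The video file visiontraffic.avi ships with the toolbox as example data; the region of interest and tracker settings below are illustrative assumptions, not recommended values.

videoReader = VideoReader("visiontraffic.avi");
frame = readFrame(videoReader);

% Detect corner features inside a hand-picked region of interest.
% The bounding box is an assumed placeholder ([x y width height]);
% in practice it would come from an object detector or user selection.
roi = [100 100 120 120];
points = detectMinEigenFeatures(im2gray(frame), "ROI", roi);

% Initialize the KLT point tracker with the detected feature locations.
tracker = vision.PointTracker("MaxBidirectionalError", 2);
initialize(tracker, points.Location, frame);

% Track the points through the remaining frames and display them.
player = vision.VideoPlayer;
while hasFrame(videoReader)
    frame = readFrame(videoReader);
    [trackedPoints, validity] = tracker(frame);
    out = insertMarker(frame, trackedPoints(validity, :), "+");
    step(player, out);
end
release(player);

For multiple objects, the same pattern is typically combined with configureKalmanFilter to predict each track and assignDetectionsToTracks to match new detections to existing tracks.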

Motion estimation is the process of determining the movement of blocks between adjacent video frames. This toolbox includes motion estimation algorithms, such as optical flow, block matching, and template matching. These algorithms create motion vectors, which can relate to the whole image, blocks, arbitrary patches, or individual pixels. For block and template matching, the evaluation metrics for finding the best match include mean square error (MSE), mean absolute deviation (MAD), maximum absolute difference (MaxAD), sum of absolute difference (SAD), and sum of squared difference (SSD).
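As a rough sketch of dense motion estimation, the snippet below runs the Farneback optical flow object over a video and overlays the estimated flow vectors on each frame. The example file and the decimation and scale settings are assumptions chosen only for illustration.

videoReader = VideoReader("visiontraffic.avi");
opticFlow = opticalFlowFarneback;

while hasFrame(videoReader)
    frame = readFrame(videoReader);

    % estimateFlow returns an opticalFlow object holding per-pixel
    % velocity components (Vx, Vy), magnitude, and orientation.
    flow = estimateFlow(opticFlow, im2gray(frame));

    % Visualize a decimated set of flow vectors on top of the frame.
    imshow(frame)
    hold on
    plot(flow, "DecimationFactor", [5 5], "ScaleFactor", 10)
    hold off
    drawnow
end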

Functions

vision.BinaryFileReader - Read video data from binary files
vision.BinaryFileWriter - Write binary video data to files
vision.DeployableVideoPlayer - Display video
vision.VideoPlayer - Play video or display image
vision.VideoFileWriter - Write video frames and audio samples to video file
VideoReader - Create object to read video files
assignDetectionsToTracks - Assign detections to tracks for multiobject tracking
bbox2points - Convert rectangle to corner points list
configureKalmanFilter - Create Kalman filter for object tracking
vision.KalmanFilter - Correction of measurement, state, and state estimation error covariance
vision.HistogramBasedTracker - Histogram-based object tracking
vision.PointTracker - Track points in video using Kanade-Lucas-Tomasi (KLT) algorithm
vision.BlockMatcher - Estimate motion between images or video frames
vision.TemplateMatcher - Locate template in image
reidentificationNetwork - Re-identification deep learning network for re-identifying and tracking objects (Since R2024a)
extractReidentificationFeatures - Extract object re-identification (ReID) features from image (Since R2024a)
trainReidentificationNetwork - Train re-identification (ReID) deep learning network (Since R2024a)
evaluateReidentificationNetwork - Evaluate re-identification network using cumulative matching characteristic (CMC) and mean average precision (mAP) metrics (Since R2024a)
reidentificationMetrics - Re-identification (ReID) quality metrics (Since R2024a)
opticalFlow - Object for storing optical flow matrices
opticalFlowRAFT - Estimate optical flow using RAFT deep learning algorithm (Since R2024b)
opticalFlowFarneback - Object for estimating optical flow using Farneback method
opticalFlowHS - Object for estimating optical flow using Horn-Schunck method
opticalFlowLK - Object for estimating optical flow using Lucas-Kanade method
opticalFlowLKDoG - Object for estimating optical flow using Lucas-Kanade derivative of Gaussian method
insertMarker - Insert markers in image or video
insertShape - Insert shapes in image or video
insertObjectAnnotation - Annotate truecolor or grayscale image or video
insertText - Insert text in image or video
imshow - Display image
imshowpair - Compare differences between images

Topics

Featured Examples