vision.TemplateMatcher

Locate template in image

Description

To locate a template in an image:

  1. Create the vision.TemplateMatcher object and set its properties.

  2. Call the object with arguments, as if it were a function.

To learn more about how System objects work, see What Are System Objects? (MATLAB).

Creation

Syntax

tMatcher = vision.TemplateMatcher
tMatcher = vision.TemplateMatcher(Name,Value)

Description

example

tMatcher = vision.TemplateMatcher returns a template matcher System object, tMatcher. This object performs template matching by shifting a template in single-pixel increments throughout the interior of an image.

tMatcher = vision.TemplateMatcher(Name,Value) sets properties using one or more name-value pairs. Enclose each property name in quotes. For example, tMatcher = vision.TemplateMatcher('Metric','Sum of absolute differences') creates a template matcher that uses the sum of absolute differences metric.
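As a minimal sketch of creating and calling the object (synthetic data; the template is cut from the image itself):

```matlab
% Create a matcher and locate a template taken from the image.
tMatcher = vision.TemplateMatcher('Metric','Sum of absolute differences');

I = checkerboard(10);      % 80-by-80 synthetic grayscale image
T = I(21:40, 21:40);       % 20-by-20 template cut from the image
location = tMatcher(I, T); % [x y] coordinates of the best match
```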

Properties

Unless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the release function unlocks them.

If a property is tunable, you can change its value at any time.

For more information on changing property values, see System Design in MATLAB Using System Objects (MATLAB).

Metric — Metric used for template matching, specified as 'Sum of absolute differences', 'Sum of squared differences', or 'Maximum absolute difference'.

OutputValue — Type of output, specified as 'Metric matrix' or 'Best match location'.

SearchMethod — Search method for finding the minimum difference between the two inputs, specified as 'Exhaustive' or 'Three-step'. If you set this property to 'Exhaustive', the object searches for the minimum difference pixel by pixel. If you set this property to 'Three-step', the object searches for the minimum difference using a steadily decreasing step size. The 'Three-step' method is computationally less expensive than the 'Exhaustive' method, but sometimes does not find the optimal solution. This property applies when you set the OutputValue property to 'Best match location'.

BestMatchNeighborhoodOutputPort — Enable metric values output, specified as true or false. This property applies when you set the OutputValue property to 'Best match location'.

NeighborhoodSize — Size of the metric value matrix, specified as an odd number N. The object outputs an N-by-N matrix of metric values. For example, if the matrix size is 3-by-3, set this property to 3. This property applies when you set the OutputValue property to 'Best match location' and the BestMatchNeighborhoodOutputPort property to true.
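For instance, a matcher configured this way returns the metric values in a 3-by-3 neighborhood around the best match (a minimal sketch with synthetic data):

```matlab
tMatcher = vision.TemplateMatcher( ...
    'BestMatchNeighborhoodOutputPort', true, ...
    'NeighborhoodSize', 3);

I = zeros(50); I(20:26, 30:36) = 1;     % synthetic image with a bright patch
T = ones(7);                            % 7-by-7 template
[loc, Nvals, Nvalid] = tMatcher(I, T);  % Nvals is a 3-by-3 metric matrix
```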

ROIInputPort — Enable ROI specification through input, specified as true or false. Set this property to true to define a region of interest (ROI) over which to perform template matching. When this property is true, you must specify an ROI input. Otherwise, the object uses the entire input image.

ROIValidityOutputPort — Enable output of a flag indicating whether any part of the ROI is outside the input image, specified as true or false. When you set this property to true, the object returns an ROI flag. A false value of the flag indicates that part of the ROI is outside of the input image. This property applies when you set the ROIInputPort property to true.

Fixed-Point Properties

Rounding method for fixed-point operations, specified as 'Floor', 'Ceiling', 'Convergent', 'Nearest' , 'Round' , 'Simplest' , or 'Zero'.

Action to take when integer input is out of range, specified as 'Wrap' or 'Saturate'.

Product data type, specified as 'Same as input' or 'Custom'.

Product word and fraction lengths, specified as a scaled numerictype object. This property applies only when you set the ProductDataType property to 'Custom'.

Data type of accumulator, specified as 'Same as product', 'Same as input', or 'Custom'.

Accumulator word and fraction lengths, specified as a scaled numerictype object. This property applies only when you set the AccumulatorDataType property to 'Custom'.

Usage

For versions earlier than R2016b, use the step function to run the System object™ algorithm. The arguments to step are the object you created, followed by the arguments shown in this section.

For example, y = step(obj,x) and y = obj(x) perform equivalent operations.

Syntax

location = tMatcher(I,T)
[location,Nvals,Nvalid] = tMatcher(I,T,ROI)
[location,Nvals,Nvalid,ROIvalid] = tMatcher(I,T,ROI)
[location,ROIvalid] = tMatcher(I,T,ROI)

Description

example

location = tMatcher(I,T) computes the [x y] location coordinates, location, of the best template match between the image matrix, I, and the template matrix, T. The output coordinates are relative to the top left corner of the image. The object computes the location by shifting the template in single-pixel increments throughout the interior of the image.

[location,Nvals,Nvalid] = tMatcher(I,T,ROI) returns the location of the best template match, the metric values around the best match, Nvals, and a logical flag, Nvalid. This syntax applies when you set the OutputValue property to 'Best match location' and the BestMatchNeighborhoodOutputPort property to true.

[location,Nvals,Nvalid,ROIvalid] = tMatcher(I,T,ROI) also returns a logical flag, ROIvalid to indicate whether the ROI is outside the bounds of the input image I. This applies when you set the OutputValue property to 'Best match location', and the BestMatchNeighborhoodOutputPort, ROIInputPort, and ROIValidityOutputPort properties to true.

[location,ROIvalid] = tMatcher(I,T,ROI) also returns a logical flag, ROIvalid, indicating whether the specified ROI is outside the bounds of the input image I. This syntax applies when you set the OutputValue property to 'Best match location' and both the ROIInputPort and ROIValidityOutputPort properties to true.
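As a sketch of the ROI syntax (synthetic data; ROIInputPort and ROIValidityOutputPort enabled):

```matlab
tMatcher = vision.TemplateMatcher('ROIInputPort', true, ...
                                  'ROIValidityOutputPort', true);

I = zeros(50); I(20:26, 30:36) = 1;  % synthetic image with a bright patch
T = ones(7);                         % 7-by-7 template
ROI = [25 15 20 20];                 % [x y width height] search region
[location, ROIvalid] = tMatcher(I, T, ROI);
```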

Input Arguments

Input image, specified as either a 2-D grayscale or truecolor image.

Input template, specified as a 2-D grayscale or truecolor image.

Input ROI, specified as a four-element vector, [x y width height], where the first two elements represent the coordinates of the upper-left corner of the rectangular ROI.

Output Arguments

Metric value matrix, returned as a matrix of metric values computed in the neighborhood around the best match.

Valid neighborhood, returned as true or false. A false value for Nvalid indicates that the neighborhood around the best match extended outside the borders of the metric value matrix Nvals.

Valid ROI, returned as true or false. A false value for ROIvalid indicates that the ROI is outside the bounds of the input image.

Object Functions

To use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named obj, use this syntax:

release(obj)

step — Run System object algorithm
release — Release resources and allow changes to System object property values and input characteristics
reset — Reset internal states of System object

Examples

This example shows how to remove the effect of camera motion from a video stream.

Introduction

In this example, we first define the target to track: in this case, the back of a car and its license plate. We also establish a dynamic search region whose position is determined by the last known target location. We then search for the target only within this search region, which reduces the number of computations required to find the target. In each subsequent video frame, we determine how much the target has moved relative to the previous frame and use this information to remove unwanted translational camera motion and generate a stabilized video.

Initialization

Create a System object™ to read video from a multimedia file. We set the output to be intensity-only video.

% Input video file which needs to be stabilized.
filename = 'shaky_car.avi';

hVideoSource = vision.VideoFileReader(filename, ...
                                      'ImageColorSpace', 'Intensity',...
                                      'VideoOutputDataType', 'double');

Create a template matcher System object to compute the location of the best match of the target in the video frame. We use this location to find translation between successive video frames.

hTM = vision.TemplateMatcher('ROIInputPort', true, ...
                            'BestMatchNeighborhoodOutputPort', true);

Create a System object to display the original video and the stabilized video.

hVideoOut = vision.VideoPlayer('Name', 'Video Stabilization');
hVideoOut.Position(1) = round(0.4*hVideoOut.Position(1));
hVideoOut.Position(2) = round(1.5*(hVideoOut.Position(2)));
hVideoOut.Position(3:4) = [650 350];

Here we initialize some variables used in the processing loop.

pos.template_orig = [109 100]; % [x y] upper left corner
pos.template_size = [22 18];   % [width height]
pos.search_border = [15 10];   % max horizontal and vertical displacement
pos.template_center = floor((pos.template_size-1)/2);
pos.template_center_pos = (pos.template_orig + pos.template_center - 1);
fileInfo = info(hVideoSource);
W = fileInfo.VideoSize(1); % Width in pixels
H = fileInfo.VideoSize(2); % Height in pixels
BorderCols = [1:pos.search_border(1)+4 W-pos.search_border(1)+4:W];
BorderRows = [1:pos.search_border(2)+4 H-pos.search_border(2)+4:H];
sz = fileInfo.VideoSize;
TargetRowIndices = ...
  pos.template_orig(2)-1:pos.template_orig(2)+pos.template_size(2)-2;
TargetColIndices = ...
  pos.template_orig(1)-1:pos.template_orig(1)+pos.template_size(1)-2;
SearchRegion = pos.template_orig - pos.search_border - 1;
Offset = [0 0];
Target = zeros(18,22);
firstTime = true;

Stream Processing Loop

This is the main processing loop which uses the objects we instantiated above to stabilize the input video.

while ~isDone(hVideoSource)
    input = hVideoSource();

    % Find location of Target in the input video frame
    if firstTime
      Idx = int32(pos.template_center_pos);
      MotionVector = [0 0];
      firstTime = false;
    else
      IdxPrev = Idx;

      ROI = [SearchRegion, pos.template_size+2*pos.search_border];
      Idx = hTM(input,Target,ROI);

      MotionVector = double(Idx-IdxPrev);
    end

    [Offset, SearchRegion] = updatesearch(sz, MotionVector, ...
        SearchRegion, Offset, pos);

    % Translate video frame to offset the camera motion
    Stabilized = imtranslate(input, Offset, 'linear');

    Target = Stabilized(TargetRowIndices, TargetColIndices);

    % Add black border for display
    Stabilized(:, BorderCols) = 0;
    Stabilized(BorderRows, :) = 0;

    TargetRect = [pos.template_orig-Offset, pos.template_size];
    SearchRegionRect = [SearchRegion, pos.template_size + 2*pos.search_border];

    % Draw rectangles on input to show target and search region
    input = insertShape(input, 'Rectangle', [TargetRect; SearchRegionRect],...
                        'Color', 'white');
    % Display the offset (displacement) values on the input image
    txt = sprintf('(%+05.1f,%+05.1f)', Offset);
    input = insertText(input(:,:,1),[191 215],txt,'FontSize',16, ...
                    'TextColor', 'white', 'BoxOpacity', 0);
    % Display video
    hVideoOut([input(:,:,1) Stabilized]);
end

Release

Here you call the release method on the objects to close any open files and devices.

release(hVideoSource);

Conclusion

Using Computer Vision Toolbox™ functionality from the MATLAB® command line, it is easy to implement complex systems like video stabilization.

Appendix

The helper function updatesearch, called in the processing loop above, is included with this example.
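The example's actual updatesearch code is not reproduced on this page. A hypothetical sketch of what such a helper might do (accumulate the motion into the running offset and clamp the search window so the ROI stays inside the image; this is an illustration, not the shipped implementation):

```matlab
function [Offset, SearchRegion] = updatesearch(sz, MotionVector, ...
    SearchRegion, Offset, pos)
% Hypothetical sketch: track the cumulative offset and move the search
% window by the measured motion, keeping the ROI inside the image bounds.
Offset = Offset + MotionVector;
SearchRegion = SearchRegion + MotionVector;
roiSize = pos.template_size + 2*pos.search_border;   % [width height]
SearchRegion = max(SearchRegion, [1 1]);             % clamp to top-left
SearchRegion = min(SearchRegion, sz(1:2) - roiSize + 1); % clamp bottom-right
end
```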

Algorithms

Typical use of the template matcher involves finding a small region within a larger image. The region is specified by the template image, which can be as large as the input image but is typically smaller.

The object outputs the best match coordinates relative to the top-left corner of the image. The [x y] coordinates of the location correspond to the center of the template. When you use a template with an odd number of pixels, the object uses the center pixel of the template as the location. When you use a template with an even number of pixels, the object uses the centered upper-left pixel as the location.
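As a sketch of this coordinate convention (synthetic data; an odd-sized template, so the reported location is the template center):

```matlab
tMatcher = vision.TemplateMatcher;   % default output: best match location

I = zeros(50);                       % synthetic 50-by-50 image
I(20:26, 30:36) = 1;                 % bright 7-by-7 patch
T = ones(7);                         % odd-sized (7-by-7) template
loc = tMatcher(I, T);                % [x y] at the patch center: [33 23]
```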

Extended Capabilities

Introduced in R2012a