Adding a blur to the Face Detection and Tracking Using the KLT Algorithm Script?
Simply put, I have two scripts that I would like to combine. The first script, MathWorks' Face Detection and Tracking Using the KLT Algorithm example, works very well. I'm looking to add a blur to the face after the script has detected it, when the video results are displayed.
The other script I have tracks and blurs the face; however, it is based on a webcam feed, which is not my intent. My video is pre-recorded rather than real time, which should make this simple.
The Face Detection and Tracking Using the KLT Algorithm script is listed below, followed by the blur script that I'm looking to merge into it.
When finished, the combined script should track the face and blur it out as it moves across the screen.
Any help combining these two features would be greatly appreciated. Thanks for your time.
%% Face Detection and Tracking Using the KLT Algorithm
% This example shows how to automatically detect and track a face using
% feature points. The approach in this example keeps track of the face even
% when the person tilts his or her head, or moves toward or away from the
% camera.
%
% Copyright 2014 The MathWorks, Inc.

%% Introduction
% Object detection and tracking are important in many computer vision
% applications including activity recognition, automotive safety, and
% surveillance. In this example, you will develop a simple face tracking
% system by dividing the tracking problem into three parts:
%
% # Detect a face
% # Identify facial features to track
% # Track the face

%% Detect a Face
% First, you must detect the face. Use the vision.CascadeObjectDetector
% System object™ to detect the location of a face in a video frame. The
% cascade object detector uses the Viola-Jones detection algorithm and a
% trained classification model for detection. By default, the detector is
% configured to detect faces, but it can be used to detect other types of
% objects.

% Create a cascade detector object.
faceDetector = vision.CascadeObjectDetector();

% Read a video frame and run the face detector.
videoFileReader = vision.VideoFileReader('IMG_7062.MOV');
videoFrame = step(videoFileReader);
bbox = step(faceDetector, videoFrame);

% Draw the returned bounding box around the detected face.
videoFrame = insertShape(videoFrame, 'Rectangle', bbox);
figure; imshow(videoFrame); title('Detected face');

% Convert the first box into a list of 4 points
% This is needed to be able to visualize the rotation of the object.
bboxPoints = bbox2points(bbox(1, :));

%%
% To track the face over time, this example uses the Kanade-Lucas-Tomasi
% (KLT) algorithm. While it is possible to use the cascade object detector
% on every frame, it is computationally expensive. It may also fail to
% detect the face, when the subject turns or tilts his head. This
% limitation comes from the type of trained classification model used for
% detection. The example detects the face only once, and then the KLT
% algorithm tracks the face across the video frames.

%% Identify Facial Features To Track
% The KLT algorithm tracks a set of feature points across the video frames.
% Once the detection locates the face, the next step in the example
% identifies feature points that can be reliably tracked. This example
% uses the standard, "good features to track" proposed by Shi and Tomasi.

% Detect feature points in the face region.
points = detectMinEigenFeatures(rgb2gray(videoFrame), 'ROI', bbox);

% Display the detected points.
figure, imshow(videoFrame), hold on, title('Detected features');
plot(points);

%% Initialize a Tracker to Track the Points
% With the feature points identified, you can now use the
% vision.PointTracker System object to track them. For each point in the
% previous frame, the point tracker attempts to find the corresponding
% point in the current frame. Then the estimateGeometricTransform
% function is used to estimate the translation, rotation, and scale between
% the old points and the new points. This transformation is applied to the
% bounding box around the face.

% Create a point tracker and enable the bidirectional error constraint to
% make it more robust in the presence of noise and clutter.
pointTracker = vision.PointTracker('MaxBidirectionalError', 2);

% Initialize the tracker with the initial point locations and the initial
% video frame.
points = points.Location;
initialize(pointTracker, points, videoFrame);

%% Initialize a Video Player to Display the Results
% Create a video player object for displaying video frames.
videoPlayer = vision.VideoPlayer('Position', ...
    [100 100 [size(videoFrame, 2), size(videoFrame, 1)]+30]);

%% Track the Face
% Track the points from frame to frame, and use
% estimateGeometricTransform function to estimate the motion of the face.

% Make a copy of the points to be used for computing the geometric
% transformation between the points in the previous and the current frames
oldPoints = points;

while ~isDone(videoFileReader)
% get the next frame
videoFrame = step(videoFileReader);
% Track the points. Note that some points may be lost.
[points, isFound] = step(pointTracker, videoFrame);
visiblePoints = points(isFound, :);
oldInliers = oldPoints(isFound, :);
if size(visiblePoints, 1) >= 2 % need at least 2 points
% Estimate the geometric transformation between the old points
% and the new points and eliminate outliers
[xform, oldInliers, visiblePoints] = estimateGeometricTransform(...
oldInliers, visiblePoints, 'similarity', 'MaxDistance', 4);
% Apply the transformation to the bounding box points
bboxPoints = transformPointsForward(xform, bboxPoints);
% Insert a bounding box around the object being tracked
bboxPolygon = reshape(bboxPoints', 1, []);
videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon, ...
'LineWidth', 2);
% Display tracked points
videoFrame = insertMarker(videoFrame, visiblePoints, '+', ...
'Color', 'white');
% Reset the points
oldPoints = visiblePoints;
setPoints(pointTracker, oldPoints);
end
% Display the annotated video frame using the video player object
step(videoPlayer, videoFrame);
end
% Clean up
release(videoFileReader);
release(videoPlayer);
release(pointTracker);

%% Summary
% In this example, you created a simple face tracking system that
% automatically detects and tracks a single face. Try changing the input
% video, and see if you are still able to detect and track a face. Make
% sure the person is facing the camera in the initial frame for the
% detection step.

displayEndOfDemoMessage(filename)

--------------------------------------------------------------
%%%%%% The other script, which blurs the face %%%%%%%%%%%%%%%%

% Display blurred image, replacing the original image
bboxPoints_copy = int32(bboxPoints);
target = videoFrame(bboxPoints_copy(2,2):bboxPoints_copy(4,2), ...
    bboxPoints_copy(1,1):bboxPoints_copy(3,1), :);
blur_target = blur(target, 5);
videoFrame(bboxPoints_copy(2,2):bboxPoints_copy(4,2), ...
    bboxPoints_copy(1,1):bboxPoints_copy(3,1), :) = blur_target;
end

% Reset the points.
oldPoints = visiblePoints;
setPoints(pointTracker, oldPoints);
end
end

% Display the annotated video frame using the video player object.
step(videoPlayer, videoFrame);
% Check whether the video player window has been closed.
runLoop = isOpen(videoPlayer);
end
% Clean up.
clear cam;
release(videoPlayer);
release(pointTracker);
release(faceDetector);
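For anyone attempting the merge, note that the blur helper called in the second snippet is not a built-in MATLAB function. Below is a minimal sketch of how the crop-and-blur step could be folded into the tracking loop of the first script, placed right after bboxPoints is updated by transformPointsForward. It assumes the Image Processing Toolbox is available and substitutes imgaussfilt for the undefined blur helper; the clamping to the frame bounds is an added safeguard so the crop indices stay valid if the tracked box drifts partly off screen.

% Sketch: insert inside the "if size(visiblePoints, 1) >= 2" block,
% right after bboxPoints = transformPointsForward(xform, bboxPoints);

% Take the axis-aligned bounding box of the (possibly rotated) corner
% points and clamp it to the frame dimensions.
x1 = max(1, floor(min(bboxPoints(:,1))));
x2 = min(size(videoFrame,2), ceil(max(bboxPoints(:,1))));
y1 = max(1, floor(min(bboxPoints(:,2))));
y2 = min(size(videoFrame,1), ceil(max(bboxPoints(:,2))));

% Blur only the face region; a larger sigma gives a stronger blur.
videoFrame(y1:y2, x1:x2, :) = imgaussfilt(videoFrame(y1:y2, x1:x2, :), 8);

Because this blurs the axis-aligned rectangle enclosing the rotated polygon, the blurred patch is slightly larger than the face when the head tilts, which is usually acceptable for de-identification.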
Answers (2)
Keshia Peters
on 15 Oct 2016
Any luck, Amy? This combination would be very helpful to me as well in de-identifying hospital gait videos.
Ethan Douglas
on 8 Jun 2018
I don't have an answer for blurring, but I have made a script to "block" faces from data collection videos. Here's what I've been using:
clear;
% Create a cascade detector object.
faceDetector = vision.CascadeObjectDetector();
% Create the point tracker object.
pointTracker = vision.PointTracker('MaxBidirectionalError', 2);
%Ask the user to input filename and extension
prompt = 'What is the filename of the video you would like to block? (Not including file type): ';
file = input(prompt,'s');
prompt = 'What is the file extension (file type, e.g. ".mp4"): ';
extension = input(prompt, 's');
filename = strcat(file,extension);
% Read a video frame and run the face detector.
videoFileReader = vision.VideoFileReader(filename);
% Initialize video writer object, saving the file as an avi file with
% "blocked" before the original file name
b = 'blocked_';
videoFileWriter = vision.VideoFileWriter(strcat(b,file,'.avi'), ...
    'FrameRate', videoFileReader.info.VideoFrameRate, ...
    'VideoCompressor', 'DV Video Encoder');
videoFrame = step(videoFileReader);
%Set the number of points equal to zero to initialize the tracker
numPts = 0;
%Begin a loop that runs while the video is being read, frame by frame
while ~isDone(videoFileReader)
% Get the next frame.
videoFrame = step(videoFileReader);
%convert frame to gray scale to help find points
videoFrameGray = rgb2gray(videoFrame);
if numPts < 10
% Detection mode. bbox is an n x 4 matrix, with n representing the
% number of "faces" detected, where each row is [xcoord ycoord
% width height] of the face
bbox = faceDetector.step(videoFrameGray);
%If there is a face detected
if ~isempty(bbox)
% Find corner points inside the detected region.
points = detectMinEigenFeatures(videoFrameGray, 'ROI', bbox(1, :));
% Re-initialize the point tracker.
xyPoints = points.Location;
numPts = size(xyPoints,1);
release(pointTracker);
initialize(pointTracker, xyPoints, videoFrameGray);
% Save a copy of the points.
oldPoints = xyPoints;
% Convert the rectangle represented as [x, y, w, h] into an
% M-by-2 matrix of [x,y] coordinates of the four corners. This
% is needed to be able to transform the bounding box to display
% the orientation of the face.
bboxPoints = bbox2points(bbox(1, :));
% Convert the box corners into the [x1 y1 x2 y2 x3 y3 x4 y4]
% format required by insertShape.
bboxPolygon = reshape(bboxPoints', 1, []);
% Just block eyes
% Display a black polygon over the detected face
videoFrame = insertShape(videoFrame, 'FilledPolygon', bboxPolygon, ...
'LineWidth', 2, 'Opacity', 1, 'Color', 'black');
end
else
% Tracking mode.
[xyPoints, isFound] = step(pointTracker, videoFrameGray);
visiblePoints = xyPoints(isFound, :);
oldInliers = oldPoints(isFound, :);
numPts = size(visiblePoints, 1);
if numPts >= 10
% Estimate the geometric transformation between the old points
% and the new points.
[xform, oldInliers, visiblePoints] = estimateGeometricTransform(...
oldInliers, visiblePoints, 'similarity', 'MaxDistance', 5);
% Apply the transformation to the bounding box.
bboxPoints = transformPointsForward(xform, bboxPoints);
% Convert the box corners into the [x1 y1 x2 y2 x3 y3 x4 y4]
% format required by insertShape.
bboxPolygon = reshape(bboxPoints', 1, []);
% Just block eyes
% bboxPolygon(6) = floor(bboxPolygon(4)+(bboxPolygon(6)-bboxPolygon(4))*.54);
% bboxPolygon(8) = floor(bboxPolygon(2)+(bboxPolygon(8)-bboxPolygon(2))*.54);
% Display a black polygon over the detected face
videoFrame = insertShape(videoFrame, 'FilledPolygon', bboxPolygon, ...
'LineWidth', 2, 'Opacity', 1, 'Color', 'black');
% Reset the points.
oldPoints = visiblePoints;
setPoints(pointTracker, oldPoints);
end
end
% Write the (possibly blocked) frame to the output video and move on
% to the next frame
step(videoFileWriter, videoFrame);
end
% Clean up
release(videoFileReader);
release(videoFileWriter);
release(pointTracker);
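If a blur is wanted instead of a black block, each insertShape(videoFrame, 'FilledPolygon', ...) call in the script above could be swapped for something along these lines. This is a sketch assuming imgaussfilt from the Image Processing Toolbox is available; it blurs the axis-aligned rectangle enclosing the tracked polygon rather than the exact rotated shape:

% bboxPolygon is [x1 y1 x2 y2 x3 y3 x4 y4]; split into x and y lists.
xs = bboxPolygon(1:2:end);
ys = bboxPolygon(2:2:end);
% Clamp the enclosing rectangle to the frame before indexing.
c1 = max(1, floor(min(xs)));
c2 = min(size(videoFrame,2), ceil(max(xs)));
r1 = max(1, floor(min(ys)));
r2 = min(size(videoFrame,1), ceil(max(ys)));
% Replace the polygon region with its Gaussian-blurred version.
videoFrame(r1:r2, c1:c2, :) = imgaussfilt(videoFrame(r1:r2, c1:c2, :), 8);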