# Simple motion detector (in binary image) (Simulink)

6 views (last 30 days)
Domi on 21 May 2020
Answered: Ryan Comeau on 29 May 2020
I do some image processing in Simulink on a live video feed coming from a Raspberry Pi camera, where I create a binary image using Sobel edge detection followed by a closing operation.
A movable object in the video frame appears as a white filled circle in the binary image, and I would like to detect its motion. I need some sort of tracking over the first 5 frames (or points) so that I can use a linear regression to estimate its next movements. I need this estimate to replan the path for my robot project if the movable object crosses the robot's path. Basically, I need 3 to 5 image points of the object, i.e. a 2x3 to 2x5 matrix holding the u- and v-coordinates of the tracked object.
The method should not take much computation time/resources, because it runs on a Raspberry Pi Model 3.
Is there a reasonably simple or easy way to do this?
I have seen the foreground-detection example from the Computer Vision Toolbox, but it appears to produce a variable-size matrix and only works on the unprocessed image.
Thanks and best regards.

Ryan Comeau on 29 May 2020
Hello,
The problem of frame-to-frame autonomous tracking that you're talking about is an interesting one. Here is a paper on how it's used for cars: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.49.599&rep=rep1&type=pdf. I encourage you to hop on Google Scholar and search "track association in computer vision".
For your application, you'll need to balance interpolating the future position against having a confident fit of your current data. What I mean is: if you assume the frame-to-frame motion is linear (i.e. the object moves a roughly constant number of pixels in your image frame per time step), you'll need to associate the measurements together with a linear regressor. If this linear regressor does not have a strong correlation coefficient, your track association may be weak and interpolations of future data may be poor.
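As a minimal sketch of that check (assuming you have already collected the last N centroids of one object in an N-by-2 matrix `pts` of [u v] positions, which is not part of the code below), you could measure how strongly each coordinate follows a line over the frame index:

```matlab
% pts: N-by-2 matrix of [u v] centroids over the last N frames (assumed given)
t = (1:size(pts,1))';        % frame index as the independent variable

Ru = corrcoef(t, pts(:,1));  % correlation between frame index and u-coordinate
Rv = corrcoef(t, pts(:,2));  % correlation between frame index and v-coordinate

% |r| close to 1 means the motion is well described by a straight line;
% a low value suggests the track association is weak
track_is_linear = abs(Ru(1,2)) > 0.9 && abs(Rv(1,2)) > 0.9;
```

The 0.9 threshold is just an illustrative choice; tune it to how noisy your detections are.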
You'll also need to segment what's detected in your image frame if multiple things require track association. This increases the computing burden a lot, because we now need to find the optimal linear tracks for two objects. I'll leave the segmentation to you (I'll propose one idea here, but if you're unfamiliar with machine learning it may be a little much). Please note that this code is just a framework for you to start from, not a whole solution; you'll need to run it and fill in some blanks for your application.
```matlab
image = rescale(image, 0, 1);   % scale to [0,1]; helps binarization
BW_im = imbinarize(image);
jj = regionprops(BW_im, {'Centroid','MajorAxisLength','MinorAxisLength'});

% You say the objects are white filled circles, so let's build a SOM
% (self-organising map) to segment them as best as possible.
% Transform the struct array jj into a numeric matrix:
states = [cat(1,jj.Centroid), cat(1,jj.MajorAxisLength), cat(1,jj.MinorAxisLength)];

% Define the SOM
dimension1 = som_size;   % the two dimensions can differ; pick som_size for your data
dimension2 = som_size;
net = selforgmap([dimension1 dimension2]);

% Train the SOM
net.trainParam.showWindow = 0;   % set to 1 if you want to watch training
[net,tr] = train(net, states(:,1:4)');
outputs = net(states(:,1:4)');

% Output vector of the SOM
classes = vec2ind(outputs);
classified_outputs = [states, classes'];

% We now have a class for every detected object, so let's separate them
[ff,~] = size(states);
for i = 1:ff
    c = classified_outputs(i, end);   % class label of object i
    % data_vault must be initialized elsewhere, e.g. data_vault(c).centroid = []
    data_vault(c).centroid(end+1,:) = classified_outputs(i, 1:2);
end
```
So you now have the data sorted per object, ready for interpolation.
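To close the loop on the original question, a hedged sketch of the extrapolation step could look like the following, assuming `data_vault` has been filled as above and `k` is the class of the object you care about (both `data_vault` and `k` come from your own setup, not from this snippet):

```matlab
k = 1;                                   % class of the object to predict (assumed)
pts = data_vault(k).centroid;            % M-by-2 history of [u v] centroids
t = (1:size(pts,1))';                    % frame indices

pu = polyfit(t, pts(:,1), 1);            % linear fit u(t) = pu(1)*t + pu(2)
pv = polyfit(t, pts(:,2), 1);            % linear fit v(t) = pv(1)*t + pv(2)

% extrapolate one frame ahead to get the predicted [u v] position
next_pos = [polyval(pu, t(end)+1), polyval(pv, t(end)+1)];
```

`polyfit`/`polyval` with degree 1 is exactly the linear regression you described, and it is cheap enough for a Raspberry Pi 3.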
Hope this helps
RC