constructing a background from a sequence of images

I have a sequence of 10 Reconyx camera-trap images, and I want to identify the animal present in the foreground of each image. The problem is that every image contains an animal, so I cannot think of a way to construct a background from these images that is animal-free. Is there a way to build a "general background model" from this 10-image sequence that I can later subtract from each image to arrive at a contour of the animal in the photo? Attached is the 10-image sequence.
I've tried the basic vision.ForegroundDetector approach, to no avail. Logically, if I can arrive at a "reasonable" common background for the sequence, then subtracting it frame by frame to extract each image's foreground would not be a problem.
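For context, a typical vision.ForegroundDetector call pattern looks like the sketch below (the folder name is a placeholder). The detector fits a Gaussian-mixture background model over a stream of frames, which is hard to train with only ~10 stills:

```matlab
% Sketch of standard vision.ForegroundDetector usage (Computer Vision Toolbox).
% It learns a Gaussian-mixture background over many frames; with only a
% handful of still images it rarely has enough samples to converge.
detector = vision.ForegroundDetector('NumTrainingFrames', 5, ...
                                     'NumGaussians', 3);
imds = imageDatastore('imageFolder');   % folder name is an assumption
while hasdata(imds)
    frame  = read(imds);
    fgMask = detector(frame);           % logical foreground mask
    imshow(fgMask); drawnow
end
```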
Thank you in advance for your tips and help.

Accepted Answer

Mark Sherstan
Mark Sherstan on 30 Nov 2018
Place all your images in a folder called "imageFolder" which is in the same directory as your scripts and functions. Run this code:
imageFolder = fullfile('imageFolder');
imgs = imageDatastore(imageFolder);
imgs.ReadFcn = @(filename) readAndPreprocessImage(filename);

figure(1)
for ii = 1:numel(imgs.Files)
    I = readimage(imgs, ii);
    [BW, maskedImage] = segmentImage(I);
    stats = regionprops(BW, 'BoundingBox');
    if ~isempty(stats)
        % vertcat handles masks that split into several regions
        RGB = insertObjectAnnotation(I, 'rectangle', ...
            vertcat(stats.BoundingBox), 'Animal');
    else
        RGB = I;    % nothing segmented in this frame
    end
    imshow(RGB)
end
Some of the additional functions you will need are:
function Iout = readAndPreprocessImage(filename)
I = imread(filename);
% Crop coordinates were chosen for these particular Reconyx images
% (they trim the info banners); adjust them for your own data.
Iout = imcrop(I, [17.5 47.5 2025 1410]);
end
and...
function [BW,maskedImage] = segmentImage(RGB)
%segmentImage Segment image using auto-generated code from imageSegmenter app
% [BW,MASKEDIMAGE] = segmentImage(RGB) segments image RGB using
% auto-generated code from the imageSegmenter app. The final segmentation
% is returned in BW, and a masked image is returned in MASKEDIMAGE.
% Auto-generated by imageSegmenter app on 29-Nov-2018
%----------------------------------------------------
% Convert RGB image into L*a*b* color space.
X = rgb2lab(RGB);
% Graph cut
foregroundInd = [2356190 2356211 2356230 2356239 2356249 2358998 2359082 2361813 2366142 2371678 2378843 2388601 2395647 2408332 2418358 2428226 2430884 2447807 2454860 2455021 2470369 2477426 2480251 2484486 2487308 2487462 2490132 2504249 2526959 2533887 2546701 2559292 2563525 2563621 2586108 2586183 2605928 2612922 2622853 2635500 2642595 2655261 2655287 2659515 2662330 2665145 ];
backgroundInd = [407746 437342 447208 452847 541604 590915 674089 812327 940726 1016913 1105773 1221447 1335731 1435910 1554413 1646128 1695529 1759036 1811259 1814074 1814076 1814079 1949558 2164058 2306597 2309795 2312617 2316850 2359176 2375748 2415590 2435012 2484376 2484689 2543617 2543919 2598917 2608490 2642635 2681820 2689172 2714556 2738230 2741342 2753735 2763593 2777682 2778009 2797417 2817150 2823133 2836867 2846704 ];
L = superpixels(X,14393,'IsInputLab',true);
% Convert L*a*b* range to [0 1]
scaledX = prepLab(X);
BW = lazysnapping(scaledX,L,foregroundInd,backgroundInd);
% Create masked image.
maskedImage = RGB;
maskedImage(repmat(~BW,[1 1 3])) = 0;
end
function out = prepLab(in)
% Convert L*a*b* image to range [0,1]
out = in;
out(:,:,1) = in(:,:,1) / 100; % L range is [0 100].
out(:,:,2:3) = (in(:,:,2:3) + 100) / 200; % a* and b* range is [-100,100].
end
The results aren't great, but you can fine-tune the segmentation by following this tutorial (I just did it quickly and I don't know what the rest of your data looks like). Here is a snapshot of one of the photos, locating the animal with a bounding box. You could also crop the detection and pass it to a classifier (e.g. a neural net), or process the data further however you like.
[Screenshot: sample frame with the detected animal outlined by an "Animal" bounding box]
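As a sketch of that classifier hand-off, where BW and I come from the loop above and trainedNet is a hypothetical pretrained network of your own:

```matlab
% Crop each detected region so it can be passed to a classifier.
% 'trainedNet' is hypothetical -- substitute your own network and
% whatever input size it expects.
stats = regionprops(BW, 'BoundingBox');
for s = 1:numel(stats)
    crop = imcrop(I, stats(s).BoundingBox);
    % label = classify(trainedNet, imresize(crop, [227 227]));
end
```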

More Answers (1)

Image Analyst
Image Analyst on 30 Nov 2018
If you have enough frames, you could try just computing the median image. This should work as long as no animal is parked in one spot for so long that essentially it has become part of the background itself.
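A minimal sketch of that median approach, assuming all 10 images are the same size and sit in a folder called 'imageFolder' (the threshold of 30 and the blob size of 500 are guesses to tune for your data):

```matlab
% Median background subtraction for a camera-trap image stack.
imds   = imageDatastore('imageFolder');
frames = readall(imds);                 % cell array of RGB images
stack  = cat(4, frames{:});             % H x W x 3 x N
bg     = median(stack, 4);              % per-pixel median background

for k = 1:numel(frames)
    d    = imabsdiff(frames{k}, bg);    % difference from background
    mask = rgb2gray(d) > 30;            % threshold: tune for your data
    mask = bwareaopen(mask, 500);       % drop small noise blobs
    imshow(labeloverlay(frames{k}, mask))
    title(sprintf('Frame %d', k))
    pause(0.5)
end
```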
  4 Comments
Greg Heath
Greg Heath on 4 Dec 2018
Since the median image looks good, I'd be curious about the result of just averaging the images.
Greg
Image Analyst
Image Analyst on 4 Dec 2018
If we can assume that there are lots of images with no animals, then the average might be better. However, I think the camera only snaps photos when there are animals in view (at least the game cameras I've looked at operate this way), so chances are the pixels in some portion of the image will be "animal" rather than "background" for some portion of the total number of frames. Some "game" cameras can also shoot videos when animals are in view; it just depends on whether you set it up to take still photos or videos.
If the animals always occupy some part of the image, they might prevent a good estimate of the background there; but if they're in any one spot for less than half the images, they won't. I think the median would be less susceptible than the mean to corruption by an animal being present in some frames.
If using a video, to get the true background you'd count on the animal moving around so that no spot is covered by the animal for more than half the frames.
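That robustness point can be seen with a toy example: a single pixel that shows background (value 100) in 7 of 10 frames and an animal (value 30) in the other 3:

```matlab
% Median vs. mean at one pixel: 7 background frames, 3 animal frames.
px = [100 100 30 100 100 30 100 100 30 100];
median(px)   % 100 -- recovers the true background value
mean(px)     % 79  -- pulled toward the animal
```

As long as the animal covers a given pixel in fewer than half the frames, the median returns the background value exactly, while the mean is biased in every frame the animal appears.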

