Average precision vs precision in evaluateDetectionPrecision
Hi there!
I'm using Faster R-CNN for object detection and I'm trying to evaluate the results so I can assess which hyper-parameters work best. I'm having trouble understanding the average precision ('ap') output of the 'evaluateDetectionPrecision' function.
Let's look at a specific example. There are just two classes (i.e. foreground and background). Suppose I have an image with TP = 4, FP = 12, FN = 0 (IoU threshold = 0.5).
I know that precision is calculated as follows:
Precision = TP / (TP + FP)
So, in this case, we should get Precision = 4/16 = 0.25.
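Just to make that arithmetic concrete, here it is in plain MATLAB (the variable names are mine):
TP = 4; FP = 12; FN = 0;
precision = TP / (TP + FP)   % 0.25
recall    = TP / (TP + FN)   % 1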
However, when I use the 'evaluateDetectionPrecision' function, I get an average precision (for this single image) of 0.8304.
I can't understand the difference between those two measures. Shouldn't they be the same? Am I missing something here?
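For reference, a minimal single-image call has roughly this shape (the boxes and scores below are just placeholders, not my real detections, and the table layout should be checked against the documentation):
% detectionResults: one row per image; first column boxes [x y w h], second column scores
detectionResults = table({[10 10 20 20; 40 40 20 20]}, {[0.9; 0.2]}, ...
    'VariableNames', {'Boxes', 'Scores'});
% groundTruth: one column per class, one row per image
groundTruth = table({[12 12 20 20]}, 'VariableNames', {'object'});
[ap, recall, precision] = evaluateDetectionPrecision(detectionResults, groundTruth, 0.5);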
After debugging the whole process, I found the function responsible for calculating 'ap', but I still don't know why it produces seemingly contradictory results. I thought precision was a standard measure with just one formula.
The code of that function looks as follows:
function [ap, precision, recall] = detectorPrecisionRecall(labels, scores, numExpected)
% Compute the average precision metric for detector results. Follows the
% PASCAL VOC 2011 average precision metric. Labels greater than zero are
% positive samples; labels less than or equal to zero are negative samples.
if (isempty(labels) || numExpected == 0)
    ap = 0;
    precision = 1;
    recall = 0;
    return;
end

% Rank detections by descending confidence score.
[~, idx] = sort(scores, 'descend');
labels = labels(idx);

tp = labels > 0;
fp = labels <= 0;

% Running true-positive / false-positive counts at each ranked detection.
tp = cumsum(tp);
fp = cumsum(fp);

% Precision and recall evaluated after each detection, in score order.
precision = tp ./ (tp + fp);
recall = tp ./ numExpected;

% Change in recall for every true positive.
deltaRecall = 1/numExpected;

% Average precision: precision sampled at each true positive, scaled by
% the recall step (i.e. the area under the precision-recall curve).
ap = sum( precision .* (labels>0) ) * deltaRecall;

% By convention, start precision at 1 and recall at 0.
precision = [1; precision];
recall = [0; recall];
end
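To see why the ranked 'ap' can be far from the final precision, here is a small hand-made example (the scores and labels are made up, not my actual detections). When all four true positives happen to outrank every false positive, the precision sampled at each true positive is 1, so 'ap' comes out as 1.0 even though the final precision is still 4/16 = 0.25. A ranking where some false positives score higher than some true positives pulls 'ap' down towards values like the 0.8304 above.
scores = [0.95; 0.90; 0.85; 0.80; 0.40; 0.35; 0.30; 0.28; ...
          0.25; 0.22; 0.20; 0.18; 0.15; 0.12; 0.10; 0.05];
labels = [1; 1; 1; 1; -1; -1; -1; -1; -1; -1; -1; -1; -1; -1; -1; -1];
[ap, precision, recall] = detectorPrecisionRecall(labels, scores, 4)
% ap             -> 1.0000   (precision is 1 at every recall step)
% precision(end) -> 0.2500   (the single-number TP/(TP+FP) = 4/16)
% recall(end)    -> 1.0000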
Answers (1)
Kieu Tran
on 14 Jun 2018
Hi Karol,
Did you figure out why the AP values are different yet? I think the reason MATLAB gave you such a high AP value is that it summarizes the shape of your precision/recall curve. I don't know the exact formula MATLAB used, but it could be the 11-point interpolated average precision formula...
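For reference, the 11-point interpolated AP looks roughly like this; it is only a sketch over the precision/recall vectors returned by the function you posted, not MATLAB's actual code (your function instead follows the PASCAL VOC 2011 style and sums precision at every true positive):
function ap = elevenPointAP(precision, recall)
% 11-point interpolated AP (the older PASCAL VOC 2007 formula).
ap = 0;
for t = 0:0.1:1
    % Interpolated precision: the highest precision at any recall >= t.
    p = max([precision(recall >= t); 0]);
    ap = ap + p / 11;
end
end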