Getting true positive and true negative rates from a neural network classifier
Hi all,
I am trying to figure out how to get the true positive and true negative rates of a neural network classifier (patternnet). Below is an example using the cancer dataset that ships with MATLAB 2015a, which is also the version I ran the code in. The problem is that the ROC plot does not agree at all with the true positive and true negative rates returned by MATLAB's confusion() function. When I run this code, the ROC plot has an optimal operating point at (0.05, 0.97), i.e. a TPR of 0.97 and a TNR of 0.95. But the TPR and TNR returned by confusion() (Se1 and Sp1 in the code below) are 0 and 0.63, respectively, and those calculated directly from the confusion matrix (Se2 and Sp2) are 1 and 0, respectively. Can anyone help me understand why the different functions disagree, and which one is correct?
Thanks a ton.
Hadi
[inputs,targets] = cancer_dataset;
targets = targets(1,:);
nG1 = length(find(targets==1))
nG2 = length(find(targets==0))
setdemorandstream(672880951)
hiddenLayerSize = 10;
net = patternnet(hiddenLayerSize);
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% net.trainParam.showWindow = false;
[net,tr] = train(net,inputs,targets);
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
testX = inputs(:,tr.testInd);
testT = targets(:,tr.testInd);
% plotroc(targets,outputs)
testY = net(testX);
figure
plotroc(testT,testY)
[c,cm,ind,per] = confusion(testT,testY);
Se1 = per(1,3)  % confusion()'s TPR for class 1
Sp1 = per(1,4)  % confusion()'s TNR for class 1
Ac1 = 1-c       % overall accuracy
N = length(testT);
Se2 = cm(2,2)/(cm(2,2)+cm(2,1))  % sensitivity computed from the confusion matrix
Sp2 = cm(1,1)/(cm(1,1)+cm(1,2))  % specificity computed from the confusion matrix
Ac2 = (cm(1,1)+cm(2,2))/N
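For reference, the Se2/Sp2/Ac2 arithmetic at the end of this script can be mirrored in a few lines of Python (a sketch with made-up scores and a 0.5 decision threshold — not the actual cancer_dataset results):

```python
# Python sketch of the Se2/Sp2/Ac2 arithmetic above. The scores, labels,
# and 0.5 threshold are made-up assumptions, not the real patternnet run.
targets = [1, 1, 1, 0, 0, 0, 0, 1]                  # 1 = positive class
scores  = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1, 0.7]  # classifier outputs

preds = [1 if s >= 0.5 else 0 for s in scores]      # hard decisions

# 2x2 confusion matrix: cm[actual][predicted]
cm = [[0, 0], [0, 0]]
for t, p in zip(targets, preds):
    cm[t][p] += 1

TP, FN = cm[1][1], cm[1][0]
TN, FP = cm[0][0], cm[0][1]

sensitivity = TP / (TP + FN)         # row-normalized, like Se2
specificity = TN / (TN + FP)         # row-normalized, like Sp2
accuracy = (TP + TN) / len(targets)  # like Ac2
```

With these toy numbers all three come out to 0.75; the point is the formulas — sensitivity and specificity normalize by the actual-class row totals.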
Accepted Answer
Brendan Hamm on 13 Apr 2016
The ROC curve plots the TPR vs. the FPR. The way you are calculating the TPR and TNR is not how confusion() defines them:
TPR = cm(1,1)/sum(cm(:,1)) % divide by the predicted-positive column total
TNR = cm(2,2)/sum(cm(:,2)) % divide by the predicted-negative column total
FNR = cm(1,2)/sum(cm(:,2)) % same column normalization
FPR = cm(2,1)/sum(cm(:,1))
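To make the distinction concrete, here is a small Python sketch (the counts in the matrix are made up, and Python is used only for illustration since the thread is about MATLAB) comparing the column-normalized rates against the row-normalized sensitivity/specificity; cm[0][0] in Python plays the role of cm(1,1) in MATLAB:

```python
# Toy 2x2 confusion matrix with made-up counts, laid out like MATLAB's
# confusion(): rows = actual class, columns = predicted class,
# index 0 = positive (malignant), index 1 = negative (benign).
cm = [[40, 10],   # actual positive: 40 predicted positive, 10 negative
      [5,  45]]   # actual negative:  5 predicted positive, 45 negative

col = [cm[0][0] + cm[1][0], cm[0][1] + cm[1][1]]  # predicted-class totals
row = [sum(cm[0]), sum(cm[1])]                    # actual-class totals

# Column-normalized rates: divide by the *predicted* class total
TPR = cm[0][0] / col[0]   # 40/45
TNR = cm[1][1] / col[1]   # 45/55

# Row-normalized rates: divide by the *actual* class total
sens = cm[0][0] / row[0]  # 40/50
spec = cm[1][1] / row[1]  # 45/50
```

With the same matrix, TPR is about 0.889 while sensitivity is 0.80, so the two conventions genuinely disagree even though both are reasonable definitions.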
6 Comments
Brendan Hamm on 14 Apr 2016 (edited 14 Apr 2016)
So, say malignant is the positive class and benign is the negative class. That choice corresponds to looking at the first row of per. (Note: if we defined it the other way around, we would look at the second row.)
There are multiple definitions of the true positive rate, false positive rate, etc. When I say you are not calculating these correctly, I mean you are using an alternative definition (probably the more prevalent one today). The documentation for confusion, however, expressly defines:
TPR = #(predicted positive and is positive) / #(predicted positive)
TNR = #(predicted negative and is negative) / #(predicted negative)
These definitions have a straightforward meaning (here TPR is the proportion of predicted positives that really are positive), which makes them useful in conditional-probability applications. Sensitivity and specificity cannot be derived from them, so those remain what your definition of TPR etc. computes:
FNR = cm(1,2)/sum(cm(:,2))
FPR = cm(2,1)/sum(cm(:,1))
TPR = cm(1,1)/sum(cm(:,1))
TNR = cm(2,2)/sum(cm(:,2))
This exactly agrees with the first row of per.
But the sensitivity and specificity are still the row-normalized quantities:
Sens = cm(1,1)/sum(cm(1,:))
Spec = cm(2,2)/sum(cm(2,:))
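plotroc builds its curve from the row-normalized rates by sweeping a decision threshold over the network outputs. A minimal Python sketch of that sweep (labels, scores, and thresholds are invented for illustration):

```python
# Sketch of how an ROC curve is traced: sweep a decision threshold and
# record (FPR, TPR) using the row-normalized (sensitivity-style) rates.
# Labels and scores are made up for illustration.
labels = [1, 1, 0, 1, 0, 0]
scores = [0.95, 0.7, 0.6, 0.4, 0.3, 0.1]

def roc_point(threshold):
    tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= threshold)
    fp = sum(1 for l, s in zip(labels, scores) if l == 0 and s >= threshold)
    pos = sum(labels)            # actual positives
    neg = len(labels) - pos      # actual negatives
    return fp / neg, tp / pos    # (FPR, TPR)

points = [roc_point(t) for t in [1.0, 0.8, 0.5, 0.2, 0.0]]
```

Each threshold yields one (FPR, TPR) point; the curve runs from (0, 0) at the strictest threshold to (1, 1) at the loosest.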