TreeBagger gives different results depending on 'oobvarimp' being 'on' or 'off'
Turning the oobvarimp option 'on' or 'off' is only supposed to change whether a measure of variable importance is computed. It should not change the classification itself.
However, I have recently noticed that it also produces a different classification. Below are my code and the resulting confusion matrices:
First, I run TreeBagger twice with the exact same data and options, except for the oobvarimp setting ('on'/'off').
Here is the 'off' version
RandStream.setDefaultStream(RandStream('mlfg6331_64','seed',27));
model2roff = TreeBagger(400, Xr1, Y1, 'Method', 'classification', 'oobpred', 'on', 'oobvarimp', 'off', 'nprint', 100, 'MinLeaf', 1, 'prior', 'equal', 'cost', cost, 'categorical', find(iscatr));
Here is the 'on' version
RandStream.setDefaultStream(RandStream('mlfg6331_64','seed',27));
model2ron = TreeBagger(400, Xr1, Y1, 'Method', 'classification', 'oobpred', 'on', 'oobvarimp', 'on', 'nprint', 100, 'MinLeaf', 1, 'prior', 'equal', 'cost', cost, 'categorical', find(iscatr));
I then compute the confusion matrices with the following code, substituting first model2ron and then model2roff for model2r. In theory, the matrices should be identical: the same TreeBagger model should be created with both the 'off' and 'on' options. The only difference should be that the 'on' model stores a measure of variable importance, which should not affect classification performance (given identical data, variables, etc.).
[pred_model2r_oobY1, pred_model2r_oobY1scores] = oobPredict(model2r);
classorder = model2r.ClassNames;   % fix the row/column order up front so both matrices match
conf = confusionmat(Y1, pred_model2r_oobY1, 'order', classorder);
disp(dataset({conf, classorder{:}}, 'obsnames', classorder));
So, here are the results:
First, with oobvarimp 'off'
                pos_outcome   neg_outcome
pos_outcome         104            21
neg_outcome          23            62
Next, with oobvarimp 'on'
                pos_outcome   neg_outcome
pos_outcome          99            26
neg_outcome          30            55
You can see that there has been a substantial change (even a small one would be problematic, since the forests are supposed to be identical).
Has anyone else observed this? Does anyone (Ilya Narsky) have an explanation?
Accepted Answer
Ilya
on 16 Nov 2011
Computing variable importance by permuting out-of-bag observations across every variable (which is what happens when you set oobvarimp to 'on') requires extra calls to the random number generator. That is why the results are not identical.
oobvarimp does not change the classification in a meaningful way. What you observe are statistical fluctuations.
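To see the mechanism concretely, here is a minimal sketch (assuming the same generator and seed used above) showing that one extra draw from a stream shifts every draw that follows:
s = RandStream('mlfg6331_64', 'Seed', 27);
a = rand(s, 1, 3);   % run A: three draws
reset(s);            % restart from the same seed
rand(s, 1, 1);       % run B: one extra draw first, analogous to the
                     % permutation passes that oobvarimp 'on' performs
b = rand(s, 1, 3);   % these three draws now differ from a
disp([a; b])         % the rows disagree even though the seed matched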
More Answers (1)
Ilya
on 16 Nov 2011
One way of assessing whether this difference matters would be to look at the classification error, which can be modeled as a binomial random variable. Here is what I get:
>> N = 104+21+23+62
N = 210
>> X1 = 23+21
X1 = 44
>> [phat,pci] = binofit(X1,N)
phat = 0.2095
pci = 0.1566 0.2709
>> X2 = 30+26
X2 = 56
>> e2 = X2/N
e2 = 0.2667
e2 falls just inside the upper boundary of the 95% confidence interval, so the two error rates are statistically indistinguishable at that level.
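As a complementary check, you could compute the exact interval for the second run as well and see whether the two intervals overlap:
[phat2, pci2] = binofit(X2, N)   % if pci2 overlaps pci, the two error
                                 % rates are statistically consistent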
In general, your classification result forms a 2-by-2 contingency table, so standard tools for analyzing contingency tables apply. You could set up a formal test for whatever quantity is most meaningful in your analysis.
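For example, here is a sketch of such a test on the counts above. (fishertest ships with newer Statistics and Machine Learning Toolbox releases; on older releases a chi-square test would have to be coded by hand.)
% 2-by-2 table of correct/incorrect counts for the two runs
counts = [104+62, 21+23;   % oobvarimp 'off': correct, wrong
          99+55,  26+30];  % oobvarimp 'on' : correct, wrong
[h, p] = fishertest(counts);   % h = 0 means no significant difference
fprintf('Fisher exact test: reject = %d, p = %.3f\n', h, p)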
You could take a look at the distribution of the scores returned by TreeBagger. I suspect that a good fraction of them would be close to the decision boundary (0.5). Although TreeBagger assigns them to one class or another, these are not confident classifications. I don't know how you want to apply your model for predicting on new data. Perhaps you could choose not to assign examples in the grey area (with scores near 0.5) to any class.
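A quick sketch of that inspection, reusing the score matrix returned by oobPredict in your code (the 0.1 width of the grey area here is an arbitrary choice):
[~, scores] = oobPredict(model2roff);   % columns follow model2roff.ClassNames
hist(scores(:, 1), 20)                  % distribution of first-class scores
xlabel('OOB score for first class'); ylabel('Count')
grey = abs(scores(:, 1) - 0.5) < 0.1;   % observations near the boundary
fprintf('%d of %d observations score within 0.1 of 0.5\n', sum(grey), numel(grey))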
Instead of relying on the assumptions behind contingency-table analysis, you could run many simulations and inspect the empirical distribution of the classification error (or any other statistic of interest). You may not find anything interesting, but it wouldn't hurt.
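A sketch of such a simulation, reusing the variables from your question (expect a long runtime at 400 trees per forest; RandStream.setDefaultStream matches your release, and newer releases use setGlobalStream instead):
onoff = {'off', 'on'};
nSim  = 50;                        % number of simulated forests per setting
err   = zeros(nSim, numel(onoff));
for i = 1:nSim
    for j = 1:numel(onoff)
        RandStream.setDefaultStream(RandStream('mlfg6331_64', 'Seed', i));
        m = TreeBagger(400, Xr1, Y1, 'Method', 'classification', ...
            'oobpred', 'on', 'oobvarimp', onoff{j}, 'MinLeaf', 1, ...
            'prior', 'equal', 'cost', cost, 'categorical', find(iscatr));
        e = oobError(m);
        err(i, j) = e(end);        % OOB error using all 400 trees
    end
end
fprintf('mean OOB error: off = %.4f, on = %.4f\n', mean(err(:,1)), mean(err(:,2)))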