oobPredict

Predict out-of-bag labels and scores of bagged classification ensemble

Description

[labels,scores] = oobPredict(ens) returns class labels and scores for the out-of-bag data in ens.

[labels,scores] = oobPredict(ens,Name=Value) specifies additional options using one or more name-value arguments in addition to any of the input argument combinations in the previous syntaxes. For example, you can specify the indices of the weak learners to use for calculating the predicted labels and scores, and perform computations in parallel.

Examples

Find the out-of-bag predictions and scores for the Fisher iris data. Find the scores with notable uncertainty in the resulting classifications.

Load the sample data set.

load fisheriris

Train an ensemble of bagged classification trees.

ens = fitcensemble(meas,species,'Method','Bag');

Find the out-of-bag predictions and scores.

[label,score] = oobPredict(ens);

Find the scores in the range (0.2,0.8). These scores have notable uncertainty in the resulting classifications.

unsure = ((score > .2) & (score < .8));
sum(sum(unsure))  % Number of uncertain predictions
ans = 16
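To see which observations these uncertain scores belong to, you can continue the example above. This is an illustrative sketch that reuses the `unsure`, `label`, and `species` variables already in the workspace; the exact rows vary from run to run because bagging is random.

```matlab
% List the rows with at least one uncertain score, together with the
% predicted and true labels. Results vary because bagging is random.
uncertainRows = find(any(unsure,2));
table(uncertainRows,label(uncertainRows),species(uncertainRows), ...
    'VariableNames',{'Observation','Predicted','True'})
```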

Input Arguments

ens — Bagged classification ensemble model
ClassificationBaggedEnsemble model object

Bagged classification ensemble model, specified as a ClassificationBaggedEnsemble model object trained with fitcensemble.

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: oobPredict(ens,Learners=[1 2 3 5],UseParallel=true) specifies to use the first, second, third, and fifth learners in the ensemble in oobPredict, and to perform computations in parallel.

Learners — Indices of weak learners
[1:ens.NumTrained] (default) | vector of positive integers

Indices of the weak learners in the ensemble to use in oobPredict, specified as a vector of positive integers in the range [1:ens.NumTrained]. By default, oobPredict uses all learners.

Example: Learners=[1 2 4]

Data Types: single | double

UseParallel — Flag to run in parallel
false (default) | true

Flag to run in parallel, specified as a numeric or logical 1 (true) or 0 (false). If you specify UseParallel=true, the oobPredict function executes for-loop iterations by using parfor. The loop runs in parallel when you have Parallel Computing Toolbox™.

Example: UseParallel=true

Data Types: logical

Output Arguments

labels — Predicted class labels
categorical array | character array | logical vector | numeric vector | cell array of character vectors

Predicted class labels, returned as a categorical or character array, a logical or numeric vector, or a cell array of character vectors.

For each observation in the training data (ens.X), the predicted class label corresponds to the minimum expected classification cost among all classes. For an observation with NaN scores, the function classifies the observation into the majority class, which makes up the largest proportion of the training labels.

  • The label is the class with the highest score. In case of a tie, the label is earliest in ens.ClassNames.

  • labels has the same data type as the observed class labels (Y) used to train ens. (The software treats string arrays as cell arrays of character vectors.)

  • The length of labels is equal to the number of rows of ens.X.

scores — Class scores
numeric matrix

Class scores, returned as a numeric matrix with one row per observation and one column per class. For each observation and each class, the score represents the confidence that the observation originates from that class. A higher score indicates a higher confidence. Score values are in the range 0 to 1. For more information, see Score (ensemble).
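As an illustrative check (a sketch that assumes ens was trained as in the example above), you can confirm that the out-of-bag scores of a bagged ensemble behave like voting fractions: each row of the score matrix sums to 1, except for rows of NaN values that can occur when an observation is in bag for every tree.

```matlab
% For a bagged ensemble, each score is the fraction of out-of-bag trees
% voting for that class, so every non-NaN row of score sums to 1.
[~,score] = oobPredict(ens);
rowSums = sum(score,2);
all(abs(rowSums(~isnan(rowSums)) - 1) < 1e-10)
```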

More About

Out of Bag

Bagging, which stands for “bootstrap aggregation”, is a type of ensemble learning. To bag a weak learner such as a decision tree on a dataset, fitcensemble generates many bootstrap replicas of the dataset and grows decision trees on these replicas. fitcensemble obtains each bootstrap replica by randomly selecting N observations out of N with replacement, where N is the dataset size. To find the predicted response of a trained ensemble, predict takes an average over predictions from the individual trees.

Drawing N out of N observations with replacement omits, on average, about 37% (1/e) of the observations for each decision tree. These are the "out-of-bag" observations. For each observation, oobLoss estimates the out-of-bag prediction by averaging over predictions from all trees in the ensemble for which this observation is out of bag. It then calculates the out-of-bag error by comparing the out-of-bag predicted responses against the true responses for all observations used for training. This out-of-bag average is an unbiased estimator of the true ensemble error.
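The 1/e figure follows from the limit (1 − 1/N)^N → e^(−1) as N grows. A quick simulation (a standalone sketch, not part of the oobPredict API) illustrates it:

```matlab
% Empirical check: one bootstrap replica of N draws with replacement
% leaves about 1/e (roughly 37%) of the N observations out of bag.
rng default
N = 1e5;
idx = randi(N,N,1);                     % one bootstrap replica
oobFraction = 1 - numel(unique(idx))/N  % close to 0.368
exp(-1)                                 % theoretical limit
```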

Score (ensemble)

For ensembles, a classification score represents the confidence of a classification into a class. The higher the score, the higher the confidence.

Different ensemble algorithms have different definitions for their scores. Furthermore, the range of scores depends on ensemble type. For example:

  • AdaBoostM1 scores range from –∞ to ∞.

  • Bag scores range from 0 to 1.

Extended Capabilities

Version History

Introduced in R2012b