# kfoldPredict

Classify observations in cross-validated ECOC model

## Syntax

```
label = kfoldPredict(CVMdl)
label = kfoldPredict(CVMdl,Name,Value)
[label,NegLoss,PBScore] = kfoldPredict(___)
[label,NegLoss,PBScore,Posterior] = kfoldPredict(___)
```

## Description

`label = kfoldPredict(CVMdl)` returns class labels predicted by the cross-validated ECOC model (`ClassificationPartitionedECOC`) `CVMdl`. For every fold, `kfoldPredict` predicts class labels for observations that it holds out during training. `CVMdl.X` contains both sets of observations.

The software predicts the classification of an observation by assigning the observation to the class yielding the largest negated average binary loss (or, equivalently, the smallest average binary loss).

`label = kfoldPredict(CVMdl,Name,Value)` returns predicted class labels with additional options specified by one or more name-value pair arguments. For example, specify the posterior probability estimation method, decoding scheme, or verbosity level.

`[label,NegLoss,PBScore] = kfoldPredict(___)` additionally returns negated values of the average binary loss per class (`NegLoss`) for validation-fold observations and positive-class scores (`PBScore`) for validation-fold observations classified by each binary learner, using any of the input argument combinations in the previous syntaxes. If the coding matrix varies across folds (that is, the coding scheme is `sparserandom` or `denserandom`), then `PBScore` is empty (`[]`).

`[label,NegLoss,PBScore,Posterior] = kfoldPredict(___)` additionally returns posterior class probability estimates for validation-fold observations (`Posterior`). To obtain posterior class probabilities, you must set `'FitPosterior',1` when training the cross-validated ECOC model using `fitcecoc`. Otherwise, `kfoldPredict` throws an error.

## Examples

### Predict k-Fold Cross-Validation Labels

Load Fisher's iris data set. Specify the predictor data `X`, the response data `Y`, and the order of the classes in `Y`.

```
load fisheriris
X = meas;
Y = categorical(species);
classOrder = unique(Y);
rng(1); % For reproducibility
```

Train and cross-validate an ECOC model using support vector machine (SVM) binary classifiers. Standardize the predictor data using an SVM template, and specify the class order.

```
t = templateSVM('Standardize',1);
CVMdl = fitcecoc(X,Y,'CrossVal','on','Learners',t,'ClassNames',classOrder);
```

`CVMdl` is a `ClassificationPartitionedECOC` model. By default, the software implements 10-fold cross-validation. You can specify a different number of folds using the `'KFold'` name-value pair argument.
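For instance, a call requesting 5 folds might look like this (a sketch reusing the template `t` and class order from above):

```
% Request 5-fold cross-validation instead of the default 10
CVMdl5 = fitcecoc(X,Y,'Learners',t,'ClassNames',classOrder,'KFold',5);
```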

Predict the validation-fold labels. Print a random subset of true and predicted labels.

```
labels = kfoldPredict(CVMdl);
idx = randsample(numel(labels),10);
table(Y(idx),labels(idx),...
    'VariableNames',{'TrueLabels','PredictedLabels'})
```
```
ans=10×2 table
    TrueLabels    PredictedLabels
    __________    _______________

    setosa        setosa
    versicolor    versicolor
    setosa        setosa
    virginica     virginica
    versicolor    versicolor
    setosa        setosa
    virginica     virginica
    virginica     virginica
    setosa        setosa
    setosa        setosa
```

`CVMdl` correctly labels the validation-fold observations with indices `idx`.

### Predict Cross-Validation Labels Using Custom Binary Loss

Load Fisher's iris data set. Specify the predictor data `X`, the response data `Y`, and the order of the classes in `Y`.

```
load fisheriris
X = meas;
Y = categorical(species);
classOrder = unique(Y); % Class order
K = numel(classOrder);  % Number of classes
rng(1); % For reproducibility
```

Train and cross-validate an ECOC model using SVM binary classifiers. Standardize the predictor data using an SVM template, and specify the class order.

```
t = templateSVM('Standardize',1);
CVMdl = fitcecoc(X,Y,'CrossVal','on','Learners',t,'ClassNames',classOrder);
```

`CVMdl` is a `ClassificationPartitionedECOC` model. By default, the software implements 10-fold cross-validation. You can specify a different number of folds using the `'KFold'` name-value pair argument.

SVM scores are signed distances from the observation to the decision boundary. Therefore, the domain is $(-\infty,\infty)$. Create a custom binary loss function that:

• Maps the coding design matrix (M) and positive-class classification scores (s) for each learner to the binary loss for each observation

• Uses linear loss

• Aggregates the binary learner loss using the median

You can create a separate function for the binary loss function, and then save it on the MATLAB® path. Alternatively, you can specify an anonymous binary loss function. In this case, create a function handle (`customBL`) to an anonymous binary loss function.

`customBL = @(M,s) median(1 - (M.*s),2,'omitnan')/2;`

Predict cross-validation labels and estimate the median binary loss per class. Print the median negative binary losses per class for a random set of 10 validation-fold observations.

```
[label,NegLoss] = kfoldPredict(CVMdl,'BinaryLoss',customBL);
idx = randsample(numel(label),10);
classOrder
```
```
classOrder = 3x1 categorical
     setosa 
     versicolor 
     virginica 
```
```
table(Y(idx),label(idx),NegLoss(idx,:),'VariableNames',...
    {'TrueLabel','PredictedLabel','NegLoss'})
```
```
ans=10×3 table
    TrueLabel     PredictedLabel                NegLoss
    __________    ______________    _______________________________

    setosa        versicolor         0.37132      2.1288      -4.0001
    versicolor    versicolor         -1.2167     0.36696     -0.65031
    setosa        versicolor         0.23923      2.0796      -3.8188
    virginica     virginica          -1.9151    -0.19953      0.61467
    versicolor    versicolor         -1.3746     0.45534     -0.58077
    setosa        versicolor         0.20073      2.2774      -3.9781
    virginica     versicolor          -1.492    0.090122    -0.098107
    virginica     virginica          -1.7666    -0.13463      0.40122
    setosa        versicolor         0.19994      1.9111      -3.6111
    setosa        versicolor         0.16112      1.9683      -3.6295
```

The order of the columns corresponds to the elements of `classOrder`. The software predicts the label based on the maximum negated loss. The results indicate that the median of the linear losses might not perform as well as other losses.

### Estimate Cross-Validation Posterior Probabilities

Load Fisher's iris data set. Use the petal dimensions as the predictor data `X`. Specify the response data `Y` and the order of the classes in `Y`.

```
load fisheriris
X = meas(:,3:4);
Y = categorical(species);
classOrder = unique(Y);
rng(1); % For reproducibility
```

Create an SVM template. Standardize the predictors, and specify the Gaussian kernel.

`t = templateSVM('Standardize',1,'KernelFunction','gaussian');`

`t` is an SVM template. Most of its properties are empty. When training the ECOC classifier, the software sets the applicable properties to their default values.

Train and cross-validate an ECOC classifier using the SVM template. Transform classification scores to class posterior probabilities (returned by `kfoldPredict`) using the `'FitPosterior'` name-value pair argument. Specify the class order.

```
CVMdl = fitcecoc(X,Y,'Learners',t,'CrossVal','on','FitPosterior',true,...
    'ClassNames',classOrder);
```

`CVMdl` is a `ClassificationPartitionedECOC` model. By default, the software uses 10-fold cross-validation.

Predict the validation-fold class posterior probabilities. Use 10 random initial values for the Kullback-Leibler algorithm.

`[label,~,~,Posterior] = kfoldPredict(CVMdl,'NumKLInitializations',10);`

The software assigns an observation to the class that yields the smallest average binary loss. Because all the binary learners compute posterior probabilities, the binary loss function is `quadratic`.

Display a random set of results.

```
idx = randsample(size(X,1),10);
CVMdl.ClassNames
```
```
ans = 3x1 categorical
     setosa 
     versicolor 
     virginica 
```
```
table(Y(idx),label(idx),Posterior(idx,:),...
    'VariableNames',{'TrueLabel','PredLabel','Posterior'})
```
```
ans=10×3 table
    TrueLabel     PredLabel                   Posterior
    __________    __________    ______________________________________

    versicolor    versicolor     0.0086404       0.98243     0.0089302
    versicolor    virginica     2.2197e-14       0.12448       0.87552
    setosa        setosa             0.999    0.00022837    0.00076884
    versicolor    versicolor    2.2194e-14       0.98916      0.010845
    virginica     virginica        0.01232      0.012926       0.97475
    virginica     virginica      0.0015569     0.0015636       0.99688
    virginica     virginica      0.0042886     0.0043547       0.99136
    setosa        setosa             0.999    0.00028329    0.00071382
    virginica     virginica      0.0094727     0.0098238        0.9807
    setosa        setosa             0.999    0.00013558    0.00086196
```

The columns of `Posterior` correspond to the class order of `CVMdl.ClassNames`.

### Estimate Posterior Probabilities Using Parallel Computing

Train a multiclass ECOC model and estimate the posterior probabilities using parallel computing.

Load the `arrhythmia` data set. Examine the response data `Y`.

```
load arrhythmia
Y = categorical(Y);
tabulate(Y)
```
```
  Value    Count    Percent
      1      245     54.20%
      2       44      9.73%
      3       15      3.32%
      4       15      3.32%
      5       13      2.88%
      6       25      5.53%
      7        3      0.66%
      8        2      0.44%
      9        9      1.99%
     10       50     11.06%
     14        4      0.88%
     15        5      1.11%
     16       22      4.87%
```
```
n = numel(Y);
K = numel(unique(Y));
```

Several classes are not represented in the data, and many of the other classes have low relative frequencies.

Specify an ensemble learning template that uses the GentleBoost method and 50 weak classification tree learners.

`t = templateEnsemble('GentleBoost',50,'Tree');`

`t` is a template object. Most of the options are empty (`[]`). The software uses default values for all empty options during training.

Because the response variable contains many classes, specify a sparse random coding design.

```
rng(1); % For reproducibility
Coding = designecoc(K,'sparserandom');
```

Train and cross-validate an ECOC model using parallel computing. Fit posterior probabilities (returned by `kfoldPredict`).

`pool = parpool; % Invokes workers`
```Starting parallel pool (parpool) using the 'local' profile ... connected to 6 workers. ```
```
options = statset('UseParallel',1);
CVMdl = fitcecoc(X,Y,'Learners',t,'Options',options,'Coding',Coding,...
    'FitPosterior',1,'CrossVal','on');
```
```Warning: One or more folds do not contain points from all the groups. ```

`CVMdl` is a `ClassificationPartitionedECOC` model. By default, the software implements 10-fold cross-validation. You can specify a different number of folds using the `'KFold'` name-value pair argument.

The pool invokes six workers, although the number of workers might vary among systems. Because some classes have low relative frequency, one or more folds most likely do not contain observations from all classes.

Estimate posterior probabilities, and display the posterior probability of being classified as not having arrhythmia (class 1) given the data for a random set of validation-fold observations.

```
[~,~,~,posterior] = kfoldPredict(CVMdl,'Options',options);
idx = randsample(n,10);
table(idx,Y(idx),posterior(idx,1),...
    'VariableNames',{'OOFSampleIndex','TrueLabel','PosteriorNoArrhythmia'})
```
```
ans=10×3 table
    OOFSampleIndex    TrueLabel    PosteriorNoArrhythmia
    ______________    _________    _____________________

         171              1               0.33654
         221              1               0.85135
          72             16                0.9174
           3             10              0.025649
         202              1                0.8438
         243              1                0.9435
          18              1               0.81198
          49              6              0.090154
         234              1               0.61625
         315              1               0.97187
```

## Input Arguments


Cross-validated ECOC model, specified as a `ClassificationPartitionedECOC` model. You can create a `ClassificationPartitionedECOC` model in two ways:

• Pass a trained ECOC model (`ClassificationECOC`) to `crossval`.

• Train an ECOC model using `fitcecoc` and specify any one of these cross-validation name-value pair arguments: `'CrossVal'`, `'CVPartition'`, `'Holdout'`, `'KFold'`, or `'Leaveout'`.
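For instance, both routes below yield a cross-validated model. This is a minimal sketch using default binary learners and hypothetical predictor and response data `X` and `Y`:

```
Mdl = fitcecoc(X,Y);     % Train a full ECOC model ...
CVMdl = crossval(Mdl);   % ... then cross-validate it

% Or cross-validate directly during training:
CVMdl = fitcecoc(X,Y,'CrossVal','on');
```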

### Name-Value Arguments

Specify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose `Name` in quotes.

Example: `kfoldPredict(CVMdl,'PosteriorMethod','qp')` specifies to estimate multiclass posterior probabilities by solving a least-squares problem using quadratic programming.

Binary learner loss function, specified as the comma-separated pair consisting of `'BinaryLoss'` and a built-in loss function name or function handle.

• This table describes the built-in functions, where $y_j$ is the class label for a particular binary learner (in the set {–1,1,0}), $s_j$ is the score for observation j, and $g(y_j,s_j)$ is the binary loss formula.

| Value | Description | Score Domain | $g(y_j,s_j)$ |
| --- | --- | --- | --- |
| `'binodeviance'` | Binomial deviance | $(-\infty,\infty)$ | $\log[1 + \exp(-2y_j s_j)]/[2\log(2)]$ |
| `'exponential'` | Exponential | $(-\infty,\infty)$ | $\exp(-y_j s_j)/2$ |
| `'hamming'` | Hamming | $[0,1]$ or $(-\infty,\infty)$ | $[1 - \operatorname{sign}(y_j s_j)]/2$ |
| `'hinge'` | Hinge | $(-\infty,\infty)$ | $\max(0,1 - y_j s_j)/2$ |
| `'linear'` | Linear | $(-\infty,\infty)$ | $(1 - y_j s_j)/2$ |
| `'logit'` | Logistic | $(-\infty,\infty)$ | $\log[1 + \exp(-y_j s_j)]/[2\log(2)]$ |
| `'quadratic'` | Quadratic | $[0,1]$ | $[1 - y_j(2s_j - 1)]^2/2$ |

The software normalizes binary losses so that the loss is 0.5 when $y_j = 0$. Also, the software calculates the mean binary loss for each class.

• For a custom binary loss function, for example `customFunction`, specify its function handle `'BinaryLoss',@customFunction`.

`customFunction` has this form:

`bLoss = customFunction(M,s)`

• `M` is the K-by-B coding matrix stored in `Mdl.CodingMatrix`.

• `s` is the 1-by-B row vector of classification scores.

• `bLoss` is the classification loss. This scalar aggregates the binary losses for every learner in a particular class. For example, you can use the mean binary loss to aggregate the loss over the learners for each class.

• K is the number of classes.

• B is the number of binary learners.

For an example of passing a custom binary loss function, see Predict Test-Sample Labels of ECOC Model Using Custom Binary Loss Function.
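For instance, here is a minimal sketch of a custom function with this signature (mean linear loss; the name `customFunction` is illustrative):

```
% Custom binary loss: mean linear loss over the learners for each class.
% M is K-by-B, s is 1-by-B; the result is a K-by-1 loss per class.
customFunction = @(M,s) mean(1 - M.*s, 2, 'omitnan')/2;
% Pass the handle directly:
% label = kfoldPredict(CVMdl,'BinaryLoss',customFunction);
```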

The default `BinaryLoss` value depends on the score ranges returned by the binary learners. This table identifies some default `BinaryLoss` values when you use the default score transform (the `ScoreTransform` property of the model is `'none'`).

| Assumption | Default Value |
| --- | --- |
| All binary learners are any of the following: classification decision trees, discriminant analysis models, k-nearest neighbor models, or naive Bayes models. | `'quadratic'` |
| All binary learners are SVMs. | `'hinge'` |
| All binary learners are ensembles trained by `AdaBoostM1` or `GentleBoost`. | `'exponential'` |
| All binary learners are ensembles trained by `LogitBoost`. | `'binodeviance'` |
| You specify to predict class posterior probabilities by setting `'FitPosterior',true` in `fitcecoc`. | `'quadratic'` |
| Binary learners are heterogeneous and use different loss functions. | `'hamming'` |

To check the default value, use dot notation to display the `BinaryLoss` property of the trained model at the command line.
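For instance, assuming a trained cross-validated model `CVMdl`, you can inspect the first compact model stored in its `Trained` property:

```
% Display the binary loss of the first trained fold
CVMdl.Trained{1}.BinaryLoss
```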

Example: `'BinaryLoss','binodeviance'`

Data Types: `char` | `string` | `function_handle`

Decoding scheme that aggregates the binary losses, specified as the comma-separated pair consisting of `'Decoding'` and `'lossweighted'` or `'lossbased'`. For more information, see Binary Loss.

Example: `'Decoding','lossbased'`

Number of random initial values for fitting posterior probabilities by Kullback-Leibler divergence minimization, specified as the comma-separated pair consisting of `'NumKLInitializations'` and a nonnegative integer scalar.

The software uses the value of `NumKLInitializations` only if you request the fourth output argument (`Posterior`) and `'PosteriorMethod'` is `'kl'` (the default). Otherwise, the software ignores the value of `NumKLInitializations`.

For more details, see Posterior Estimation Using Kullback-Leibler Divergence.

Example: `'NumKLInitializations',5`

Data Types: `single` | `double`

Estimation options, specified as the comma-separated pair consisting of `'Options'` and a structure array returned by `statset`.

To invoke parallel computing:

• You need a Parallel Computing Toolbox™ license.

• Specify `'Options',statset('UseParallel',true)`.

Posterior probability estimation method, specified as the comma-separated pair consisting of `'PosteriorMethod'` and `'kl'` or `'qp'`.

• If `PosteriorMethod` is `'kl'`, then the software estimates multiclass posterior probabilities by minimizing the Kullback-Leibler divergence between the predicted and expected posterior probabilities returned by binary learners. For details, see Posterior Estimation Using Kullback-Leibler Divergence.

• If `PosteriorMethod` is `'qp'`, then the software estimates multiclass posterior probabilities by solving a least-squares problem using quadratic programming. You need an Optimization Toolbox™ license to use this option. For details, see Posterior Estimation Using Quadratic Programming.

• If you do not request the fourth output argument (`Posterior`), then the software ignores the value of `PosteriorMethod`.

Example: `'PosteriorMethod','qp'`

Verbosity level, specified as the comma-separated pair consisting of `'Verbose'` and `0` or `1`. `Verbose` controls the number of diagnostic messages that the software displays in the Command Window.

If `Verbose` is `0`, then the software does not display diagnostic messages. Otherwise, the software displays diagnostic messages.

Example: `'Verbose',1`

Data Types: `single` | `double`

## Output Arguments


Predicted class labels, returned as a categorical or character array, logical or numeric vector, or cell array of character vectors.

`label` has the same data type and number of rows as `CVMdl.Y`.

The software predicts the classification of an observation by assigning the observation to the class yielding the largest negated average binary loss (or, equivalently, the smallest average binary loss).

Negated average binary losses, returned as a numeric matrix. `NegLoss` is an n-by-K matrix, where n is the number of observations (`size(CVMdl.X,1)`) and K is the number of unique classes (`size(CVMdl.ClassNames,1)`).

`NegLoss(i,k)` is the negated average binary loss for classifying observation i into the kth class.

• If `Decoding` is `'lossbased'`, then `NegLoss(i,k)` is the negated sum of the binary losses divided by the total number of binary learners.

• If `Decoding` is `'lossweighted'`, then `NegLoss(i,k)` is the negated sum of the binary losses divided by the number of binary learners for the kth class.

For more details, see Binary Loss.
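As a sanity check, the predicted label always corresponds to the column of `NegLoss` with the largest value. A minimal sketch, assuming a cross-validated model `CVMdl`:

```
% The predicted class maximizes the negated average binary loss
[label,NegLoss] = kfoldPredict(CVMdl);
[~,idx] = max(NegLoss,[],2);
isequal(label,CVMdl.ClassNames(idx)) % expected: true (up to tie-breaking)
```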

Positive-class scores for each binary learner, returned as a numeric matrix. `PBScore` is an n-by-B matrix, where n is the number of observations (`size(CVMdl.X,1)`) and B is the number of binary learners (`size(CVMdl.CodingMatrix,2)`).

If the coding matrix varies across folds (that is, the coding scheme is `sparserandom` or `denserandom`), then `PBScore` is empty (`[]`).

Posterior class probabilities, returned as a numeric matrix. `Posterior` is an n-by-K matrix, where n is the number of observations (`size(CVMdl.X,1)`) and K is the number of unique classes (`size(CVMdl.ClassNames,1)`).

To request `Posterior`, you must set `'FitPosterior',1` when training the cross-validated ECOC model using `fitcecoc`. Otherwise, the software throws an error.

## More About

### Binary Loss

The binary loss is a function of the class and classification score that determines how well a binary learner classifies an observation into the class.

Suppose the following:

• $m_{kj}$ is element (k,j) of the coding design matrix M, that is, the code corresponding to class k of binary learner j. M is a K-by-B matrix, where K is the number of classes and B is the number of binary learners.

• $s_j$ is the score of binary learner j for an observation.

• g is the binary loss function.

• $\hat{k}$ is the predicted class for the observation.

The decoding scheme of an ECOC model specifies how the software aggregates the binary losses and determines the predicted class for each observation. The software supports two decoding schemes:

• Loss-based decoding [3] (`Decoding` is `'lossbased'`) — The predicted class of an observation corresponds to the class that produces the minimum average of the binary losses over all binary learners.

$$\hat{k} = \underset{k}{\operatorname{argmin}}\ \frac{1}{B} \sum_{j=1}^{B} |m_{kj}|\, g(m_{kj}, s_j).$$

• Loss-weighted decoding [4] (`Decoding` is `'lossweighted'`) — The predicted class of an observation corresponds to the class that produces the minimum average of the binary losses over the binary learners for the corresponding class.

$$\hat{k} = \underset{k}{\operatorname{argmin}}\ \frac{\sum_{j=1}^{B} |m_{kj}|\, g(m_{kj}, s_j)}{\sum_{j=1}^{B} |m_{kj}|}.$$

The denominator corresponds to the number of binary learners for class k. [1] suggests that loss-weighted decoding improves classification accuracy by keeping loss values for all classes in the same dynamic range.

The `predict`, `resubPredict`, and `kfoldPredict` functions return the negated value of the objective function of `argmin` as the second output argument (`NegLoss`) for each observation and class.
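To make the aggregation concrete, the following sketch decodes a single observation under both schemes; the coding matrix `M`, score vector `s`, and choice of hinge loss are assumed inputs, not values from this page:

```
% Sketch: aggregate binary losses for one observation.
% M is the K-by-B coding matrix; s is a 1-by-B score vector.
g = @(y,s) max(0, 1 - y.*s)/2;             % hinge binary loss
L = abs(M) .* g(M,s);                      % |m_kj| zeroes out learners with m_kj = 0

lossBased    = sum(L,2) / size(M,2);       % 'lossbased': divide by B
lossWeighted = sum(L,2) ./ sum(abs(M),2);  % 'lossweighted': divide by learners per class

negLoss = -lossWeighted;                   % one row of NegLoss
[~,khat] = max(negLoss);                   % index of the predicted class
```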

This table summarizes the supported binary loss functions, where $y_j$ is a class label for a particular binary learner (in the set {–1,1,0}), $s_j$ is the score for observation j, and $g(y_j,s_j)$ is the binary loss function.

| Value | Description | Score Domain | $g(y_j,s_j)$ |
| --- | --- | --- | --- |
| `"binodeviance"` | Binomial deviance | $(-\infty,\infty)$ | $\log[1 + \exp(-2y_j s_j)]/[2\log(2)]$ |
| `"exponential"` | Exponential | $(-\infty,\infty)$ | $\exp(-y_j s_j)/2$ |
| `"hamming"` | Hamming | $[0,1]$ or $(-\infty,\infty)$ | $[1 - \operatorname{sign}(y_j s_j)]/2$ |
| `"hinge"` | Hinge | $(-\infty,\infty)$ | $\max(0,1 - y_j s_j)/2$ |
| `"linear"` | Linear | $(-\infty,\infty)$ | $(1 - y_j s_j)/2$ |
| `"logit"` | Logistic | $(-\infty,\infty)$ | $\log[1 + \exp(-y_j s_j)]/[2\log(2)]$ |
| `"quadratic"` | Quadratic | $[0,1]$ | $[1 - y_j(2s_j - 1)]^2/2$ |

The software normalizes binary losses so that the loss is 0.5 when $y_j = 0$, and aggregates using the average of the binary learners.

Do not confuse the binary loss with the overall classification loss (specified by the `LossFun` name-value argument of the `kfoldLoss` and `kfoldPredict` object functions), which measures how well an ECOC classifier performs as a whole.

## Algorithms


The software can estimate class posterior probabilities by minimizing the Kullback-Leibler divergence or by using quadratic programming. For the following descriptions of the posterior estimation algorithms, assume that:

• $m_{kj}$ is the element (k,j) of the coding design matrix M.

• I is the indicator function.

• $\hat{p}_k$ is the class posterior probability estimate for class k of an observation, k = 1,...,K.

• $r_j$ is the positive-class posterior probability for binary learner j. That is, $r_j$ is the probability that binary learner j classifies an observation into the positive class, given the training data.

### Posterior Estimation Using Kullback-Leibler Divergence

By default, the software minimizes the Kullback-Leibler divergence to estimate class posterior probabilities. The Kullback-Leibler divergence between the expected and observed positive-class posterior probabilities is

$$\Delta(r, \hat{r}) = \sum_{j=1}^{L} w_j \left[ r_j \log \frac{r_j}{\hat{r}_j} + (1 - r_j) \log \frac{1 - r_j}{1 - \hat{r}_j} \right],$$

where $w_j = \sum_{i \in S_j} w_i^{\ast}$ is the weight for binary learner j.

• $S_j$ is the set of observation indices on which binary learner j is trained.

• $w_i^{\ast}$ is the weight of observation i.

The software minimizes the divergence iteratively. The first step is to choose initial values $\hat{p}_k^{(0)},\ k = 1,\dots,K,$ for the class posterior probabilities.

• If you do not specify `'NumKLInitializations'`, then the software tries both sets of deterministic initial values described next, and selects the set that minimizes Δ.

• $\hat{p}_k^{(0)} = 1/K,\ k = 1,\dots,K.$

• $\hat{p}_k^{(0)},\ k = 1,\dots,K,$ is the solution of the system

$$M_{01}\, \hat{p}^{(0)} = r,$$

where $M_{01}$ is M with all $m_{kj} = -1$ replaced with 0, and r is a vector of positive-class posterior probabilities returned by the L binary learners [2]. The software uses `lsqnonneg` to solve the system.

• If you specify `'NumKLInitializations',c`, where `c` is a natural number, then the software does the following to choose the set $\hat{p}_k^{(0)},\ k = 1,\dots,K,$ and selects the set that minimizes Δ.

• The software tries both sets of deterministic initial values as described previously.

• The software randomly generates `c` vectors of length K using `rand`, and then normalizes each vector to sum to 1.

At iteration t, the software completes these steps:

1. Compute

$$\hat{r}_j^{(t)} = \frac{\sum_{k=1}^{K} \hat{p}_k^{(t)}\, I(m_{kj} = +1)}{\sum_{k=1}^{K} \hat{p}_k^{(t)}\, I(m_{kj} = +1 \cup m_{kj} = -1)}.$$

2. Estimate the next class posterior probability using

$$\hat{p}_k^{(t+1)} = \hat{p}_k^{(t)}\, \frac{\sum_{j=1}^{L} w_j \left[ r_j\, I(m_{kj} = +1) + (1 - r_j)\, I(m_{kj} = -1) \right]}{\sum_{j=1}^{L} w_j \left[ \hat{r}_j^{(t)}\, I(m_{kj} = +1) + \left(1 - \hat{r}_j^{(t)}\right) I(m_{kj} = -1) \right]}.$$

3. Normalize $\hat{p}_k^{(t+1)},\ k = 1,\dots,K,$ so that the estimates sum to 1.

4. Check for convergence.

For more details, see [5] and [7].
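A compact sketch of this iteration, assuming equal learner weights ($w_j = 1$) and that `M` (the K-by-B coding matrix), `r` (1-by-B positive-class probabilities from the binary learners), and `K` exist in the workspace:

```
Ipos = double(M == 1);
Ineg = double(M == -1);
p = ones(K,1)/K;                             % uniform initial estimate
for t = 1:1000
    rhat = (p'*Ipos) ./ (p'*(Ipos + Ineg));  % step 1: predicted r_j
    num  = Ipos*r' + Ineg*(1 - r)';          % step 2: per-class numerator
    den  = Ipos*rhat' + Ineg*(1 - rhat)';    %         per-class denominator
    pNew = p .* num ./ den;
    pNew = pNew / sum(pNew);                 % step 3: normalize to sum to 1
    if max(abs(pNew - p)) < 1e-8             % step 4: check convergence
        p = pNew; break
    end
    p = pNew;
end
```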

### Posterior Estimation Using Quadratic Programming

Posterior probability estimation using quadratic programming requires an Optimization Toolbox license. To estimate posterior probabilities for an observation using this method, the software completes these steps:

1. Estimate the positive-class posterior probabilities, $r_j$, for binary learners $j = 1,\dots,L$.

2. Using the relationship between $r_j$ and $\hat{p}_k$ [6], minimize

$$\sum_{j=1}^{L} \left[ -r_j \sum_{k=1}^{K} \hat{p}_k\, I(m_{kj} = -1) + (1 - r_j) \sum_{k=1}^{K} \hat{p}_k\, I(m_{kj} = +1) \right]^2$$

with respect to $\hat{p}_k$ and subject to the restrictions

$$0 \le \hat{p}_k \le 1, \qquad \sum_{k} \hat{p}_k = 1.$$

The software performs minimization using `quadprog` (Optimization Toolbox).
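A minimal sketch of this formulation with `quadprog`, under the same assumptions about `M`, `r`, and `K` as the previous sketch:

```
% Objective: ||A*p||^2 with A(j,k) = -r_j*I(m_kj=-1) + (1-r_j)*I(m_kj=+1)
Ipos = double(M == 1);
Ineg = double(M == -1);
A = -r'.*Ineg' + (1 - r').*Ipos';   % B-by-K coefficient matrix
H = 2*(A'*A);                       % quadprog minimizes 0.5*p'*H*p + f'*p
f = zeros(K,1);
p = quadprog(H,f,[],[],ones(1,K),1,zeros(K,1),ones(K,1));  % sum(p)=1, 0<=p<=1
```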

## References

[1] Allwein, E., R. Schapire, and Y. Singer. “Reducing multiclass to binary: A unifying approach for margin classifiers.” Journal of Machine Learning Research. Vol. 1, 2000, pp. 113–141.

[2] Dietterich, T., and G. Bakiri. “Solving Multiclass Learning Problems Via Error-Correcting Output Codes.” Journal of Artificial Intelligence Research. Vol. 2, 1995, pp. 263–286.

[3] Escalera, S., O. Pujol, and P. Radeva. “Separability of ternary codes for sparse designs of error-correcting output codes.” Pattern Recog. Lett., Vol. 30, Issue 3, 2009, pp. 285–297.

[4] Escalera, S., O. Pujol, and P. Radeva. “On the decoding process in ternary error-correcting output codes.” IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 32, Issue 7, 2010, pp. 120–134.

[5] Hastie, T., and R. Tibshirani. “Classification by Pairwise Coupling.” Annals of Statistics. Vol. 26, Issue 2, 1998, pp. 451–471.

[6] Wu, T. F., C. J. Lin, and R. Weng. “Probability Estimates for Multi-Class Classification by Pairwise Coupling.” Journal of Machine Learning Research. Vol. 5, 2004, pp. 975–1005.

[7] Zadrozny, B. “Reducing Multiclass to Binary by Coupling Probability Estimates.” NIPS 2001: Proceedings of Advances in Neural Information Processing Systems 14, 2001, pp. 1041–1048.

## Version History

Introduced in R2014b