# edge

Classification edge for discriminant analysis classifier

## Syntax

``E = edge(mdl,X,Y)``
``E = edge(mdl,X,Y,Weights=w)``

## Description


`E = edge(mdl,X,Y)` returns the classification edge for `mdl` with predictor data `X` and class labels `Y`.

**Note:** If the predictor data `X` contains any missing values, the `edge` function can return NaN. For more details, see "edge can return NaN for predictor data with missing values" in the Version History.

`E = edge(mdl,X,Y,Weights=w)` computes the weighted classification edge using the observation weights `w`.
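As a quick sketch of the weighted syntax (using the Fisher iris data that appears later on this page): with uniform observation weights and the default empirical class priors, the weighted edge reduces to the unweighted one.

```
load fisheriris
mdl = fitcdiscr(meas,species);
w = ones(size(meas,1),1);                % uniform observation weights
E_w = edge(mdl,meas,species,Weights=w)   % same value as edge(mdl,meas,species)
```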

## Examples


Compute the classification edge and margin for the Fisher iris data, trained on its first two columns of data, and view the last 10 entries.

```
load fisheriris
X = meas(:,1:2);
obj = fitcdiscr(X,species);
E = edge(obj,X,species)
```

```
E = 0.4980
```

```
M = margin(obj,X,species);
M(end-10:end)
```

```
ans = 11×1

    0.6551
    0.4838
    0.6551
   -0.5127
    0.5659
    0.4611
    0.4949
    0.1024
    0.2787
   -0.1439
      ⋮
```

The classifier trained on all the data is better.

```
obj = fitcdiscr(meas,species);
E = edge(obj,meas,species)
```

```
E = 0.9454
```

```
M = margin(obj,meas,species);
M(end-10:end)
```

```
ans = 11×1

    0.9983
    1.0000
    0.9991
    0.9978
    1.0000
    1.0000
    0.9999
    0.9882
    0.9937
    1.0000
      ⋮
```
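The `Weights` argument is not exercised above. As an illustrative sketch, halving the weight of the virginica observations de-emphasizes that class in the weighted mean; the resulting edge generally differs from the unweighted value.

```
load fisheriris
obj = fitcdiscr(meas,species);
w = ones(numel(species),1);
w(strcmp(species,'virginica')) = 0.5;   % down-weight one class
E_w = edge(obj,meas,species,Weights=w)
```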

## Input Arguments


`mdl` — Trained discriminant analysis classifier, specified as a `ClassificationDiscriminant` or `CompactClassificationDiscriminant` model object trained with `fitcdiscr`.

`X` — Predictor data to classify, specified as a numeric matrix. Each row of the matrix represents an observation, and each column represents a predictor. The number of columns in `X` must equal the number of predictors in `mdl`.

`Y` — Class labels, specified with the same data type as the labels used to train `mdl`. The number of elements of `Y` must equal the number of rows of `X`.

`w` — Observation weights, specified as a numeric vector of length `size(X,1)`.

## Output Arguments


`E` — Classification edge, returned as a numeric scalar. The edge is the weighted mean value of the classification margin.

## More About

### Edge

The edge is the weighted mean value of the classification margin. The weights are class prior probabilities. If you supply additional weights, those weights are normalized to sum to the prior probabilities in the respective classes, and are then used to compute the weighted average.
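When you supply no observation weights, the classes are equally sized, and the priors are the default empirical ones (as with the iris data), the edge reduces to the plain mean of the margins. A minimal sketch of this equivalence:

```
load fisheriris
obj = fitcdiscr(meas,species);
M = margin(obj,meas,species);
E_manual = mean(M)   % matches edge(obj,meas,species) here, because the three
                     % classes are equally sized and the priors are empirical
```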

### Margin

The classification margin is the difference between the classification score for the true class and the maximal classification score for the false classes.

The classification margin is a column vector with the same number of rows as the matrix `X`. A high margin value indicates a more reliable prediction than a low value.
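This definition can be reproduced directly from the posterior scores returned by `predict`; the sketch below masks the true-class score with `-Inf` as one way to take the maximum over the false classes only.

```
load fisheriris
obj = fitcdiscr(meas,species);
[~,score] = predict(obj,meas);                  % posterior scores, one column per class
[~,trueIdx] = ismember(species,obj.ClassNames); % column index of each true class
n = numel(trueIdx);
lin = sub2ind(size(score),(1:n)',trueIdx);
sTrue = score(lin);                             % score of the true class
sFalse = score;
sFalse(lin) = -Inf;                             % exclude the true class
M_manual = sTrue - max(sFalse,[],2);            % same values as margin(obj,meas,species)
```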

### Score (discriminant analysis)

For discriminant analysis, the score of a classification is the posterior probability of the classification. For the definition of posterior probability in discriminant analysis, see Posterior Probability.
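Because the scores are posterior probabilities, each row of the score matrix returned by `predict` sums to 1. A quick check:

```
load fisheriris
obj = fitcdiscr(meas,species);
[label,score] = predict(obj,meas(1:3,:));
sum(score,2)   % each row sums to 1
```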

## Version History

Introduced in R2011b
