
precisionMetric

Deep learning precision metric

Since R2023b

    Description

    Use a PrecisionMetric object to track the precision when you train or test a deep neural network.

    To specify which metrics to use during training, specify the Metrics option of the trainingOptions function. You can use this option only when you train a network using the trainnet function.

    To plot the metrics during training, in the training options, specify Plots as "training-progress". If you specify the ValidationData training option, then the software also plots and records the metric values for the validation data. To output the metric values to the Command Window during training, in the training options, set Verbose to true.

    You can also access the metrics after training using the TrainingHistory and ValidationHistory fields from the second output of the trainnet function.

    To specify which metrics to use when you test a neural network, use the metrics argument of the testnet function.
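As a minimal sketch, testing a trained network with this metric might look like the following. The variable names net (a trained network) and imdsTest (a labeled test datastore) are placeholders for your own data:

```matlab
% Hypothetical variables: net is a trained network and imdsTest is an
% imageDatastore of labeled test images.
metric = precisionMetric;
results = testnet(net,imdsTest,metric);
```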

    Creation

    Description

    metric = precisionMetric creates a PrecisionMetric object. You can then specify metric as the Metrics name-value argument in the trainingOptions function or the metrics argument of the testnet function. With no additional options specified, this syntax is equivalent to specifying the metric as "precision".

    This metric is valid only for classification tasks.


    metric = precisionMetric(Name=Value) sets the Name, NetworkOutput, AverageType, and ClassificationMode properties using name-value arguments.
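For example, to create a precision metric with a custom display name that uses macro-averaging:

```matlab
% Create a precision metric with a custom name and macro averaging.
metric = precisionMetric(Name="MacroPrecision",AverageType="macro");
```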

    Properties


Name

Metric name, specified as a string scalar or character vector. The metric name appears in the training plot, the verbose output, the training information that you can access as the second output of the trainnet function, and the table output of the testnet function.

    Data Types: char | string

NetworkOutput

This property is read-only.

Name of the layer to apply the metric to, specified as [], a string scalar, or a character vector. When the value is [], the software passes all of the network outputs to the metric.

    Note

You can apply each built-in metric to only a single output. If you have a network with multiple outputs, then you must specify the NetworkOutput name-value argument and create a separate metric object for each output.

    Data Types: char | string
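As a hedged sketch, a network with two outputs might use one metric object per output. The output names "out1" and "out2" are assumptions for illustration only:

```matlab
% Hypothetical output layer names "out1" and "out2".
metrics = [ ...
    precisionMetric(Name="PrecisionOut1",NetworkOutput="out1")
    precisionMetric(Name="PrecisionOut2",NetworkOutput="out2")];
options = trainingOptions("adam",Metrics=metrics);
```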

AverageType

This property is read-only.

Type of averaging to use to compute the metric, specified as one of these values:

    • "micro" — Calculate the metric across all classes.

    • "macro" — Calculate the metric for each class and return the average.

    • "weighted" — Calculate the metric for each class and return the weighted average. The weight for a class is the proportion of observations from that class.

    For more information, see Averaging Type.

    Data Types: char | string
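The three averaging types can be illustrated with a small confusion matrix. The counts below are made up for illustration and are not part of this reference:

```matlab
% Illustrative 3-class confusion matrix: C(i,j) counts observations of
% true class i predicted as class j.
C = [8 1 1; 2 6 2; 0 2 18];
TP = diag(C).';                  % true positives per class
FP = sum(C,1) - TP;              % false positives per class
perClass = TP./(TP + FP);        % per-class precision
microP = sum(TP)/sum(TP + FP)    % micro: pool counts across all classes
macroP = mean(perClass)          % macro: unweighted mean of per-class values
w = sum(C,2).'/sum(C,"all");     % class proportions (weights)
weightedP = sum(w.*perClass)     % weighted: proportion-weighted mean
```

With these counts, the micro, macro, and weighted averages all differ because the classes are imbalanced.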

ClassificationMode

This property is read-only.

Type of classification task, specified as one of these values:

    • "single-label" — Each observation is exclusively assigned one class label (single-label classification).

    • "multilabel" — Each observation can be assigned more than one independent class label (multilabel classification). The software uses a softmax threshold of 0.5 to assign class labels.

    To select the classification mode for binary classification, consider the final layer of the network:

    • If the final layer has an output size of one, such as with a sigmoid layer, use "multilabel".

    • If the final layer has an output size of two, such as with a softmax layer, use "single-label".

    Note

This metric is not supported when ClassificationMode is set to "single-label" and the network output has a channel dimension of size 1, such as when the network has a single class and the final layer is a sigmoidLayer object (a binary-sigmoid task).

    Data Types: char | string
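A hedged sketch of both binary patterns, where the layer arrays are minimal illustrations rather than complete networks:

```matlab
% One output unit followed by sigmoid: use "multilabel".
layersSigmoid = [featureInputLayer(4); fullyConnectedLayer(1); sigmoidLayer];
metricSigmoid = precisionMetric(ClassificationMode="multilabel");

% Two output units followed by softmax: use "single-label".
layersSoftmax = [featureInputLayer(4); fullyConnectedLayer(2); softmaxLayer];
metricSoftmax = precisionMetric(ClassificationMode="single-label");
```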

Maximize

This property is read-only.

Flag to maximize the metric, specified as 1 (true). A value of 1 indicates that the optimal value for the metric occurs when the metric is maximized.

For this metric, the Maximize value is always 1 (true).

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical

    Object Functions

• trainingOptions - Options for training deep learning neural network
• trainnet - Train deep learning neural network

    Examples


    Plot and record the training and validation precision when you train a deep neural network.

    Unzip the digit sample data and create an image datastore. The imageDatastore function automatically labels the images based on folder names.

    unzip("DigitsData.zip")
    imds = imageDatastore("DigitsData", ...
        IncludeSubfolders=true, ...
        LabelSource="foldernames");

    The datastore contains 10,000 synthetic images of digits from 0 to 9. Each image in the data set has a size of 28-by-28-by-1 pixels. You can train a deep learning network to classify the digit in the image.

    Use a subset of the data as the validation set.

    numTrainingFiles = 750;
    [imdsTrain,imdsVal] = splitEachLabel(imds,numTrainingFiles,"randomize");

    Create an image classification network.

    layers = [ ...
        imageInputLayer([28 28 1])
        convolution2dLayer(5,20)
        reluLayer
        maxPooling2dLayer(2,Stride=2)
        fullyConnectedLayer(10)
        softmaxLayer];

    Create a PrecisionMetric object and set AverageType to "macro". You can use this object to record and plot the training and validation precision.

    metric = precisionMetric(AverageType="macro")
    metric = 
      PrecisionMetric with properties:
    
                      Name: "Precision"
               AverageType: "macro"
        ClassificationMode: "single-label"
             NetworkOutput: []
                  Maximize: 1
    
    

    Specify the precision metric in the training options. To plot the precision during training, set Plots to "training-progress". To output the values during training, set Verbose to true.

    options = trainingOptions("adam", ...
        MaxEpochs=5, ...
        Metrics=metric, ...
        ValidationData=imdsVal, ...
        ValidationFrequency=50, ...
        Plots="training-progress", ...
        Verbose=true);

    Train the network using the trainnet function.

    [net,info] = trainnet(imdsTrain,layers,"crossentropy",options);
        Iteration    Epoch    TimeElapsed    LearnRate    TrainingLoss    ValidationLoss    TrainingPrecision    ValidationPrecision
        _________    _____    ___________    _________    ____________    ______________    _________________    ___________________
                0        0       00:00:06        0.001                            13.488                                     0.14828
                1        1       00:00:06        0.001          13.974                                  0.035                       
               50        1       00:00:26        0.001          2.7424            2.7448              0.70805                 0.6987
              100        2       00:00:33        0.001          1.2965            1.2235              0.81573                0.81146
              150        3       00:00:40        0.001         0.64661           0.80412              0.88573                0.86434
              200        4       00:00:47        0.001         0.18627           0.53273              0.94937                0.89793
              250        5       00:00:55        0.001         0.16763           0.49371              0.94726                0.89981
              290        5       00:01:00        0.001         0.25976           0.39347              0.95243                0.91652
    Training stopped: Max epochs completed
    

    Access the loss and precision values for the validation data.

    info.ValidationHistory
    ans=7×3 table
        Iteration     Loss      Precision
        _________    _______    _________
    
             0        13.488     0.14828 
            50        2.7448      0.6987 
           100        1.2235     0.81146 
           150       0.80412     0.86434 
           200       0.53273     0.89793 
           250       0.49371     0.89981 
           290       0.39347     0.91652 
    
    

    More About


    Version History

    Introduced in R2023b