trainPatchCoreAnomalyDetector

Train PatchCore anomaly detection network

Since R2023a

    Description

    detector = trainPatchCoreAnomalyDetector(normalData,detectorIn) trains the input PatchCore anomaly detection network detectorIn. The training data consists of normal images in normalData.

    Note

    This functionality requires Deep Learning Toolbox™ and the Computer Vision Toolbox™ Automated Visual Inspection Library. You can install the Computer Vision Toolbox Automated Visual Inspection Library from Add-On Explorer. For more information about installing add-ons, see Get and Manage Add-Ons.

    Note

    It is recommended that you also have Parallel Computing Toolbox™ to use with a CUDA®-enabled NVIDIA® GPU. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).

    detector = trainPatchCoreAnomalyDetector(___,Name=Value) specifies options that control aspects of network creation and training as one or more name-value arguments, in addition to all input arguments from the previous syntax.

    Examples

    Load a data set that consists of images of digits from 0 to 9. Consider images of the digit 8 to be normal, and all other digits to be anomalous.

    dataDir = fullfile(toolboxdir("vision"),"visiondata","digits","synthetic");
    dsNormal = imageDatastore(fullfile(dataDir,"8"));

    Create a patchCoreAnomalyDetector object.

    untrainedDetector = patchCoreAnomalyDetector(Backbone="resnet18");

    Train the anomaly detector.

    detector = trainPatchCoreAnomalyDetector(dsNormal,untrainedDetector);
    Computing Input Normalization Statistics.
    Training PatchCore Model.
    -------------------------
    
    Step 1: Computing embeddings for each minibatch
    Done creating uncompressed train embeddings.
    Step 2: Compress train embeddings at 0.1 factor to create a coreset...
    
    Computing coreset samples
    Done compressing train embeddings (coreset).
    Done training PatchCore model.
    

    Set the anomaly threshold of the detector using a calibration data set.

    calDir = fullfile(toolboxdir("vision"),"visiondata","digits","handwritten");
    dsCal = imageDatastore(calDir,IncludeSubfolders=true,LabelSource="foldernames");
    gtLabels = dsCal.Labels;
    anomalyLabels = setdiff(string(0:9),"8");
    scores = predict(detector,dsCal);
    [T,roc] = anomalyThreshold(gtLabels,scores,anomalyLabels)
    T = single
        1.4920
    
    roc = 
      rocmetrics with properties:
    
        Metrics: [121x4 table]
            AUC: 0.8573
    
    
    
    
    detector.Threshold = T;
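
    With the threshold set, a natural next step is to use the detector on new images. This sketch (not part of the original example) classifies a calibration image and visualizes its per-pixel anomaly map; it assumes the `classify`, `anomalyMap`, and `anomalyMapOverlay` functions of the Automated Visual Inspection Library.

    ```matlab
    % Sketch: apply the calibrated detector to one calibration image.
    testImage = read(dsCal);                 % read the first image
    decision = classify(detector,testImage); % true if anomalous
    map = anomalyMap(detector,testImage);    % per-pixel anomaly scores
    imshow(anomalyMapOverlay(testImage,map)) % overlay scores on the image
    ```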

    Input Arguments


    normalData

    Training data, specified as a datastore. The training data consists of samples of normal images. Do not include anomaly images in the training data.

    detectorIn

    PatchCore anomaly detector to train, specified as a patchCoreAnomalyDetector object.

    Name-Value Arguments

    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Example: trainPatchCoreAnomalyDetector(normalData,detectorIn,CompressionRatio=0.1) specifies that the detector must use only 10% of the features extracted from training images.

    CompressionRatio

    Compression ratio, specified as a number in the range [0, 1]. This value specifies the fraction of the features extracted from the training images that the detector retains when constructing the memory bank. For example, a compression ratio of 0.1 specifies that the detector must use only 10% of the features. This ratio is a trade-off between memory and accuracy: a smaller CompressionRatio value increases compression and reduces memory usage, but decreases accuracy.
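
    For instance, a sketch reusing the datastore and untrained detector from the example above, with an illustrative compression ratio:

    ```matlab
    % Sketch: a smaller CompressionRatio keeps fewer features (more
    % compression), trading accuracy for a smaller memory bank.
    % The value 0.05 is an illustrative choice, not a recommendation.
    detector = trainPatchCoreAnomalyDetector(dsNormal,untrainedDetector, ...
        CompressionRatio=0.05);   % retain 5% of the extracted features
    ```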

    MiniBatchSize

    Size of the mini-batch to use for each training iteration, specified as a positive integer. A mini-batch is a subset of the training set that the training function processes at one time.
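
    A sketch of adjusting the mini-batch size, again reusing the names from the example above:

    ```matlab
    % Sketch: a smaller mini-batch lowers peak memory per iteration at the
    % cost of more iterations. The value 16 is illustrative.
    detector = trainPatchCoreAnomalyDetector(dsNormal,untrainedDetector, ...
        MiniBatchSize=16);
    ```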

    ExecutionEnvironment

    Hardware resource for running the neural network, specified as one of these options:

    • "auto" — Use a GPU if one is available. Otherwise, use the CPU.

    • "gpu" — Use the GPU. To use a GPU, you must have Parallel Computing Toolbox and a CUDA-enabled NVIDIA GPU. If a suitable GPU is not available, the function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).

    • "cpu" — Use the CPU.

    For more information on when to use the different execution environments, see Scale Up Deep Learning in Parallel, on GPUs, and in the Cloud (Deep Learning Toolbox).
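
    One way to select the hardware explicitly is to test for a usable GPU first. This sketch assumes the `canUseGPU` function, which returns true when a supported GPU and Parallel Computing Toolbox are available:

    ```matlab
    % Sketch: request the GPU only when one is actually usable, falling
    % back to the CPU otherwise.
    if canUseGPU
        env = "gpu";
    else
        env = "cpu";
    end
    detector = trainPatchCoreAnomalyDetector(dsNormal,untrainedDetector, ...
        ExecutionEnvironment=env);
    ```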

    Reset input layer normalization using training images, specified as a logical 1 (true) or 0 (false):

    • 1 (true) — Reset the input layer normalization statistics and recalculate them at training time using the normalData data set.

    • 0 (false) — Calculate the input layer normalization statistics at training time only when they are empty; otherwise, keep the existing statistics.

    Subsampling method used to reduce redundancy in the feature bank that occurs during training, specified as one of these options:

    • "greedycoreset" — Greedy coreset subsampling method.

    • "random" — Random subsampling method.

    The greedy coreset subsampling method has higher accuracy at the expense of slower run time. The random subsampling method runs faster at the expense of some accuracy. When using the random subsampling method, increasing the number of training images can improve accuracy, but also increases the model size and inference time.
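
    To make the idea concrete, here is a minimal sketch of greedy coreset (farthest-point) subsampling on a random feature matrix. This is an illustration of the technique, not the toolbox implementation: it repeatedly selects the feature vector farthest from those already chosen, so the retained subset covers the feature space more evenly than random sampling.

    ```matlab
    % Illustrative greedy coreset (farthest-point) subsampling sketch.
    rng(0)
    features = rand(1000,64);      % 1000 feature vectors of length 64
    ratio = 0.1;                   % keep 10% of the features
    numKeep = round(ratio*size(features,1));

    selected = zeros(numKeep,1);
    selected(1) = 1;               % seed the coreset with the first feature
    minDist = vecnorm(features - features(1,:),2,2);
    for k = 2:numKeep
        [~,idx] = max(minDist);    % farthest point from the current coreset
        selected(k) = idx;
        d = vecnorm(features - features(idx,:),2,2);
        minDist = min(minDist,d);  % distance to nearest coreset member
    end
    coreset = features(selected,:);
    ```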

    Verbose

    Display training progress information in the command window, specified as a logical 1 (true) or 0 (false).

    Output Arguments


    Trained PatchCore anomaly detector, returned as a patchCoreAnomalyDetector object.

    Tips

    • For a given training image size and number of training images, if the peak memory required to create the memory bank exceeds the available memory, PatchCore issues a warning. To decrease memory usage, try reducing the image resolution, using fewer training images, or using a GPU with more memory, in that order, to balance ease of implementation against the goal of lowering memory usage.
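
    The first remedy, reducing image resolution, can be applied without modifying the image files by wrapping the datastore in a transform. This sketch reuses the datastore from the example above; the 0.5 scale factor is an illustrative choice:

    ```matlab
    % Sketch: downscale training images on the fly to lower peak memory.
    dsSmall = transform(dsNormal,@(img) imresize(img,0.5));
    detector = trainPatchCoreAnomalyDetector(dsSmall,untrainedDetector);
    ```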

    Version History

    Introduced in R2023a
