predict

Compute deep learning network output for inference by using a TensorFlow Lite model

Since R2022a

Description

Y = predict(net,X) returns the network output Y during inference given the input data X and the network net with a single input and a single output.

To use this function, you must install the Deep Learning Toolbox Interface for TensorFlow Lite support package.

[Y1,...,YN] = predict(net,X) returns the N outputs Y1, …, YN during inference for networks that have N outputs.
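
For example, a minimal sketch of the multiple-output syntax, assuming a hypothetical model file named two_output_model.tflite for which NumOutputs is 2:

net = loadTFLiteModel('two_output_model.tflite'); % hypothetical two-output model
[Y1,Y2] = predict(net,X);                         % each network output is returned as a separate array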

___ = predict(___,Name=Value) specifies options that control the data types (int8/uint8 versus single) of the inputs and outputs when net is a quantized model. An additional option enables or disables the execution of predict on the Windows® platform for quantized models. All of these name-value arguments are ignored if net is not a quantized model.

Tip

For prediction with SeriesNetwork and DAGNetwork objects, see the Deep Learning Toolbox predict function.

Examples

Suppose that your current working directory contains a TensorFlow™ Lite model named mobilenet_v1_0.5_224.tflite.

Load the model by using the loadTFLiteModel function. Inspect the object this function creates.

net = loadTFLiteModel('mobilenet_v1_0.5_224.tflite');
disp(net)
  TFLiteModel with properties:
            ModelName: 'mobilenet_v1_0.5_224.tflite'
            NumInputs: 1
           NumOutputs: 1
            InputSize: {[224 224 3]}
           OutputSize: {[1001 1]}
           NumThreads: 8
                 Mean: 127.5000
    StandardDeviation: 127.5000

Create a MATLAB® function that performs inference using the object net. This function loads the Mobilenet-V1 model into a persistent network object and then performs prediction by passing the network object to the predict function. Subsequent calls to this function reuse the persistent object.

function out = tflite_predict(in)
% Load the TFLite model once and cache it in a persistent object
persistent net;
if isempty(net)
    net = loadTFLiteModel('mobilenet_v1_0.5_224.tflite');
end
% Run inference with the cached network
out = predict(net,in);
end
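
You can call this function directly in MATLAB. A minimal usage sketch follows; the image file and preprocessing are placeholders, so match them to how your model was trained:

I = imread('peppers.png');          % example image that ships with MATLAB
I = imresize(I,[224 224]);          % resize to the 224-by-224-by-3 input size (requires Image Processing Toolbox)
scores = tflite_predict(single(I)); % returns 1001 class scores for this model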

For an example that shows how to generate code for this function and deploy on Raspberry Pi® hardware, see Generate Code for TensorFlow Lite (TFLite) Model and Deploy on Raspberry Pi.

Input Arguments

net — TensorFlow Lite model
TFLiteModel object that represents the TensorFlow Lite model file.

X — Input data
Image or sequence input to the network, specified as a numeric array.

  • For image classification networks, the input must have the shape (H,W,C,N), where H is the height, W is the width, C is the number of channels, and N is the batch size, as in the sketch after this list.

  • For recurrent neural networks, the input must have the shape (D,N,S), where D is the channel or feature dimension, N is the batch size, and S is the timestep or sequence length.
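
For example, a minimal sketch of an image input for the Mobilenet-V1 model above, using random data in place of a real image:

X = rand(224,224,3,1,'single'); % shape (H,W,C,N) = (224,224,3,1)
Y = predict(net,X);             % Y contains 1001 class scores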

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: Y = predict(net,X,QuantizeInputs=true)

QuantizeInputs — Option to quantize inputs
Whether to convert the inputs to a quantized model to int8/uint8 values before performing the inference computation, specified as one of these values:

  • false (default) — The predict function directly uses the inputs that you provide.

  • true — The predict function quantizes the inputs to the type expected by the InputType property of the model before performing the inference computation.

This option applies only to quantized TFLite models and is ignored if the model is not quantized. For quantized TFLite models, you must set QuantizeInputs to true if the input to the network is of type single or double.

For code generation workflows, you must specify this option as a constant.

Data Types: logical
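
A minimal sketch for a quantized model, assuming a hypothetical quantized model file named model_int8.tflite:

qnet = loadTFLiteModel('model_int8.tflite');     % hypothetical quantized model
% The input is single, so QuantizeInputs must be true; predict quantizes
% the input to the type given by the InputType property of the model
Y = predict(qnet,single(X),QuantizeInputs=true);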

Whether to convert the outputs of a quantized model to single data type before returning them, specified as one of these values:

  • false (default) — The predict function returns int8/uint8 values.

  • true — The predict function converts the outputs to single data type before returning them.

This option applies only to quantized TFLite models and is ignored if the model is not quantized.

For code generation workflows, you must specify this option as a constant.

Data Types: logical
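
A sketch of combining input quantization with single-precision outputs. This page's extraction elides the argument name, so QuantizeOutputs below is an assumption based on the surrounding description:

% QuantizeOutputs is an assumed name for the output-conversion option
Y = predict(qnet,single(X),QuantizeInputs=true,QuantizeOutputs=true);
class(Y) % 'single' when output conversion is enabled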

Whether to allow predict to run on the Windows platform for quantized models, specified as one of these values:

  • false (default) — Running the predict function with a quantized model produces an error on the Windows platform. This default behavior allows you to detect existing issues with the TFLite library v2.8.0 for quantized models on the Windows platform.

  • true — Running the predict function with a quantized model on the Windows platform bypasses the default error and executes to completion.

This option applies only to quantized TFLite models and is ignored if the model is not quantized. This option is also ignored on the Linux® platform and in embedded deployment workflows.

For code generation workflows, you must specify this option as a constant.

Data Types: logical

Output Arguments

collapse all

Y — Output data
Output data, returned as a numeric array.

When performing inference with quantized TensorFlow Lite models, the output data is normalized in one of these ways:

  • Signed 8-bit integer type outputs are normalized as output[i] = (prediction[i] + 128) / 256.0.

  • Unsigned 8-bit integer type outputs are normalized as output[i] = prediction[i] / 255.0.
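
As MATLAB code, these normalizations look like the following sketch, where prediction holds the raw integer output of the network:

% Signed 8-bit output: map [-128,127] into [0,1)
out = (double(prediction) + 128) / 256.0;
% Unsigned 8-bit output: map [0,255] into [0,1]
out = double(prediction) / 255.0;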

Extended Capabilities

C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.

Version History

Introduced in R2022a