Prerequisites for Deep Learning with TensorFlow Lite Models

MathWorks Products

To perform inference with TensorFlow™ Lite models in MATLAB® execution, or by using MATLAB Function blocks in Simulink® models, you must install the Deep Learning Toolbox™ Interface for TensorFlow Lite support package.

To generate code for TensorFlow Lite models, you must also install MATLAB Coder™.

Third-Party Hardware and Software

Deployment Platform

MATLAB host computer or ARM® processor

Software Libraries

TensorFlow Lite version 2.4.1 on host computer or target. For information on building the library, see this post in MATLAB Answers™: https://www.mathworks.com/matlabcentral/answers/1631265.
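
The MATLAB Answers post above is the documented reference for building the library. As a rough orientation only, a from-source build of TensorFlow Lite 2.4.1 with CMake might look like the following sketch; the repository tag is real, but the directory names and flags here are assumptions, not the officially documented steps:

```shell
# Sketch: building TensorFlow Lite 2.4.1 from source with CMake
# (illustrative only -- follow the MATLAB Answers post linked above for
# the supported procedure; paths and job count are assumptions)
git clone --depth 1 --branch v2.4.1 https://github.com/tensorflow/tensorflow.git
mkdir tflite_build && cd tflite_build
cmake ../tensorflow/tensorflow/lite   # TFLite ships a CMakeLists.txt in this subdirectory
cmake --build . -j4                   # builds the static/shared TFLite library
```

For an ARM target, you would typically cross-compile by passing a CMake toolchain file for your board instead of building natively.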

Supported models include:

  • Classification and object detection networks

  • Recurrent neural networks

  • Multi-output networks

Multi-input networks are not supported.

TensorFlow Lite models are forward and backward compatible. If your model was created using a different version of the library but contains only layers that are available in version 2.4.1, you can still generate code and deploy the model.

Operating System Support

Windows® and Linux® only. CentOS and Red Hat® Linux distributions are not supported.

Supported Compilers

MATLAB Coder locates and uses a supported installed compiler.

To generate MEX functions on the Windows platform, use one of these compilers:

  • Microsoft® Visual C++® 2017

  • Microsoft Visual C++ 2019

  • Microsoft Visual C++ 2022

  • Intel® oneAPI 2021 for C++ with Microsoft Visual Studio® 2017

  • Intel oneAPI 2021 for C++ with Microsoft Visual Studio 2019

For the list of supported compilers on the Linux platform, see Supported and Compatible Compilers on the MathWorks® website.

You can use mex -setup to change the default compiler. See Change Default Compiler.

The C++ compiler must support C++11.

Environment Variables

MATLAB Coder uses environment variables to locate the libraries required to generate code for deep learning networks.

For deployment on the MATLAB host computer, set these environment variables on the host:

  • TFLITE_PATH: Location of the TensorFlow Lite library directory.

  • LD_LIBRARY_PATH: Location of the run-time shared library. For example, TFLITE_PATH/lib/tensorflow/lite. (For Linux platform.)

  • PATH: Location of the run-time shared library. For example, TFLITE_PATH\lib\tensorflow\lite. (For Windows platform.)
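
On a Linux host, the variables above can be set in the shell before starting MATLAB. The install location below is an assumption for illustration; substitute the directory where you actually placed the TensorFlow Lite library (on Windows, set `TFLITE_PATH` and `PATH` analogously through the System Properties dialog or `setx`):

```shell
# Example host setup on Linux. /opt/tflite is an assumed install
# location -- replace it with your actual TensorFlow Lite directory.
export TFLITE_PATH=/opt/tflite
# Run-time shared library location, per the pattern shown above
export LD_LIBRARY_PATH=$TFLITE_PATH/lib/tensorflow/lite:$LD_LIBRARY_PATH
```

Add these lines to your shell startup file (for example, `~/.bashrc`) so they persist across sessions.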

For deployment on an ARM processor, set these environment variables on the target hardware board:

  • TFLITE_PATH: Location of the TensorFlow Lite library directory.

  • LD_LIBRARY_PATH: Location of the run-time shared library. For example, TFLITE_PATH/lib/tensorflow/lite.

  • TFLITE_MODEL_PATH: Location of the TensorFlow Lite model that you intend to deploy.
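
On the target board, the same pattern applies, plus the model location. The paths below are illustrative assumptions (a Raspberry Pi™ style home directory); use the directories that exist on your board:

```shell
# Example setup on the ARM target board. Both paths are assumed
# locations -- adjust to where the library and model live on the board.
export TFLITE_PATH=/home/pi/tflite
export LD_LIBRARY_PATH=$TFLITE_PATH/lib/tensorflow/lite:$LD_LIBRARY_PATH
# Directory containing the .tflite model you intend to deploy
export TFLITE_MODEL_PATH=/home/pi/models
```

Setting these in the board's shell startup file ensures the generated executable can locate the library and model at run time.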
