
coder.CuDNNConfig

Parameters to configure deep learning code generation with the CUDA Deep Neural Network library

Description

The coder.CuDNNConfig object contains NVIDIA® cuDNN specific parameters that codegen uses for generating CUDA® code for deep neural networks.

To use a coder.CuDNNConfig object for code generation, assign it to the DeepLearningConfig property of a coder.gpuConfig object that you pass to codegen.
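For example, a minimal setup might look like the following sketch. The entry-point name myEntryPoint and the input size are placeholders, not part of this reference page:

cfg = coder.gpuConfig('mex');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');   % returns a coder.CuDNNConfig object
codegen -config cfg myEntryPoint -args {ones(224,224,3,'single')}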

Creation

Create a cuDNN configuration object by using the coder.DeepLearningConfig function with the target library set to 'cudnn'.
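For example, this call returns a coder.CuDNNConfig object (the variable name dlcfg is arbitrary):

dlcfg = coder.DeepLearningConfig('cudnn');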

Properties


AutoTuning — Enable or disable the auto tuning feature. Enabling auto tuning allows the cuDNN library to find the fastest convolution algorithms, which improves performance for larger networks such as SegNet and ResNet.
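For example, assuming cfg is a coder.gpuConfig object whose DeepLearningConfig property holds a coder.CuDNNConfig object, you can turn auto tuning off like this:

cfg.DeepLearningConfig.AutoTuning = false;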

DataType — Specify the precision of the inference computations in supported layers. To perform inference in 32-bit floats, use 'fp32'. For 8-bit integers, use 'int8'. The default value is 'fp32'.

INT8 precision requires a CUDA GPU with a minimum compute capability of 6.1. Compute capability 6.2 does not support INT8 precision. Use the ComputeCapability property of the coder.gpuConfig object to set the appropriate compute capability value, as shown in the sketch after the note below.

Note

When performing inference in INT8 precision using cuDNN version 8.1.0, issues in the NVIDIA library may cause significant degradation in performance.
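For example, the following sketch selects INT8 inference and sets a compatible compute capability. The value '7.5' is illustrative; use the value that matches your GPU:

cfg = coder.gpuConfig('mex');
cfg.GpuConfig.ComputeCapability = '7.5';
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
cfg.DeepLearningConfig.DataType = 'int8';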

CalibrationResultFile — Location of the MAT-file containing the calibration data. The default value is ''. This option applies only when DataType is set to 'int8'.

When performing quantization of a deep convolutional neural network, the calibrate (Deep Learning Toolbox) function exercises the network and collects the dynamic ranges of the weights and biases in the convolution and fully connected layers of the network and the dynamic ranges of the activations in all layers of the network. To generate code for the optimized network, save the results from the calibrate function to a MAT-file and specify the location of this MAT-file to the code generator using this property. For more information, see Generate INT8 Code for Deep Learning Networks.
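The following sketch outlines that workflow, assuming net is the network to quantize and calData is a datastore of calibration images. The variable and file names are placeholders; see the dlquantizer and calibrate documentation for the complete workflow:

% Collect dynamic ranges by exercising the network with calibration data.
quantObj = dlquantizer(net, ExecutionEnvironment = 'GPU');
calResults = calibrate(quantObj, calData);

% Save the calibration results and point the code generator at the MAT-file.
save('calibrationResults.mat', 'calResults');
cfg = coder.gpuConfig('mex');
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
cfg.DeepLearningConfig.DataType = 'int8';
cfg.DeepLearningConfig.CalibrationResultFile = 'calibrationResults.mat';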

TargetLibrary — A read-only value that specifies the name of the target library.

Examples


Create an entry-point function resnet_predict that uses the imagePretrainedNetwork function to load the dlnetwork object that contains the ResNet-50 network. For more information, see Code Generation for dlarray.

function out = resnet_predict(in)
%#codegen

% Convert the input image to a formatted dlarray (spatial, spatial, channel, batch).
dlIn = dlarray(in, 'SSCB');

% Load the pretrained ResNet-50 dlnetwork once and reuse it across calls.
persistent dlnet;
if isempty(dlnet)
    dlnet = imagePretrainedNetwork('resnet50');
end

% Run inference and return the underlying numeric data.
dlOut = predict(dlnet, dlIn);
out = extractdata(dlOut);

end

Create a coder.gpuConfig configuration object for MEX code generation.

cfg = coder.gpuConfig('mex');

Set the target language to C++.

cfg.TargetLang = 'C++';

Create a coder.CuDNNConfig deep learning configuration object and assign it to the DeepLearningConfig property of the cfg configuration object.

cfg.DeepLearningConfig = coder.DeepLearningConfig(TargetLibrary = 'cudnn');

Use the -config option of the codegen function to pass the cfg configuration object. Because the codegen function must determine the size, class, and complexity of the MATLAB® function inputs, use the -args option to specify the input to the entry-point function.

codegen -args {ones(224,224,3,'single')} -config cfg resnet_predict;

The codegen command places all the generated files in the codegen folder. The folder contains the CUDA code for the entry-point function (resnet_predict.cu), header and source files containing the C++ class definitions for the convolutional neural network (CNN), and the weight and bias files.

Version History

Introduced in R2018b