Profile Network to Determine Performance Bottlenecks
This example shows how to identify performance bottlenecks in a deep learning network on an FPGA by using the Profile option of the predict method.
Prerequisites
Xilinx® ZCU102 SoC development kit.
Deep Learning HDL Toolbox™ Support Package for Xilinx® FPGA and SoC
Deep Learning Toolbox™
Deep Learning HDL Toolbox™
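Optionally, you can confirm that the required support package is installed by listing the installed support packages at the MATLAB command line. This quick check is not part of the original example:
% Optional check: list installed support packages and confirm that the
% Deep Learning HDL Toolbox Support Package for Xilinx devices appears.
matlabshared.supportpkg.getInstalled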
Load the Pretrained Network
Load the pretrained digits network:
net = getDigitsNetwork;
net.Layers
ans = 
  14×1 Layer array with layers:

     1   'imageinput'    Image Input           28×28×1 images with 'zerocenter' normalization
     2   'conv_1'        2-D Convolution       8 3×3×1 convolutions with stride [1 1] and padding 'same'
     3   'batchnorm_1'   Batch Normalization   Batch normalization with 8 channels
     4   'relu_1'        ReLU                  ReLU
     5   'maxpool_1'     2-D Max Pooling       2×2 max pooling with stride [2 2] and padding [0 0 0 0]
     6   'conv_2'        2-D Convolution       16 3×3×8 convolutions with stride [1 1] and padding 'same'
     7   'batchnorm_2'   Batch Normalization   Batch normalization with 16 channels
     8   'relu_2'        ReLU                  ReLU
     9   'maxpool_2'     2-D Max Pooling       2×2 max pooling with stride [2 2] and padding [0 0 0 0]
    10   'conv_3'        2-D Convolution       32 3×3×16 convolutions with stride [1 1] and padding 'same'
    11   'batchnorm_3'   Batch Normalization   Batch normalization with 32 channels
    12   'relu_3'        ReLU                  ReLU
    13   'fc'            Fully Connected       10 fully connected layer
    14   'softmax'       Softmax               softmax
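If you want to inspect the architecture interactively before deployment, you can optionally open the network in the Deep Learning Network Analyzer. This step is not required for the example:
% Optional: open an interactive view of the layer graph.
analyzeNetwork(net)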
Define FPGA Board Interface
Define the target FPGA board programming interface by using the dlhdl.Target object. Specify that the interface is for a Xilinx board with an Ethernet interface. To create the target object, enter:
hTarget = dlhdl.Target('Xilinx',Interface="Ethernet");
To use the JTAG interface, install Xilinx® Vivado® Design Suite 2023.1. To set the Xilinx Vivado tool path and use the JTAG interface, enter:
hdlsetuptoolpath('ToolName', 'Xilinx Vivado', 'ToolPath', 'C:\Xilinx\Vivado\2023.1\bin\vivado.bat');
hTarget = dlhdl.Target('Xilinx',Interface='JTAG');
Prepare Network for Deployment
Prepare the network for deployment by creating a dlhdl.Workflow object. Specify the network and the bitstream name, and ensure that the bitstream name matches the data type and the FPGA board. In this example, the target FPGA board is the Xilinx ZCU102 SoC board and the bitstream uses a single data type.
hW = dlhdl.Workflow(Network=net,Bitstream="zcu102_single",Target=hTarget);
To run the example on a Xilinx ZC706 board, enter:
hW = dlhdl.Workflow(Network=net,Bitstream='zc706_single',Target=hTarget);
Compile Network
Run the compile method of the dlhdl.Workflow object to compile the network and generate the instructions, weights, and biases for deployment.
dn = compile(hW);
### Compiling network for Deep Learning FPGA prototyping ...
### Targeting FPGA bitstream zcu102_single.
### An output layer called 'Output1_softmax' of type 'nnet.cnn.layer.RegressionOutputLayer' has been added to the provided network. This layer performs no operation during prediction and thus does not affect the output of the network.
### Optimizing network: Fused 'nnet.cnn.layer.BatchNormalizationLayer' into 'nnet.cnn.layer.Convolution2DLayer'
### Notice: The layer 'imageinput' of type 'ImageInputLayer' is split into an image input layer 'imageinput' and an addition layer 'imageinput_norm' for normalization on hardware.
### The network includes the following layers:
     1   'imageinput'        Image Input         28×28×1 images with 'zerocenter' normalization                (SW Layer)
     2   'conv_1'            2-D Convolution     8 3×3×1 convolutions with stride [1 1] and padding 'same'     (HW Layer)
     3   'relu_1'            ReLU                ReLU                                                          (HW Layer)
     4   'maxpool_1'         2-D Max Pooling     2×2 max pooling with stride [2 2] and padding [0 0 0 0]       (HW Layer)
     5   'conv_2'            2-D Convolution     16 3×3×8 convolutions with stride [1 1] and padding 'same'    (HW Layer)
     6   'relu_2'            ReLU                ReLU                                                          (HW Layer)
     7   'maxpool_2'         2-D Max Pooling     2×2 max pooling with stride [2 2] and padding [0 0 0 0]       (HW Layer)
     8   'conv_3'            2-D Convolution     32 3×3×16 convolutions with stride [1 1] and padding 'same'   (HW Layer)
     9   'relu_3'            ReLU                ReLU                                                          (HW Layer)
    10   'fc'                Fully Connected     10 fully connected layer                                      (HW Layer)
    11   'softmax'           Softmax             softmax                                                       (SW Layer)
    12   'Output1_softmax'   Regression Output   mean-squared-error                                            (SW Layer)
### Notice: The layer 'softmax' with type 'nnet.cnn.layer.SoftmaxLayer' is implemented in software.
### Notice: The layer 'Output1_softmax' with type 'nnet.cnn.layer.RegressionOutputLayer' is implemented in software.
### Compiling layer group: conv_1>>maxpool_2 ...
### Compiling layer group: conv_1>>maxpool_2 ... complete.
### Compiling layer group: conv_3>>relu_3 ...
### Compiling layer group: conv_3>>relu_3 ... complete.
### Compiling layer group: fc ...
### Compiling layer group: fc ... complete.

### Allocating external memory buffers:

          offset_name          offset_address     allocated_space 
    _______________________    ______________    _________________

    "InputDataOffset"           "0x00000000"     "368.0 kB"       
    "OutputResultOffset"        "0x0005c000"     "4.0 kB"         
    "SchedulerDataOffset"       "0x0005d000"     "220.0 kB"       
    "SystemBufferOffset"        "0x00094000"     "76.0 kB"        
    "InstructionDataOffset"     "0x000a7000"     "28.0 kB"        
    "ConvWeightDataOffset"      "0x000ae000"     "28.0 kB"        
    "FCWeightDataOffset"        "0x000b5000"     "76.0 kB"        
    "EndOffset"                 "0x000c8000"     "Total: 800.0 kB"

### Network compilation complete.
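The compile method returns the generated deployment information in dn. If you want a quick look at what was produced, you can list its contents; this sketch assumes dn is returned as a structure:
% Optional: list the fields of the compiled network output.
fieldnames(dn)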
Program Bitstream onto FPGA and Download Network Weights
To deploy the network on the Xilinx ZCU102 SoC hardware, run the deploy method of the dlhdl.Workflow object. This method uses the output of the compile method to program the FPGA board and to download the network weights and biases. The deploy method programs the FPGA device and displays progress messages and the time required to deploy the network.
deploy(hW);
### Programming FPGA Bitstream using Ethernet...
### Attempting to connect to the hardware board at 192.168.1.101...
### Connection successful
### Programming FPGA device on Xilinx SoC hardware board at 192.168.1.101...
### Attempting to connect to the hardware board at 192.168.1.101...
### Connection successful
### Copying FPGA programming files to SD card...
### Setting FPGA bitstream and devicetree for boot...
# Copying Bitstream zcu102_single.bit to /mnt/hdlcoder_rd
# Set Bitstream to hdlcoder_rd/zcu102_single.bit
# Copying Devicetree devicetree_dlhdl.dtb to /mnt/hdlcoder_rd
# Set Devicetree to hdlcoder_rd/devicetree_dlhdl.dtb
# Set up boot for Reference Design: 'AXI-Stream DDR Memory Access : 3-AXIM'
### Programming done. The system will now reboot for persistent changes to take effect.
### Rebooting Xilinx SoC at 192.168.1.101...
### Reboot may take several seconds...
### Attempting to connect to the hardware board at 192.168.1.101...
### Connection successful
### Programming the FPGA bitstream has been completed successfully.
### Loading weights to Conv Processor.
### Conv Weights loaded. Current time is 19-Jun-2024 10:31:25
### Loading weights to FC Processor.
### FC Weights loaded. Current time is 19-Jun-2024 10:31:25
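After deployment, you can optionally confirm that MATLAB can still communicate with the board and that the bitstream is loaded. A minimal check, assuming the Ethernet hTarget object created earlier and a board that is powered on and reachable:
% Optional: validate the connection to the deployed target board.
validateConnection(hTarget);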
Test Network
Load the example image and convert it to a formatted dlarray object.
inputImg = imread('five_28x28.pgm');
inputImg = dlarray(single(inputImg), 'SSCB');
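If you want to see the input before formatting, you can optionally display the raw image; this re-reads the file because inputImg has already been converted to a dlarray:
% Optional: display the raw 28-by-28 test image.
imshow(imread('five_28x28.pgm'),'InitialMagnification','fit');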
Classify the image on the FPGA by using the predict method of the dlhdl.Workflow object and display the results.
[~,speed] = predict(hW,inputImg,'Profile','on');
### Finished writing input activations.
### Running single input activation.


              Deep Learning Processor Profiler Performance Results

                   LastFrameLatency(cycles)   LastFrameLatency(seconds)   FramesNum   Total Latency   Frames/s
                         -------------             -------------          ---------    ---------     ---------
Network                      41754                  0.00019                   1          42563         5168.8
    imageinput_norm           5539                  0.00003 
    conv_1                    6873                  0.00003 
    maxpool_1                 4768                  0.00002 
    conv_2                    4878                  0.00002 
    maxpool_2                 3004                  0.00001 
    conv_3                    7129                  0.00003 
    fc                        9531                  0.00004 
 * The clock frequency of the DL processor is: 220MHz
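The second output of predict is a MATLAB table of profiler results, with one row per layer plus a summary Network row. Before trimming it, you can inspect the row and column names; this is a quick check and not part of the original example:
% Optional: inspect the structure of the profiler results table.
disp(speed.Properties.RowNames');
disp(speed.Properties.VariableNames);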
Identify and Display the Bottleneck Layer
Remove the module- and network-level results from the table: delete the 'Network' row and the NumFrames, Total Latency(cycles), and Frame/s columns so that only the per-layer profiler results remain. After you identify the bottleneck layer, display the bottleneck layer index, running time, and layer information.
speed('Network',:) = [];
speed = removevars(speed, {'NumFrames','Total Latency(cycles)','Frame/s'});
Sort the performance results in descending order.
speed = sortrows(speed,'Latency(cycles)','descend');
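To see which layers dominate the run time, you can optionally display the first few rows of the sorted table:
% Optional: show the three slowest layers after sorting.
head(speed,3)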
The first layer in this sorted table is the bottleneck layer. In this network, the bottleneck layer is the fc layer.
layerSpeed = speed(1,:);
layerName = strip(layerSpeed.Properties.RowNames{1},'_');
for idx = 1:length(net.Layers)
    currLayer = net.Layers(idx);
    if strcmp(currLayer.Name, layerName)
        bottleNeckLayer = currLayer;
        break;
    end
end
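As an alternative to the loop, you can look up the layer index with a vectorized comparison. This sketch assumes that the layer names in the network are unique; the idxAlt and bottleNeckLayerAlt names are used only for illustration:
% Alternative lookup: find the bottleneck layer index by name without a loop.
idxAlt = find(strcmp({net.Layers.Name}, layerName), 1);
bottleNeckLayerAlt = net.Layers(idxAlt);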
Display this information for the bottleneck layer:
Layer index
Percentage of time the layer runs
Layer information
dnnfpga.disp(['Bottleneck layer index is ', num2str(idx), '.']);
### Bottleneck layer index is 13.
percent = layerSpeed.("Latency(cycles)")/sum(speed.("Latency(cycles)")) * 100;
dispStr = sprintf('It accounts for about %0.2f percent of the total running time.', percent);
dnnfpga.disp(dispStr);
### It accounts for about 22.84 percent of the total running time.
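To see how the bottleneck compares with the other layers, you can optionally plot the per-layer latency. This visualization sketch is not part of the original example and uses the trimmed, sorted speed table from above:
% Optional: visualize per-layer latency in clock cycles.
bar(speed.("Latency(cycles)"));
xticks(1:height(speed));
xticklabels(speed.Properties.RowNames);
xlabel('Layer');
ylabel('Latency (cycles)');
title('Per-layer latency on the FPGA');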
dnnfpga.disp('Bottleneck layer information: ');
### Bottleneck layer information:
disp(bottleNeckLayer);
  FullyConnectedLayer with properties:

          Name: 'fc'

   Hyperparameters
     InputSize: 1568
    OutputSize: 10

   Learnable Parameters
       Weights: [10×1568 single]
          Bias: [10×1 single]

  Show all properties
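When you finish profiling, you can optionally release the connection to the board. This sketch assumes the hTarget object created earlier and that the dlhdl.Target release method is available in your installation:
% Optional cleanup: release the connection to the target board.
release(hTarget);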
See Also
dlhdl.Workflow | dlhdl.Target | compile | deploy | predict