Build and Run an Executable on NVIDIA Hardware

The GPU Coder™ Support Package for NVIDIA® GPUs uses the GPU Coder product to generate CUDA® code (kernels) from the MATLAB® algorithm. These kernels run on any CUDA-enabled GPU platform. The support package automates the deployment of the generated CUDA code on GPU hardware platforms such as Jetson or DRIVE.

Learning Objectives

In this tutorial, you learn how to:

  • Prepare your MATLAB code for CUDA code generation by using the kernelfun pragma.

  • Connect to the NVIDIA target board.

  • Generate and deploy a CUDA executable on the target board.

  • Run the executable on the board and verify the results.

Tutorial Prerequisites

Target Board Requirements

  • NVIDIA DRIVE or Jetson embedded platform.

  • Ethernet crossover cable to connect the target board and host PC (if the target board cannot be connected to a local network).

  • NVIDIA CUDA toolkit installed on the board.

  • Environment variables on the target for the compilers and libraries. For information on the supported versions of the compilers and libraries and their setup, see Install and Setup Prerequisites for NVIDIA Boards.

Development Host Requirements

  • GPU Coder for code generation. For an overview and tutorials, see the Getting Started with GPU Coder (GPU Coder) page.

  • NVIDIA CUDA toolkit on the host.

  • Environment variables on the host for the compilers and libraries. For information on the supported versions of the compilers and libraries, see Third-party Products (GPU Coder). For setting up the environment variables, see Environment Variables (GPU Coder).

Example: Vector Addition

This tutorial uses a simple vector addition example to demonstrate the build and deployment workflow on NVIDIA GPUs. Create a MATLAB function myAdd.m that acts as the entry point for code generation. Alternatively, use the files in the Getting Started with the GPU Coder Support Package for NVIDIA GPUs example for this tutorial. The easiest way to generate CUDA code for this function is to place the coder.gpu.kernelfun pragma in the function. When GPU Coder encounters the kernelfun pragma, it attempts to parallelize all the computation within the function and then maps it to the GPU.

function out = myAdd(inp1,inp2) %#codegen
coder.gpu.kernelfun();
out = inp1 + inp2;
end
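For reference, the kernel that GPU Coder creates from the kernelfun pragma is conceptually similar to the following hand-written CUDA sketch. The kernel name, launch configuration, and fixed length of 100 are illustrative assumptions, not the code that GPU Coder actually emits.

```cuda
// Illustrative CUDA kernel for element-wise vector addition.
// GPU Coder chooses its own kernel names and launch parameters;
// this sketch only shows the shape of the parallelized computation.
__global__ void myAdd_kernel(const double *inp1, const double *inp2,
                             double *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) {
        out[i] = inp1[i] + inp2[i];
    }
}

// Hypothetical launch for 100 elements: one block of 128 threads,
// with d_inp1, d_inp2, and d_out already allocated on the device.
// myAdd_kernel<<<1, 128>>>(d_inp1, d_inp2, d_out, 100);
```

Each GPU thread computes one output element, which is why element-wise MATLAB operations such as inp1 + inp2 map naturally onto a kernel.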

Create a Live Hardware Connection Object

The support package software uses an SSH connection over TCP/IP to execute commands while building and running the generated CUDA code on the DRIVE or Jetson platforms. Connect the target platform to the same network as the host computer. Alternatively, use an Ethernet crossover cable to connect the board directly to the host computer. Refer to the NVIDIA documentation on how to set up and configure your board.

To communicate with the NVIDIA hardware, you must create a live hardware connection object by using the jetson or drive function. To create a live hardware connection object, provide the host name or IP address, user name, and password of the target board. For example, to create a live object for Jetson hardware:

hwobj = jetson('192.168.1.15','ubuntu','ubuntu');

The software performs a check of the hardware, compiler tools and libraries, and IO server installation, and gathers peripheral information on the target. This information is displayed in the command window.

Checking for CUDA availability on the Target...
Checking for NVCC in the target system path...
Checking for CUDNN library availability on the Target...
Checking for TensorRT library availability on the Target...
Checking for Prerequisite libraries is now complete.
Fetching hardware details...
Fetching hardware details is now complete. Displaying details.
 Board name        : NVIDIA Jetson TX2
 CUDA Version      : 9.0
 cuDNN Version     : 7.0
 TensorRT Version  : 3.0
 Available Webcams : UVC Camera (046d:0809)
 Available GPUs    : NVIDIA Tegra X2

Alternatively, to create a live object for DRIVE hardware:

hwobj = drive('192.168.1.16','nvidia','nvidia');

Note

If there is a connection failure, a diagnostic error message is reported on the MATLAB command window. If the connection has failed, the most likely cause is an incorrect IP address or host name.

Generate CUDA Executable Using GPU Coder

To generate a CUDA executable that can be deployed to an NVIDIA target, create a custom main file (main.cu) and header (main.h) that call the entry-point function in the generated code. The main file passes a vector containing the first 100 natural numbers to the entry-point function and writes the result to the binary file myAdd.bin.

//main.cu
// Include Files
#include "myAdd.h"
#include "main.h"
#include "myAdd_terminate.h"
#include "myAdd_initialize.h"
#include <stdio.h>

// Function Declarations
static void argInit_1x100_real_T(real_T result[100]);
static void main_myAdd();

// Function Definitions
static void argInit_1x100_real_T(real_T result[100])
{
  int32_T idx1;

  // Initialize each element.
  for (idx1 = 0; idx1 < 100; idx1++) {
    result[idx1] = (real_T) idx1;
  }
}

void writeToFile(real_T result[100])
{
    FILE *fid = NULL;
    fid = fopen("myAdd.bin", "wb");
    fwrite(result, sizeof(real_T), 100, fid);
    fclose(fid);
}

static void main_myAdd()
{
  real_T out[100];
  real_T b[100];
  real_T c[100];

  argInit_1x100_real_T(b);
  argInit_1x100_real_T(c);
  
  myAdd(b, c, out);
  writeToFile(out);  // Write the output to a binary file
}

// Main routine
int32_T main(int32_T, const char * const [])
{
  // Initialize the application.
  myAdd_initialize();

  // Invoke the entry-point functions.
  main_myAdd();

  // Terminate the application.
  myAdd_terminate();
  return 0;
}
//main.h
#ifndef MAIN_H
#define MAIN_H

// Include Files
#include <stddef.h>
#include <stdlib.h>
#include "rtwtypes.h"
#include "myAdd_types.h"

// Function Declarations
extern int32_T main(int32_T argc, const char * const argv[]);

#endif

Create a GPU code configuration object for generating an executable. Use the coder.hardware function to create a configuration object for the DRIVE or Jetson platform and assign it to the Hardware property of the code configuration object cfg. Use the BuildDir property to specify the folder in which to perform the remote build process on the target. If the specified build folder does not exist on the target, then the software creates a folder with the given name. If no value is assigned to cfg.Hardware.BuildDir, the remote build process happens in the last specified build folder. If there is no stored build folder value, the build process takes place in the home folder.

cfg = coder.gpuConfig('exe');
cfg.Hardware = coder.hardware('NVIDIA Jetson');
cfg.Hardware.BuildDir = '~/remoteBuildDir';
cfg.CustomSource  = fullfile('main.cu');

To generate CUDA code, use the codegen command and pass the GPU code configuration object along with the sizes of the inputs to the myAdd entry-point function. After the code generation takes place on the host, the generated files are copied over and built on the target.

codegen('-config',cfg,'myAdd','-args',{1:100,1:100});

Run the Executable and Verify the Results

To run the executable on the target hardware, use the runApplication() method of the hardware object. In the MATLAB command window, enter:

pid = runApplication(hwobj,'myAdd');
### Launching the executable on the target...
Executable launched successfully with process ID 26432.
Displaying the simple runtime log for the executable...

Copy the output bin file myAdd.bin to the MATLAB environment on the host and compare the computed results with the results from MATLAB.

outputFile = [hwobj.workspaceDir '/myAdd.bin']
getFile(hwobj,outputFile);

% Simulation result from the MATLAB.
simOut = myAdd(0:99,0:99);

% Read the copied result binary file from target in MATLAB.
fId  = fopen('myAdd.bin','r');
tOut = fread(fId,'double');
diff = simOut - tOut';
fprintf('Maximum deviation between MATLAB Simulation output and GPU coder output on Target is: %f\n', max(diff(:)));
Maximum deviation between MATLAB Simulation output and GPU coder output on Target is: 0.000000
