Prototype Vision Algorithms on Zynq-Based Hardware
You can use the SoC Blockset™ Support Package for AMD FPGA and SoC Devices to prototype your vision algorithms on Zynq®-based hardware that is connected to real input and output video devices. Use the support package to:
- Capture input or output video from the board and import it into Simulink® for algorithm development and verification.
- Generate and deploy vision IP cores to the FPGA on the board (requires HDL Coder™).
- Generate and deploy C code to the ARM® processor on the board (requires Embedded Coder®). You can route the video data from the FPGA to the ARM processor to develop video processing algorithms that run on the processor.
- View the output of your algorithm on an HDMI device.
Video Capture
You can capture live video from your Zynq device and import it into Simulink. The video source can be an HDMI, MIPI, or USB camera input to the board or an on-chip test pattern generator. You can select the color space and resolution of the input frames. The capture resolution must match that of your input camera.
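While you bring up the camera interface, you can stand in for live capture with a file source and develop your frame-based algorithm against recorded frames. A minimal sketch, assuming the visiontraffic.avi sample file that ships with Computer Vision Toolbox; swap the source for frames captured from the board once the interface is configured:

```matlab
% Stand-in for live capture during algorithm development: read recorded
% frames from a file; each frame plays the same role as a captured frame.
vr = VideoReader('visiontraffic.avi');   % sample file from Computer Vision Toolbox
while hasFrame(vr)
    frame = readFrame(vr);               % one RGB frame, uint8 H-by-W-by-3
    % ... frame-based processing under development goes here ...
end
```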
Once you have video frames in Simulink, you can:
- Design frame-based video processing algorithms that operate on the live data input. Use blocks from the Computer Vision Toolbox™ libraries to quickly develop frame-based, floating-point algorithms.
- Use the Frame To Pixels block from Vision HDL Toolbox™ to convert the input to a pixel stream. Design and verify pixel-streaming algorithms by using other blocks from the Vision HDL Toolbox libraries, as in the sketch after this list.
- Model memory interfaces by using SoC Blockset blocks. See Modeling External Memory.
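You can also exercise the pixel-streaming path entirely in MATLAB before building a Simulink model. A minimal sketch, assuming Vision HDL Toolbox (and Image Processing Toolbox for the sample image); the 240p frame size and the Sobel edge detector are arbitrary choices for illustration:

```matlab
% Convert one grayscale frame to a pixel stream, run a pixel-streaming
% operation on it, then rebuild the frame for display.
frmIn = imresize(imread('rice.png'), [240 320]);  % 240p active frame

frm2pix = visionhdl.FrameToPixels('NumComponents',1,'VideoFormat','240p');
pix2frm = visionhdl.PixelsToFrame('NumComponents',1,'VideoFormat','240p');
edgeDetect = visionhdl.EdgeDetector;              % Sobel method by default

[pixIn, ctrlIn] = frm2pix(frmIn);                 % serialize pixels + control signals
pixOut  = false(size(pixIn));                     % edge flag per pixel
ctrlOut = ctrlIn;                                 % preallocate control stream
for p = 1:numel(pixIn)                            % one pixel per call, as on hardware
    [pixOut(p), ctrlOut(p)] = edgeDetect(pixIn(p), ctrlIn(p));
end

[frmOut, validOut] = pix2frm(pixOut, ctrlOut);    % deserialize back to a frame
if validOut
    imshow(frmOut)
end
```

In a Simulink model, the Frame To Pixels and Pixels To Frame blocks play the same roles at the boundary of the pixel-streaming subsystem.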
Reference Design
The SoC Blockset Support Package for AMD FPGA and SoC Devices provides reference designs for prototyping video algorithms on the Zynq boards.
For example, when you generate an HDL IP core for a pixel-streaming design with HDMI input and output video by using HDL Workflow Advisor, the core is included in this reference design as the FPGA user logic section. Points A and B in the reference design diagram show the options for capturing HDMI video into Simulink.
Note
The reference design on the Zynq device requires the same video resolution and color format for the entire data path. The resolution you select must match that of your camera input. The design you target to the user logic section of the FPGA must not modify the frame size or color space of the video stream.
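During frame-based simulation, you can catch a format mismatch early with a simple guard. A minimal sketch; the 1080p RGB dimensions are an assumption, so substitute the resolution and color format you configured for capture:

```matlab
% Guard: fail fast if a frame does not match the configured data path.
expectedSize = [1080 1920 3];   % assumed 1080p RGB capture format
assert(isequal(size(frame), expectedSize), ...
    'Frame size %s does not match the configured capture resolution.', ...
    mat2str(size(frame)));
```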
The support package also provides reference designs for MIPI and USB camera interfaces and reference designs that support deep learning applications.
Deployment and Generated Models
Running all or part of your pixel-streaming design on the board speeds up simulation of your video processing system and lets you verify its behavior on real hardware. To generate HDL code and deploy your design to the FPGA, you must have HDL Coder and the HDL Coder Support Package for Xilinx® FPGA and SoC Devices, as well as Xilinx Vivado® and the Xilinx SDK.
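You can also run the targeting workflow from a script instead of the HDL Workflow Advisor UI, which is convenient for repeated builds. A hedged sketch, assuming HDL Coder; the model, subsystem, board, and reference design names below are placeholders only, so use the names that the support package registers on your installation (the Workflow Advisor can export an exact script for your design):

```matlab
% Script the IP core generation workflow for the FPGA user logic.
model = 'vision_prototype';                       % hypothetical model name
dut   = [model '/FPGA_user_logic'];               % hypothetical DUT subsystem
load_system(model)
hdlset_param(model, 'Workflow', 'IP Core Generation');
hdlset_param(model, 'SynthesisTool', 'Xilinx Vivado');
hdlset_param(model, 'TargetPlatform', 'ZedBoard');               % placeholder board
hdlset_param(model, 'ReferenceDesign', 'HDMI reference design'); % placeholder name

hWC = hdlcoder.WorkflowConfig('SynthesisTool','Xilinx Vivado', ...
                              'TargetWorkflow','IP Core Generation');
hdlcoder.runWorkflow(dut, hWC);   % generate the IP core and build the bitstream
```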
After FPGA targeting, you can capture the live output frames from the FPGA user logic back into Simulink for further processing and analysis. You can also view the output on an HDMI device connected to your board. Using the generated hardware interface model, you can control the video capture options and read and write AXI-Lite ports on the FPGA user logic from Simulink during simulation.
The FPGA targeting step also generates a software interface model. This model supports software targeting to the Zynq hardware, including external mode, processor-in-the-loop, and full deployment. It provides data path control and an interface to any AXI-Lite ports you defined on your FPGA-targeted subsystem. From this model, you can generate ARM code that drives or responds to the AXI-Lite ports on the FPGA user logic, and deploy that code to run on the board alongside the FPGA user logic. To deploy software to the ARM processor, you must have Embedded Coder and the Embedded Coder Support Package for AMD SoC Devices.
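Once the software interface model exists, the usual Simulink commands drive it. A minimal sketch, assuming Embedded Coder and a board registered with the support package; the model name is hypothetical, so use the interface model that FPGA targeting generated for you:

```matlab
swModel = 'vision_prototype_sw_interface';   % hypothetical generated model name
load_system(swModel)

% Monitor and tune AXI-Lite registers live from Simulink (external mode).
set_param(swModel, 'SimulationMode', 'external');
set_param(swModel, 'SimulationCommand', 'start');  % build, download, and run on the ARM core

% Or generate a standalone executable and deploy it to the board.
slbuild(swModel);
```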
Related Topics
- AMD FPGA and SoC Devices (SoC Blockset)
- Modeling External Memory