You can use the Computer Vision Toolbox™ Support Package for Xilinx® Zynq®-Based Hardware to prototype your vision algorithms on Zynq-based hardware that is connected to real input and output video devices. Use the support package to:
Capture input or output video from the board and import it into Simulink® for algorithm development and verification.
Generate and deploy vision IP cores to the FPGA on the board. (requires HDL Coder™)
Generate and deploy C code to the ARM® processor on the board. You can route the video data from the FPGA into the ARM processor to develop video processing algorithms that run on the processor. (requires Embedded Coder®)
View the output of your algorithm on an HDMI device.
Using this support package, you can capture live video from your Zynq device and import it into Simulink. The video source can be an HDMI video input to the board, an on-chip test pattern generator included with the reference design, or the output of your custom algorithm on the board. You can select the color space and resolution of the input frames. The capture resolution must match that of your input camera.
Once you have video frames in Simulink, you can:
Design frame-based video processing algorithms that operate on the live data input. Use blocks from the Computer Vision Toolbox libraries to quickly develop frame-based, floating-point algorithms.
Use the Frame To Pixels block from Vision HDL Toolbox™ to convert the input to a pixel stream. Design and verify pixel-streaming algorithms using other blocks from the Vision HDL Toolbox libraries.
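To make the frame-to-pixel conversion concrete, the sketch below serializes a 2-D frame into a raster-order pixel stream with the same control flags that the Frame To Pixels block's pixelcontrol bus carries (hStart, hEnd, vStart, vEnd, valid). This is a simplified illustration, not the block's implementation: the real block also emits non-valid cycles for horizontal and vertical blanking, which are omitted here for brevity.

```python
from dataclasses import dataclass

@dataclass
class PixelControl:
    hStart: bool  # first valid pixel of a line
    hEnd: bool    # last valid pixel of a line
    vStart: bool  # first valid pixel of the frame
    vEnd: bool    # last valid pixel of the frame
    valid: bool   # this cycle carries image data

def frame_to_pixels(frame):
    """Serialize a 2-D frame into raster-order (pixel, control) pairs,
    mimicking the control signals of the Frame To Pixels block.
    Blanking intervals are omitted in this simplified sketch."""
    rows, cols = len(frame), len(frame[0])
    for r in range(rows):
        for c in range(cols):
            yield frame[r][c], PixelControl(
                hStart=(c == 0),
                hEnd=(c == cols - 1),
                vStart=(r == 0 and c == 0),
                vEnd=(r == rows - 1 and c == cols - 1),
                valid=True,
            )

# Example: stream a 2-by-3 frame and inspect the control flags.
frame = [[10, 20, 30],
         [40, 50, 60]]
stream = list(frame_to_pixels(frame))
```

A pixel-streaming algorithm then operates on one (pixel, control) pair per cycle instead of a whole frame, which is what maps naturally onto FPGA hardware.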
The Computer Vision Toolbox Support Package for Xilinx Zynq-Based Hardware provides a reference design for prototyping video algorithms on the Zynq boards.
When you generate an HDL IP core for your pixel-streaming design using HDL Workflow Advisor, the core is included in this reference design as the FPGA user logic section. The marked points in the diagram show the options for capturing video into Simulink.
The FPGA user logic can also contain an optional interface to external frame buffer memory, which is not shown in the diagram.
The reference design on the Zynq device requires the same video resolution and color format for the entire data path. The resolution you select must match that of your camera input. The design you target to the user logic section of the FPGA must not modify the frame size or color space of the video stream.
The reference design does not support multipixel streaming.
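Because the user logic must preserve frame size and color format, it can help to sanity-check a frame-based prototype against that constraint before targeting. The helper below is a hypothetical illustration (not part of the support package) that applies an algorithm to a frame and verifies the dimensions are unchanged.

```python
def check_preserves_size(algorithm, frame):
    """Apply a frame-based algorithm and verify it keeps the frame
    dimensions, as the reference design's user logic section requires.
    This checker is illustrative only, not a support package API."""
    out = algorithm(frame)
    in_shape = (len(frame), len(frame[0]))
    out_shape = (len(out), len(out[0]))
    assert in_shape == out_shape, "frame size changed: %r -> %r" % (in_shape, out_shape)
    return out

# Example: inverting pixel values preserves the frame size,
# whereas a 2x downsample would fail the check.
invert = lambda f: [[255 - p for p in row] for row in f]
frame = [[0, 128], [64, 255]]
out = check_preserves_size(invert, frame)
```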
By running all or part of your pixel-streaming design on the hardware, you speed up simulation of your video processing system and can verify its behavior on real hardware. To generate HDL code and deploy your design to the FPGA, you must have HDL Coder and the HDL Coder Support Package for Xilinx Zynq Platform, as well as Xilinx Vivado® and the Xilinx SDK.
After FPGA targeting, you can capture the live output frames from the FPGA user logic back to Simulink for further processing and analysis. You can also view the output on an HDMI device connected to your board. Using the generated hardware interface model, you can control the video capture options and read and write AXI-Lite ports on the FPGA user logic from Simulink during simulation.
The FPGA targeting step also generates a software interface model. This model supports software targeting to the Zynq hardware, including external mode, processor-in-the-loop, and full deployment. It provides data path control and an interface to any AXI-Lite ports you defined on your FPGA-targeted subsystem. From this model, you can generate ARM code that drives or responds to the AXI-Lite ports on the FPGA user logic. You can then deploy the code on the board to run along with the FPGA user logic. To deploy software to the ARM processor, you must have Embedded Coder and the Embedded Coder Support Package for Xilinx Zynq Platform.
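Conceptually, each AXI-Lite port on the FPGA user logic appears to the ARM processor as a 32-bit register at an offset from the IP core's base address. The sketch below shows that register-access pattern in Python; the base address, offset, and register name are hypothetical, and a bytearray stands in for the memory-mapped register space so the example runs anywhere. On the board, the generated software interface code performs equivalent reads and writes through the actual mapped addresses.

```python
import struct

def write_reg(region, offset, value):
    """Write a 32-bit little-endian value at a register offset."""
    struct.pack_into('<I', region, offset, value)

def read_reg(region, offset):
    """Read a 32-bit little-endian value at a register offset."""
    return struct.unpack_from('<I', region, offset)[0]

# On the Zynq ARM processor, the register space would be the IP
# core's physical address range mapped into the process, e.g.
# (base address below is hypothetical, for illustration only):
#   fd = os.open('/dev/mem', os.O_RDWR | os.O_SYNC)
#   region = mmap.mmap(fd, 0x1000, offset=0x43C00000)
# Here a bytearray stands in for the mapped register space:
region = bytearray(0x1000)

THRESHOLD_OFFSET = 0x100  # hypothetical AXI-Lite port offset
write_reg(region, THRESHOLD_OFFSET, 128)
print(read_reg(region, THRESHOLD_OFFSET))  # -> 128
```

An algorithm parameter exposed this way (a threshold, a gain, a mode select) can then be tuned from the ARM software while the pixel pipeline runs in the FPGA fabric.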