THIS POST IS A WORK IN PROGRESS AND GETS COMPLETED GRADUALLY! I DON'T HAVE ALL OF THE INFORMATION RIGHT NOW!

In this post, I go through the procedure of adding a Video DMA to your ZYNQ PL in the Vivado environment and then programming it.

Camera: First you need a camera connected to your ZedBoard. For this, there is a very nice reference design that you can use: ZEDBOARD OV7670

The source design is also provided, so you can make sure that your camera is working and that the PL is receiving the data correctly. If you don't have the camera, a nice alternative is the Xilinx Video Test Pattern Generator (TPG). This is what I use for now.

Vivado: Next, create your Vivado project and add instances of the ZYNQ PS, the VDMA engine, and the logic for reading data from the camera (or the Xilinx Video Test Pattern Generator). If you already have logic for receiving data from the camera, you need to update it with an AXI Stream Master interface so that it can later be connected easily to the VDMA engine. If you are wondering how to do that, watch my educational videos on ZYNQ.

Here is a simple block diagram showing the Video DMA connected to the ZYNQ PS. In this block diagram we have the TPG, the VDMA, the PS, and also two AXI Interconnects. Again, if you are wondering what all of these things mean, refer to my educational ZYNQ training videos.

[Block diagram: TPG, VDMA, and ZYNQ PS connected through two AXI Interconnects]

In this block diagram, I have intentionally deleted all of the clock and reset nets so that only the main AXI connections remain and the diagram is easier to understand.

As we can see, the GP0 port of the ZYNQ device is connected to the slave ports of both the AXI_VDMA and the V_TPG modules. Specifically, the GP0 port goes to the processing_system_7_0_axi_periph AXI Interconnect, and from there the two outputs go to the AXI_VDMA and to the V_TPG unit.

Then, as we can see, the video_out port of the V_TPG (an AXI Stream Master) is connected to the S_AXIS_S2MM port of the AXI_VDMA (an AXI Stream Slave). On the other side, on the ZYNQ PS, we have enabled the HP0 port, and the M_AXI_S2MM port of the AXI_VDMA is connected to the HP0 port of the ZYNQ through an AXI Interconnect called axi_mem_intercon. If you would like to know more about the conversion from AXI Stream to AXI memory-mapped, you can refer to my educational videos on ZYNQ.

Software: Now we need to develop drivers and software running on the ZYNQ side to manage the TPG and VDMA units. First we need to allocate memory for the incoming image frames in the DRAM connected to the PS, so we need a call to a memory-allocation routine such as malloc. If you are running everything bare metal (no Linux), then the MMU is not active and the pointer returned by malloc is a physical address. If you are running Linux, then the returned address is virtual and you need to obtain the equivalent physical address.

For this, inside our Linux kernel-level driver, we call dma_alloc_coherent, which returns both the virtual and the physical address of the allocated memory. The virtual address we use inside the driver on the Linux side. The physical address we pass through the GP0 port to the AXI VDMA unit so that it knows to which location in physical DRAM (reached through HP0) it should write the incoming image frames.
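As a minimal sketch of the bare-metal allocation step: the frame geometry below (640x480, 32 bits per pixel, 3 buffers) is my assumption for illustration, not something fixed by the design. The key point is the one in the comment: with no MMU, the malloc pointer is directly usable as the VDMA write address.

```c
#include <stdint.h>
#include <stdlib.h>

/* Frame geometry -- assumptions for illustration; match your TPG settings. */
#define FRAME_W     640
#define FRAME_H     480
#define BYTES_PP    4                               /* e.g. 32-bit RGBx pixels */
#define FRAME_BYTES (FRAME_W * FRAME_H * BYTES_PP)
#define N_FRAMES    3                               /* typical triple buffering */

/* Allocate the frame buffers; returns 0 on success.
   Bare metal: the MMU is off, so each returned pointer IS the physical
   address the VDMA S2MM channel will write to over HP0.
   Under Linux, malloc would return a virtual address that is useless to
   the VDMA; there you allocate inside a kernel driver with
   dma_alloc_coherent() as described above. */
static int alloc_frames(uint8_t *frames[N_FRAMES]) {
    for (int i = 0; i < N_FRAMES; i++) {
        frames[i] = malloc(FRAME_BYTES);
        if (!frames[i])
            return -1;  /* out of memory */
    }
    return 0;
}
```

The addresses stored in `frames[]` are exactly what we will later program into the VDMA's S2MM start-address registers.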

Now we program the VDMA with the obtained physical addresses of the image buffers and the rest of the necessary configuration.

If you are running bare metal, you just perform the required register writes/reads on the VDMA and TPG. Here is the address map of the design shown in the block diagram above:

[Address map: AXI_VDMA at 0x43000000, V_TPG at 0x43c00000]

As we can see, the base address of the AXI_VDMA is 0x43000000 and that of the V_TPG is 0x43c00000. We use these two addresses to access the configuration registers of each of these two units.
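In bare metal, accessing these registers is nothing more than dereferencing the base address plus a register offset as a volatile pointer (the Xilinx BSP wraps the same idea in Xil_Out32/Xil_In32). A small sketch:

```c
#include <stdint.h>

/* Base addresses taken from the Vivado address map above. */
#define VDMA_BASE 0x43000000UL
#define TPG_BASE  0x43c00000UL

/* Write a 32-bit value to (base + byte offset). 'volatile' stops the
   compiler from caching or reordering the register accesses. */
static inline void reg_write(uintptr_t base, uint32_t off, uint32_t val) {
    *(volatile uint32_t *)(base + off) = val;
}

/* Read a 32-bit register back from (base + byte offset). */
static inline uint32_t reg_read(uintptr_t base, uint32_t off) {
    return *(volatile uint32_t *)(base + off);
}
```

On the board you would call, for example, `reg_write(VDMA_BASE, offset, value)` with the offsets from the product guides; the offsets themselves are discussed below, this block only shows the access mechanism.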

For a complete description of the registers of these two units and their roles, refer to the AXI VDMA Product Guide and the AXI TPG Product Guide. (I am not sure whether the links I put here point to the latest versions of these documents; search the Xilinx website for the most recent ones.)

Now, if we are running our software bare metal, we can just access the registers directly at the addresses specified in the documents. If we are running under Linux, we update our driver to first reserve the register regions of these two units and then use the ioremap function to create a mapping between the physical address of each unit (shown above) and a virtual address in the kernel's address space.
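Inside the kernel driver, the mapping call is simply `ioremap(phys, span)`. For quick experiments without a driver, the same physical window can also be mapped from Linux userspace with /dev/mem and mmap; note that this is a different technique from the driver path described above, and the 64 KiB window size here is my assumption. A sketch (needs root, and fails cleanly by returning NULL where /dev/mem is unavailable):

```c
#include <stdint.h>
#include <stddef.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

/* mmap requires a page-aligned file offset, so split an arbitrary
   physical address into its page base and the offset within that page. */
static unsigned long page_base(unsigned long phys, unsigned long pagesz) {
    return phys & ~(pagesz - 1);
}
static unsigned long page_offset(unsigned long phys, unsigned long pagesz) {
    return phys & (pagesz - 1);
}

/* Map one unit's register window (e.g. phys = 0x43000000, span = 0x10000).
   Returns NULL if /dev/mem cannot be opened or mapped (needs root, and a
   kernel without strict /dev/mem restrictions). */
static volatile uint32_t *map_regs(unsigned long phys, size_t span) {
    unsigned long pagesz = (unsigned long)sysconf(_SC_PAGESIZE);
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0)
        return NULL;
    uint8_t *p = mmap(NULL, span + page_offset(phys, pagesz),
                      PROT_READ | PROT_WRITE, MAP_SHARED, fd,
                      (off_t)page_base(phys, pagesz));
    close(fd);  /* the mapping stays valid after close */
    if (p == MAP_FAILED)
        return NULL;
    return (volatile uint32_t *)(p + page_offset(phys, pagesz));
}
```

After a successful `map_regs`, register N at byte offset `off` is simply `regs[off / 4]`, just like in the bare-metal case.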

Now we get to the details of how we configure the important registers of each of these two units: the VDMA and the TPG.
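To give a flavor of where this is heading, here is a hedged sketch of the S2MM (write-to-memory) bring-up sequence for the VDMA. The register offsets are the ones I know from the AXI VDMA Product Guide (PG020); verify them against the version of the core you actually instantiated. The important quirk is that writing VSIZE must come last, because that write starts the channel.

```c
#include <stdint.h>

/* AXI VDMA S2MM channel register offsets (from PG020 -- verify for your core). */
#define S2MM_VDMACR        0x30  /* control: bit0 = run/stop, bit1 = circular mode */
#define S2MM_VSIZE         0xA0  /* number of lines; writing this STARTS the channel */
#define S2MM_HSIZE         0xA4  /* horizontal size of one line, in bytes */
#define S2MM_FRMDLY_STRIDE 0xA8  /* stride between lines in memory, in bytes */
#define S2MM_START_ADDR0   0xAC  /* frame-buffer physical addresses, one per frame */

static inline void reg_write(volatile uint32_t *base, uint32_t off, uint32_t val) {
    base[off / 4] = val;
}

/* Minimal S2MM bring-up: run + circular mode, program the frame-buffer
   physical addresses, then geometry, with VSIZE last to kick things off.
   Here stride == width_bytes, i.e. lines are packed with no padding. */
static void vdma_s2mm_start(volatile uint32_t *base, const uint32_t *frame_phys,
                            int nframes, uint32_t width_bytes, uint32_t height) {
    reg_write(base, S2MM_VDMACR, 0x3);                      /* run + circular */
    for (int i = 0; i < nframes; i++)
        reg_write(base, S2MM_START_ADDR0 + 4u * (uint32_t)i, frame_phys[i]);
    reg_write(base, S2MM_FRMDLY_STRIDE, width_bytes);
    reg_write(base, S2MM_HSIZE, width_bytes);
    reg_write(base, S2MM_VSIZE, height);                    /* starts the transfers */
}
```

On the board, `base` is the mapped (or bare-metal) VDMA base address, and `frame_phys[]` holds the physical buffer addresses obtained earlier; error/status handling via S2MM_VDMASR is omitted from this sketch.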

(That is it for now, I will complete this post later).