How do I access the XDMA BAR0 in Petalinux?
I have a block design and hardware configuration with a Zynq processor running Petalinux. I furthermore have an XDMA IP configured as a memory-mapped endpoint. I have configured BAR0 and BAR2 in the PCI BARs tab.
I am trying to write a simple program/app for Petalinux that sets the correct configuration values in BAR0 for the host to read. I am, however, not sure where BAR0 is located nor how to write to it. How do I find the pointer to BAR0 in Petalinux?
When you export the .xsa file you will have the BAR0 address defined in the register space of your PS. Furthermore, you can decompile the device tree to check that the xilinx-xdma IP has the correct BAR registers:
There you should find something like this (I have omitted some fields for clarity):
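As a rough sketch, a decompiled node (e.g. from `dtc -I dtb -O dts system.dtb`) might look like this; the node name, addresses, and sizes below are placeholders that will differ for your design:

```dts
/* Hypothetical snippet -- only illustrates the shape of the node */
axi_pcie: axi-pcie@50000000 {
	compatible = "xlnx,xdma-host-3.00";
	device_type = "pci";
	reg = <0x50000000 0x1000000>;
	#address-cells = <3>;
	#size-cells = <2>;
	/* <PCI address (3 cells)> <CPU address> <size (2 cells)> */
	ranges = <0x02000000 0x0 0x60000000 0x60000000 0x0 0x10000000>;
};
```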
The important info is in the `ranges` property. This info is parsed at boot time by Linux to know where BAR0 is. We also know that the driver will be `xdma-host-3.00`, which drives the PCI low-level communication (source), checks whether the link is up, and handles the MSI interrupts. From here you have countless possibilities depending on which device you have attached to the PCIe bus. If there is an NVMe disk, for instance, the Xilinx drivers for the NVMe will use the PCIe drivers to talk to the end-point.
However, if you want access to the BAR registers of a special or custom end-point, you can use the `uio_pci_generic` driver, which maps the PCIe resources onto a generic user-space IO device. There is some documentation in the kernel's UIO-HOWTO and in the DPDK library.
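A binding sketch (the module is named `uio_pci_generic` in mainline kernels; the device ID `7024` and the bus address below are placeholders, check yours with `lspci -nn`):

```shell
# Load the generic UIO PCI driver (requires root)
modprobe uio_pci_generic

# Tell it to claim the device. 10ee is the Xilinx vendor ID;
# the device ID depends on your XDMA configuration.
echo "10ee 7024" > /sys/bus/pci/drivers/uio_pci_generic/new_id

# If another driver already claimed the device, unbind it first, e.g.:
# echo 0000:01:00.0 > /sys/bus/pci/drivers/xdma/unbind

# A new uio device should now appear:
ls /sys/class/uio/
```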
The basic idea, however, is that once you have bound the driver to the PCIe device, you'll find a new `uio` device with its resources under `/sys/class/uio/uio<dev_num>/device/resource0`, and if you `mmap` that resource, you'll get a virtual memory address pointing directly into the BAR0 of your PCIe device.
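That last step can be sketched in C. This is a minimal sketch, not a definitive implementation: the `resource0` path and the BAR size `0x1000` in the usage comment are placeholders for your setup, and the helper name `map_bar` is mine.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map a PCI BAR resource file into this process and return a pointer
 * to it, or NULL on failure. `len` must match the (page-aligned) BAR
 * size, e.g. as reported in /sys/.../resource. */
volatile uint32_t *map_bar(const char *path, size_t len)
{
    int fd = open(path, O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open");
        return NULL;
    }
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd); /* the mapping stays valid after close */
    if (p == MAP_FAILED) {
        perror("mmap");
        return NULL;
    }
    return (volatile uint32_t *)p;
}

/* Typical usage on the endpoint (path and size are placeholders):
 *
 *   volatile uint32_t *bar0 =
 *       map_bar("/sys/class/uio/uio0/device/resource0", 0x1000);
 *   if (bar0) {
 *       bar0[0] = 0xdeadbeef;               // value the host will read
 *       printf("reg0 = 0x%08x\n", bar0[0]);
 *   }
 */
```

Note the `volatile` qualifier: the host can change BAR contents behind the program's back, so the compiler must not cache reads.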