Adding packet latency using DPDK
I want to simulate latency in packets I send using DPDK.
Initially I added usleep(10), and it worked, but I later realized that sleeping might hinder the performance of my traffic generator:
usleep(10);
rte_eth_tx_burst(m_repid, queue_id, tx_pkts, nb_pkts);
So, I tried using a polling mechanism. Something like this:
inline void add_latency(float lat) {
    //usleep(lat);
    // Busy-wait: `lat` is in milliseconds, now_sec() returns seconds.
    float start = now_sec();
    float elapsed;
    do {
        elapsed = now_sec() - start;
    } while (elapsed < (lat / 1000));
}
But the packets are not getting sent; rte_eth_tx_burst() reports tx_pkts: 0.
What am I doing wrong?
EDIT:
DPDK version: DPDK 22.03
Firmware:
# dmidecode -s bios-version
2.0.19
NIC:
0000:01:00.0 'I350 Gigabit Network Connection' if=em1 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic *Active*
0000:01:00.3 'I350 Gigabit Network Connection' if=em4 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
1 Answer
As per DPDK 22.03, neither the Intel i350 NIC nor the Mellanox MT27800 supports HW offload for delayed packet transmission. Delayed packet transmission is a hardware feature which allows transmission of a packet at a defined future timestamp. For example, if one needs to send a packet 10 microseconds from the time of DMA to the NIC buffer, the TX descriptor can be updated with the 10us as the TX timestamp.

A similar (approximate) behaviour can be achieved by enabling TX timestamping on the HW, that is, by reporting back the timestamp in the transmit descriptor. The timestamp captured will be the time at which the first byte of the packet is sent out on the wire. With an approximation of the time required to DMA the packet from DPDK main memory to NIC SRAM, one can achieve the delayed packet transmit.
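As a rough illustration, here is a minimal sketch of the read-back approach using DPDK's IEEE 1588 timesync API (assumptions: rte_eth_timesync_enable() has been called on the started port, and the PMD latches TX timestamps for the marked frames, as the igb driver does for PTP traffic):

#include <time.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_pause.h>

/* Send one packet and read back the HW TX timestamp of its first byte
 * on the wire. Sketch only: production code should bound the poll loop. */
static int
tx_with_hw_timestamp(uint16_t port_id, uint16_t queue_id,
                     struct rte_mbuf *pkt, struct timespec *ts)
{
    pkt->ol_flags |= RTE_MBUF_F_TX_IEEE1588_TMST; /* request HW timestamp */

    if (rte_eth_tx_burst(port_id, queue_id, &pkt, 1) != 1)
        return -1;

    /* Poll until the NIC reports the wire time of the first byte. */
    while (rte_eth_timesync_read_tx_timestamp(port_id, ts) < 0)
        rte_pause();

    return 0;
}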
But there are certain caveats for the same.

Note: on ConnectX-7, using the PMD arg tx_pp, the capability to schedule traffic directly on a timestamp specified in the descriptor is provided.
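A minimal sketch of how that send-on-timestamp path is wired up in DPDK 22.03 (assumptions: the port was started with the mlx5 tx_pp devarg, e.g. -a 0000:03:00.0,tx_pp=500, with RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP enabled in the port config; consult the mlx5 guide for the exact clock domain):

#include <rte_ethdev.h>
#include <rte_mbuf_dyn.h>
#include <rte_bitops.h>

static int ts_offset = -1; /* dynfield offset holding the TX timestamp */
static uint64_t ts_flag;   /* dynflag telling the PMD to schedule the mbuf */

/* Look up the field/flag the PMD registers when packet pacing is enabled. */
static int
tx_sched_init(void)
{
    int off = rte_mbuf_dynfield_lookup(RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
    int bit = rte_mbuf_dynflag_lookup(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL);

    if (off < 0 || bit < 0)
        return -1; /* send-on-timestamp not available on this port */
    ts_offset = off;
    ts_flag = RTE_BIT64(bit);
    return 0;
}

/* Mark a packet to leave the NIC at `when`, expressed in the device
 * clock domain (compare against rte_eth_read_clock()). */
static void
tx_sched_set(struct rte_mbuf *m, uint64_t when)
{
    *RTE_MBUF_DYNFIELD(m, ts_offset, uint64_t *) = when;
    m->ol_flags |= ts_flag;
}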
Since the question ("simulate latency in packets I send using DPDK") does not clarify the packet size or the inter-frame gap delay, the assumption made here is 64B frames on the wire with the fixed default IFG.

Suggestions:
Option-1: if the traffic is 64B, the best approach is to create an array of pause packets for the TX burst, then select time intervals based on a HW or SW timestamp at which to swap an array index with the actual packet intended to be sent.
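A sketch of Option-1 under the 64B assumption; make_pause_pkt() is a hypothetical helper returning a pre-built filler frame, and the departure deadline is taken from the TSC as the SW timestamp:

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_cycles.h>
#include <rte_pause.h>

/* Fill a caller-provided TX burst with filler ("pause") frames, then swap
 * the real packet into slot 0 once its departure time is reached. */
static uint16_t
tx_with_gap(uint16_t port, uint16_t queue, struct rte_mbuf *real_pkt,
            struct rte_mbuf **burst, uint16_t burst_sz, uint64_t departure_tsc)
{
    uint16_t i;

    for (i = 0; i < burst_sz; i++)
        burst[i] = make_pause_pkt();      /* hypothetical: keeps the wire busy */

    while (rte_get_tsc_cycles() < departure_tsc)
        rte_pause();                      /* wait for the SW-timestamp deadline */

    rte_pktmbuf_free(burst[0]);           /* swap the real packet in */
    burst[0] = real_pkt;

    return rte_eth_tx_burst(port, queue, burst, burst_sz);
}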
Option-2: allow SyncE packets to synchronize the timestamps between server and client. Using out-of-band information, do a dynamic sleep (with the approximate cost for DMA and wire transfer) to skew towards the desired result.
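A sketch of Option-2's dynamic sleep; estimate_overhead_cycles() is a hypothetical calibration helper standing in for the approximate DMA plus wire-transfer cost (it could, for instance, be derived from the HW TX timestamps above):

#include <rte_cycles.h>
#include <rte_pause.h>

/* Busy-wait until `latency_us` has elapsed, discounting the calibrated
 * fixed overhead so the wire-level gap lands near the requested value. */
static void
dynamic_sleep(double latency_us)
{
    uint64_t want = (uint64_t)(latency_us * rte_get_tsc_hz() / 1E6);
    uint64_t skew = estimate_overhead_cycles(); /* hypothetical helper */
    uint64_t deadline = rte_get_tsc_cycles() +
                        (want > skew ? want - skew : 0);

    while (rte_get_tsc_cycles() < deadline)
        rte_pause();
}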
Please note: if the intention is to check the latency of a DUT, the whole approach as specified in the code snippet is not correct. Refer to the DPDK SyncE example or DPDK pktgen latency measurement for more clarity.