How do I implement RFC 3393 (IPDV, packet delay variation) in C?

Asked 2024-07-13 13:02:48

I am building an Ethernet application in which I will be sending packets from one side and receiving them on the other side. I want to calculate the delay of packets at the receiver side as in RFC 3393. So I have to put a timestamp in each packet at the sender side and then take a timestamp at the receiver side as soon as I receive the packet. Subtracting the two values gives me the difference in timestamps, and subtracting that difference from the subsequent one gives me the one-way IPDV. The two clocks are not synchronized.
Any help is greatly appreciated.
Thank you.
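
For reference, a minimal sketch of the kind of sender-side timestamping being described (the `probe_packet` layout and the POSIX `clock_gettime`-based millisecond helper are illustrative assumptions, not something specified in the question):

```c
#include <stdint.h>
#include <time.h>

/* Hypothetical probe payload: a sequence number plus the send timestamp
 * in milliseconds since the epoch, sampled from the sender's own clock. */
struct probe_packet {
    uint32_t seq;
    int64_t  send_ts_ms;
};

/* Millisecond wall-clock timestamp from this machine's (unsynchronised) clock. */
static int64_t now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

/* Fill the probe just before handing it to whatever send path is used
 * (raw Ethernet socket, UDP, ...), so the timestamp is taken as late as possible. */
static void fill_probe(struct probe_packet *p, uint32_t seq)
{
    p->seq = seq;
    p->send_ts_ms = now_ms();
}
```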

Comments (1)

罪歌 2024-07-20 13:02:48

RFC 3393 is for measuring the variance in the packet delay, not for measuring the delay itself.

To give an example: you're writing a video streaming application. You want to buffer as little video data as possible (so that the video starts playing as soon as possible). Let's say that data always always always takes 20ms to get from machine A to machine B. In this case (and assuming that machine A can send the video data as fast as it needs playing), you don't need any buffer at all. As soon as you receive the first frame, you can start playing, safe in the knowledge that by the time the next frame is needed, it will have arrived (because the data always takes exactly 20ms to arrive and machine A is sending at least as fast as you're playing).

This works no matter how long that 20ms is, as long as it's always the same. It could be 1000ms - the first frame takes 1000ms to arrive, but you can still start playing as soon as it arrives, because the next frame will also take 1000ms and was sent right behind the first frame - in other words, it's already on its way and will be here momentarily. Obviously the real world isn't like this.

Take the other extreme: most of the time, data arrives in 20ms. Except sometimes, when it takes 5000ms. If you keep no buffer and the delay on frames 1 through 50 is 20ms, then you get to play the first 50 frames without a problem. Then frame 51 takes 5000ms to arrive and you're left without any video data for 5000ms. The user goes and visits another site for their cute cat videos. What you really needed was a buffer of 5000ms of data - then you'd have been fine.

Long example, short point: you're not interested in what the absolute delay on the packets is, you're interested in what the variance in that delay is - that's how big your buffer has to be.

To measure the absolute delay, you'd have to have the clocks on both machines be synchronised. Machine A would send a packet with timestamp 1233784922728 and when that arrived at machine B at time 1233784922748, you'd know the packet had taken 20ms to get there.
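
In code that absolute delay is just a subtraction of the two timestamps; the sketch below assumes millisecond timestamps like the hypothetical `now_ms()` helper above, and only yields a meaningful number when the clocks really are synchronised:

```c
#include <stdint.h>

/* Raw one-way delay in ms: receiver's arrival time minus the sender's
 * timestamp carried in the packet. With synchronised clocks this is the
 * actual transit time; with unsynchronised clocks it also contains the
 * unknown clock offset, so the value on its own is meaningless. */
static int64_t raw_one_way_delay_ms(int64_t send_ts_ms, int64_t recv_ts_ms)
{
    return recv_ts_ms - send_ts_ms;   /* e.g. 1233784922748 - 1233784922728 = 20 */
}
```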

But since you're interested in the variance, you need (as RFC 3393 describes) several packets from machine A. Machine A sends packet 1 with timestamp 1233784922728, then 10ms later sends packet 2 with timestamp 1233784922738, then 10ms later sends packet 3 with timestamp 1233784922748.

Machine B receives packet 1 at what it thinks is timestamp 1233784922128. The one-way delay between machine A and machine B has in this case (from machine B's perspective) been -600ms. This is obviously complete rubbish, but we don't care. Machine B receives packet 2 at what it thinks is timestamp 1233784922158. The one-way delay has been -580ms. Machine B receives packet 3 at what it thinks is timestamp 1233784922168. The one-way delay was again -580ms.
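
The IPDV of RFC 3393 is just the difference between two consecutive one-way delays, so on the receiver you only need to remember the previous packet's delay. A hedged sketch of what that could look like in C (the state struct and function names are assumptions, not from the RFC):

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-flow state kept on the receiver between packets. */
struct ipdv_state {
    bool    have_prev;
    int64_t prev_delay_ms;   /* (recv_ts - send_ts) of the previous packet */
};

/* Call for every received probe. Returns true and writes the IPDV
 * (delay_i - delay_{i-1}, as in RFC 3393) once a previous packet exists.
 * The constant clock offset between the machines cancels in the subtraction,
 * which is why the raw delays are allowed to be nonsense like -600ms. */
static bool ipdv_update(struct ipdv_state *st, int64_t send_ts_ms,
                        int64_t recv_ts_ms, int64_t *ipdv_ms)
{
    int64_t delay_ms = recv_ts_ms - send_ts_ms;
    bool have_result = false;

    if (st->have_prev) {
        *ipdv_ms = delay_ms - st->prev_delay_ms;   /* e.g. -580 - (-600) = 20 */
        have_result = true;
    }
    st->prev_delay_ms = delay_ms;
    st->have_prev = true;
    return have_result;
}
```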

As above, we don't care what the absolute delay is - so we don't even care if it's negative, or three hours, or whatever. What we care about is that the amount of delay varied by 20ms. So you need a buffer of 20ms of data.
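
If what you ultimately want is the buffer size, one simple approach (an illustrative choice, not something prescribed by RFC 3393) is to track the minimum and maximum observed one-way delay and use their difference:

```c
#include <stdint.h>

/* Running min/max of the (clock-offset-polluted) one-way delays.
 * Their difference is the total delay variation seen so far, which is
 * a reasonable first estimate of how much data to buffer. */
struct delay_span {
    int64_t min_delay_ms;
    int64_t max_delay_ms;
    int     count;
};

static void span_update(struct delay_span *s, int64_t delay_ms)
{
    if (s->count == 0 || delay_ms < s->min_delay_ms) s->min_delay_ms = delay_ms;
    if (s->count == 0 || delay_ms > s->max_delay_ms) s->max_delay_ms = delay_ms;
    s->count++;
}

static int64_t span_buffer_ms(const struct delay_span *s)
{
    /* e.g. -580 - (-600) = 20ms of buffer for the example above */
    return (s->count > 0) ? s->max_delay_ms - s->min_delay_ms : 0;
}
```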

Note that I'm entirely glossing over the issue of clock drift here (that is, the clocks on machines A and B running at slightly different rates, so that for example machine A's time advances at a rate of 1.00001 seconds for every second that actually passed). While this does introduce inaccuracy in the measurements, its practical effect isn't likely to be an issue in most applications.
