How can I fix the route packets take, so that latency stays static?
As I understand it, when a packet is sent to a computer over the IP protocol, it follows a path from SENDER -> ROUTER-a -> ROUTER-b -> ROUTER-x -> DESTINATION. Routing determines what path a packet takes to get to a host.
I'm currently developing a game with multiplayer networking, using UDP. Since the game is realtime, I need to effectively roll back the physics world every time a packet is received, to determine what the client did when they saw it happening. To do this, I must roll back the physics world n seconds, where n is the time it took the packet to travel from client to server. Latency is dynamic (I think, correct me if I'm wrong).
To optimize this process, I'm wondering if it would be possible for a packet to always take a static route, so I can determine a static latency for the client.
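A minimal sketch of the dynamic alternative — estimating one-way delay from a UDP echo round trip and smoothing it. The probe format, the assumption that the server simply echoes probes back, and all names are illustrative, not a settled design:

```python
# Minimal sketch: estimate one-way delay from a UDP echo round trip and smooth it.
# Assumes the server echoes each probe back verbatim (illustrative only).
import socket
import struct
import time

def measure_rtt(sock, server_addr, probe_id):
    """Send a timestamped probe and wait for the echo; return RTT in seconds."""
    sent = time.monotonic()
    sock.sendto(struct.pack("!Id", probe_id, sent), server_addr)
    data, _ = sock.recvfrom(64)                    # blocks until the echo arrives
    echoed_id, echoed_sent = struct.unpack("!Id", data)
    if echoed_id != probe_id:
        return None
    return time.monotonic() - echoed_sent

class OneWayDelayEstimator:
    """Exponentially weighted moving average of RTT / 2 (assumes a symmetric path)."""
    def __init__(self, alpha=0.125):
        self.alpha = alpha
        self.delay = None

    def update(self, rtt):
        sample = rtt / 2.0
        self.delay = sample if self.delay is None else (
            (1.0 - self.alpha) * self.delay + self.alpha * sample)
        return self.delay
```

Each time a command arrives, the server could rewind the simulation by the current smoothed estimate instead of a fixed n; the smoothing keeps one congested probe from swinging the rewind window too far.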
Comments (3)
No, it's not possible. The Internet, fundamentally, doesn't work that way. Each network along the path may route packets however it needs to, based on information that can change from minute to minute.
I think the fundamental assumption embedded in this question is flawed. Packet latency doesn't fluctuate because the routed path changes (granted, this does have an effect when it happens, but it's not a frequent occurrence on the net) but rather due to other factors like congestion and data coalescence.
As far as latency goes, your basic enemy is really packet congestion rather than route switching, and unfortunately there's no way to stabilize either one perfectly, as they both rely on factors beyond your control. Route switching is done by policy, which can just as easily deny any request of yours to establish a fixed route.
Above all, keep the client in the know. If they chose a server with 800ms latency, they should know their game experience will be very poor. Where possible, make the choice automatic to find a low latency server for them to use.
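A minimal sketch of that automatic choice, assuming each candidate server answers a small UDP probe with an echo (the endpoint, probe payload, and sample count are assumptions for illustration):

```python
# Minimal sketch: probe each candidate server and pick the lowest median RTT.
# Assumes every candidate echoes a small UDP packet back (illustrative only).
import socket
import statistics
import time

def probe_rtt(addr, timeout=1.0):
    """Return one RTT sample in seconds, or None if the probe timed out."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        start = time.monotonic()
        try:
            s.sendto(b"ping", addr)
            s.recvfrom(16)
            return time.monotonic() - start
        except socket.timeout:
            return None

def pick_lowest_latency(candidates, samples_per_server=5):
    """Return (addr, median_rtt) for the best candidate, or (None, None) if none answered."""
    best_addr, best_rtt = None, None
    for addr in candidates:
        samples = [r for r in (probe_rtt(addr) for _ in range(samples_per_server))
                   if r is not None]
        if samples:
            rtt = statistics.median(samples)
            if best_rtt is None or rtt < best_rtt:
                best_addr, best_rtt = addr, rtt
    return best_addr, best_rtt
```

Taking the median of a few samples keeps a single congested probe from disqualifying an otherwise good server.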
Cheat Proof Algorithms
This kind of question hints at making a "cheat proof" algorithm by establishing a baseline latency, and saying that any significant fluctuation is cheating. That's not a bad algorithm in theory, but it doesn't work well in the real world, as latency varies WIDELY due to factors beyond our control as programmers.
The only tools really available to you are to ensure the commands being sent aren't wildly outside the game's legal parameters, and to clamp any excessively delayed action to the maximum allowed delay, or drop it entirely. Especially in a time-sensitive game, if the client lags out for 10 seconds, it's unlikely their actions will still be relevant.
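A minimal sketch of that clamp-or-drop rule; the thresholds and the Action type are made up for illustration:

```python
# Minimal sketch of the clamp-or-drop rule: rewind at most MAX_REWIND seconds,
# and discard anything older than DROP_AFTER. Both thresholds are illustrative.
from dataclasses import dataclass

MAX_REWIND = 0.250   # never rewind the simulation more than 250 ms
DROP_AFTER = 2.000   # actions this stale are no longer relevant

@dataclass
class Action:
    client_id: int
    client_time: float   # estimated server-clock time at which the client acted

def rewind_interval(action: Action, server_now: float):
    """Return how far to rewind for this action, or None if it should be dropped."""
    age = server_now - action.client_time
    if age > DROP_AFTER:
        return None
    return min(max(age, 0.0), MAX_REWIND)
```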
Streaming Applications
The reason we use stream buffers is because of the unpredictability of latency. If your latency flux is large, your stream buffer must also be. Inversely, if your latency flux is very small, the stream buffer can be small as well. For applications like VOIP, we like to keep the buffers as short as possible, but there's always a trade off between the size of the buffer relative to the flux, and the probability of distortion caused by buffer underflows.
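A minimal sketch of that trade-off: a fixed-delay playout buffer whose target_delay has to cover the expected latency flux, at the cost of added lag. The interface and the assumption that sender and receiver clocks were aligned at stream start are illustrative:

```python
# Minimal sketch of a fixed-delay playout (jitter) buffer. A packet stamped with
# sender time t is played at t + target_delay on the receiver's clock, so any
# packet whose extra delay exceeds target_delay arrives too late (an underflow).
import heapq
import itertools

class JitterBuffer:
    def __init__(self, target_delay):
        self.target_delay = target_delay   # seconds of deliberate buffering
        self._heap = []                    # (scheduled_play_time, seq, payload)
        self._seq = itertools.count()      # tie-breaker so payloads are never compared

    def push(self, sender_timestamp, payload):
        heapq.heappush(self._heap,
                       (sender_timestamp + self.target_delay, next(self._seq), payload))

    def pop_ready(self, now):
        """Return every payload whose scheduled playback time has passed, in order."""
        ready = []
        while self._heap and self._heap[0][0] <= now:
            ready.append(heapq.heappop(self._heap)[2])
        return ready
```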
Your specific need may be different from either of these two, so your solution may end up being different. Just keep in mind that you're never going to be able to control latency 100% unless you control the network from end to end. (IE: you're not using the internet to transport data)
Routers are lazy. They will forward a packet to the same place they sent previous packets unless their routing tables are updated to indicate that a new route is necessary. When a new route is necessary, it is best to let them send packets to the destination along that different route.
IP includes both strict and loose source routing options, but many sites drop source-routed packets on the floor as a matter of policy. You cannot and should not rely on them working.
Perhaps better for your underlying problem is to check clock synchronization periodically -- asking your clients to run NTP is a good way to get all systems within a second or two of each other. (You should be able to get all hosts much closer than that, but the Wikipedia page warns that the Windows implementation may not do better than 1-2 seconds. That's unfortunate.)
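A minimal sketch of the standard four-timestamp offset/delay calculation NTP itself uses; how the timestamps travel between client and server is left out, and the example numbers are made up:

```python
# Minimal sketch of the classic NTP offset/delay formulas. t1..t4 are:
# client send, server receive, server send, client receive (illustrative only).
def clock_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # how far the client clock lags the server
    delay = (t4 - t1) - (t3 - t2)            # round-trip time, excluding server hold time
    return offset, delay

# Example: 10 ms each way, 1 ms server processing, client clock 5 ms behind the server.
offset, delay = clock_offset_and_delay(t1=0.000, t2=0.015, t3=0.016, t4=0.021)
assert abs(offset - 0.005) < 1e-9 and abs(delay - 0.020) < 1e-9
```

With a reasonable offset estimate in hand, the server can interpret client timestamps directly instead of guessing the one-way delay from RTT alone.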