Multiplayer UDP network strategy, need advice
I'm attempting to create a C++ plugin for a realtime 3D game. Although I believe I have a firm grasp on the theory of UDP, how it works, and what its strengths and weaknesses are, my primary concern is performance, scalability and likely statistics. I am aware that I probably know only a drop in the ocean's worth when it comes to UDP and even TCP.
The question:
Given a certain scenario, how many players would a typical dedicated server be able to cope with at any one time?
Now for the scenario...
Let's imagine we have an MMORPG game where all players can be anywhere in the "game world". Everybody sends and receives data to the same server / server hub, as everybody must be able to see and interact with everybody else when their paths eventually cross. It's a real-time first-person game, so player positions must be kept up to date in a very timely fashion.
Let's say we have 1000 (or even 10000) players online...
Three primary things need to happen here:
Each player streams their data to the game server via UDP, at say 14 sends per second. In a nutshell, this data includes who, where and what each player is. The data being sent has been normalized and optimized for size and speed to encourage minimal bandwidth usage.
The server receives, for example, up to 1000 of these packets (a figure chosen for demonstration purposes) 14 times per second, thus processing 14,000 packets per second. This processing phase typically involves updating the central in-memory data structure, where a player's old x, y, z position data is replaced with his new position and a timestamp. This data structure on the server contains ALL data for ALL players in the ENTIRE game world.
The server (possibly a separate thread, maybe even a separate machine) now needs to broadcast the packets to all the other players, so they can update their screens to show other players on the map. This, too, happens 14 times per second (where 14 might typically be a dynamic figure, changing based on available CPU capacity: a busier CPU means a lower rate, and vice versa).
The important thing is this: for player X, only the data of other players within visual range of his position is dispatched to that respective player. So if player Y is 2 miles away, his data needs to be sent to X, but if player Z is on the other side of the planet, his data is not dispatched to X, in an attempt to save bandwidth. This of course involves a bit more processing, as the data has to be iterated over and filtered using the most effective indexing solution possible.
Now my concern is this: sending a data packet from a client machine, getting it into the server's RAM, doing the tiny bit of processing to update the data, and selectively broadcasting the info to other players all takes time. This means there is a certain threshold of load that a server will be able to handle, which, yes, depends on the effectiveness of my implementation, the speed and abilities of the hardware being used, and of course other external factors like internet speed, traffic and the number of solar flares hitting the earth per second... just kidding.
I'm trying to find out from others, who have gone through this process, what the pitfalls are, and what typical performance I can expect when creating a multiplayer plugin.
I could easily say: "I want to cater for 10000 people playing on the same server at the same time", and you might say: "100 is a more realistic and probable figure, per server."
So I am aware that I might have to come up with a multiple-server / cloud computing hub for dealing with my thousands of requests and dispatches, distributing the processing load over multiple machines. So I might have a few machines dealing only with receiving data, a huge central box, which is like an in-memory database somehow shared by all the receiving and dispatching machines, and then of course a series of dispatching machines.
Obviously, there are technical limitations, and I don't really know what to expect or what they are. Throwing extra CPUs and server boxes at the problem will not necessarily solve it, as more intercommunication between machines will also slow the process down a bit. I suppose that beyond some threshold, adding more CPUs reduces effectiveness and may even reverse productivity.
Could and should I consider P2P (peer-to-peer) for multiplayer?
Am I being realistic saying that I will be able to cater for 2500 players at any one time?
Would it be possible to scale up to 10000 players in a few years time?
I know this question is dreadfully long, so please do accept my sincere apologies.
Comments (5)
The scaling question is entirely legitimate. The focus on UDP, however, is misplaced. It is not going to be the main problem for you.
The reason is that player-player interactions are fundamentally an O(N*N) problem. Server bandwidth, on the other hand, is an O(N) problem. Considering that modern webservers can saturate 1 Gbit Ethernet with HTTP over TCP, the lower overhead of UDP means that you're probably going to be able to saturate 1 Gbit Ethernet with UDP as well, as long as your computations hold up.
Could and should I consider P2P (Peer To Peer) for multiplayer? I don't think that p2p technology is able to handle the real-time aspects of game networking. Also, in the usual p2p networks, you are not connected to thousands of members at once, but you're usually connected to some upstream nodes so it's more a graph than a very flat tree.
Am I being realistic saying that I will be able to cater for 2500 players at any one time? Not on a single server. However, by distributing your users onto multiple servers you can already filter them by geographic region (e.g. by continent or country) within the game world, if it's a very large world. For low latency you would want to keep the servers near the users' real locations anyway - you don't play on European servers if you live in the US, and vice versa.
Would it be possible to scale up to 10000 players in a few years time? There are many ways to optimize how the data is encoded and transmitted: sending only deltas of the game world state, client-side prediction of player movement, broadcasting at the network level, cloud computing on the server side, etc., and there will be more in the next few years. Especially as the gaming industry reaches out to cloud-based computing platforms like OnLive, it becomes apparent that we need more efficient algorithms and infrastructure to cope with those amounts.
The problem with P2P is ultimately the end users' connections. ISPs typically don't give you a lot of upload, in a lot of cases less than 1/10 of your download speed. A lot of users are behind NAT, so you are going to need to set up some form of proxy for clients to initiate connections. You will need to handle user disconnects and packet loss (for the inevitable node on crappy wireless that drops half its packets). And you will need a good way to group clients by ISP/location so they don't have 200 ms+ pings between each other.
IMO it sounds like a disaster waiting to happen. You are probably better off going with a well-known networking library (and a traditional client/server architecture) than trying to invent a square wheel. Only transmit what needs to be updated (notice how most MMOs consist of large static worlds with few dynamic objects).
The scaling issue is one of the most difficult challenges for MMOs, and one that has been partially solved. There are many examples of how to track and update user info.
One point I'll mention, though, is that historically games are a social thing, and as such there is a pattern where the majority of people tend to cluster together in a central or single area. So you really have to design for this worst case.
Some games are really going for a huge epic feeling, and having all the users allowed to group and bunch together is a core design requirement. For this type of game, plan on all the users being in the exact same spot. For other games, you should be able to break them into smaller groups and divide and conquer.
Could and should I consider P2P (Peer To Peer) for multiplayer - no, that opens you up to cheating and all sorts of reliability issues at best. It's a can of worms best left unopened. It might help you out with content distribution however, if that's a concern you have.
Am I being realistic saying that I will be able to cater for 2500 players at any one time? - definitely, but the emphasis is on how you implement it. In the mid 90s, text games like Realms of Despair or Medievia were handling hundreds of players online simultaneously. They didn't send data out to everybody 14 times a second, but they did update those players several times a second. Computing power has increased by a factor of about 250 since then. Food for thought.
Would it be possible to scale up to 10000 players in a few years time? - it's possible to do it now, if you relax your bandwidth requirements so that you're not always sending 14 updates a second, or relax the requirement that everybody is handled by one server. The 'C10K problem' was addressed over 10 years ago. Obviously an FTP server is not a real-time game, but on the other hand its throughput requirements are higher. If you can tolerate a little extra latency in return for higher bandwidth then you're onto a winner.