Node.js process per match

This is sort of the same question as before:

node.js child processes

I'm asking about whether or not I should use a child process per match for my node.js game.

But I realized that previously I neglected to include some very important details.

The game allows players to manipulate game rules in certain limited ways. However this can still lead to infinite loops / memory leaks / stalls and crashes.

Is 1 process per match a scalable / reasonable idea?

5 Answers

樱桃奶球 2024-12-21 12:43:34

If any single game process can eat up all the memory or CPU, this isn't scalable. If your server is an 8-core machine, eight games could take all of the CPU time, and there's nothing you can do except monitor processes via top and kill them as needed - but that would make for a bumpy server.
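For illustration only - this isn't what the answer prescribes (it talks about watching top) - here is a minimal sketch of the kill-as-needed fallback with one child process per match and a hard time limit. The file name ./match.js and the MATCH_TIMEOUT_MS value are assumptions, not part of the original setup.

    // One child process per match, killed if it exceeds a time budget.
    // './match.js' and MATCH_TIMEOUT_MS are illustrative assumptions.
    const { fork } = require('child_process');

    const MATCH_TIMEOUT_MS = 5 * 60 * 1000; // hard cap per match

    function startMatch(matchId) {
      const child = fork('./match.js', [matchId]);

      // Watchdog: a runaway (infinite loop / stalled) match gets killed.
      const watchdog = setTimeout(() => {
        console.log(`match ${matchId} exceeded its time budget, killing it`);
        child.kill('SIGKILL');
      }, MATCH_TIMEOUT_MS);

      child.on('exit', (code, signal) => {
        clearTimeout(watchdog);
        console.log(`match ${matchId} ended (${signal || code})`);
      });

      return child;
    }

    startMatch('match-1');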

Now, if you manage to prevent this stuff in the first place (sounds like a better idea to me), it is viable. Each process will take upwards of 30 MB of memory, so you'll need a beefy server for every couple hundred matches. Look at http://site.nodester.com for an example; they seem to be running around 600 processes on a single machine. Their software stack is open source too: https://github.com/nodester/nodester

node v0.8 will bring Isolates (shared-nothing threads), which will probably use fewer resources than a child process.

A more "serious" solution to this would be using some kind of virtualization like OpenVZ that will allow you to set resource limits, then just keep a pool of virtual servers available, every game gets it's own. It's not as heavy as it looks, it's ~18mb overhead per server and you can host hundreds per machine, although it's a much more complex setup.

莫相离 2024-12-21 12:43:34

The short answer is no, it won't scale!

The long answer

Let's first look at scalability. I'm taking the definition proposed by Wikipedia: "scalability is the ability of a system, network, or process to handle a growing amount of work in a graceful manner, or its ability to be enlarged to accommodate that growth."

In case one of your processes can eat up as much CPU as the scheduler grants it (more details on the Linux scheduler), your system won't scale in a "graceful manner"! So what you would need for scalability is a setup as proposed by Ricardo Tomasi above, where every match would need its own VM. But that's not graceful, and taking cost into consideration it is no viable solution.

The reason your system won't scale is the algorithm behind it; no architecture can fix that, the algorithm itself needs to be fixed.

Your options to fix the algorithm

  • Try to use some blocking mechanism in the game loop
  • Counters to detect infinite loops
  • Build an event queue that has limited slots so that adding additional events will throw exceptions
  • Use a time-slot algorithm for your game loop, e.g. every match can consume 1/count(matches) of one node.js process's time (but avoid building your own scheduler)

And even when your algorithm is fixed, spawning a process for each match will eat up several MB of your limited RAM, which is not graceful in the sense of scalability.
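A minimal sketch of two of the options above - an iteration budget to catch runaway player rules, and a bounded event queue. The names MAX_STEPS, MAX_EVENTS and BoundedQueue are assumptions for illustration, not something the answer specifies.

    // Counter-based guard and bounded event queue, as sketched above.
    const MAX_STEPS = 10000;   // budget per tick for a player-defined rule
    const MAX_EVENTS = 1000;   // cap on queued events per match

    class BoundedQueue {
      constructor(limit) {
        this.limit = limit;
        this.items = [];
      }
      push(event) {
        if (this.items.length >= this.limit) {
          throw new Error('Event queue full - a rule is probably misbehaving');
        }
        this.items.push(event);
      }
      shift() {
        return this.items.shift();
      }
    }

    // Run one player rule step by step under a step budget, so the host loop
    // (not the player's code) decides when to abort.
    function runRuleWithBudget(stepFn, state) {
      for (let steps = 0; steps < MAX_STEPS; steps++) {
        if (stepFn(state) === false) return; // rule finished normally
      }
      throw new Error('Step budget exceeded - aborting this rule');
    }

    const events = new BoundedQueue(MAX_EVENTS);
    events.push({ type: 'player-joined' });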

爱冒险 2024-12-21 12:43:34

I think that the actor model should definitely provide scalability:

  • Use a process pool with a load balancing mechanism
  • Use ZeroMQ or something similar to exchange messages
    • you will need a few communication channels here
    • use request/response for handshaking and control
    • you can use multicasting for the main channel
    • you can also use publish/subscribe if you have a specific use for it

You'll probably need a master process to do balancing and control all of the processes, keep track of events, etc. Using the actor model you can scale out across a network of machines, and in fact you can even have peer-to-peer clusters if you desire to do so, though you might need to use RSA keys for authentication. I'd recommend starting with a single master that just waits for processes to connect, then implementing the worker skeleton and seeing how to implement the control side of things. Stick to a master with 2 workers for starters; it's simpler to debug.
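Not the author's ZeroMQ setup - just a minimal sketch of the master/worker split described above, using Node's built-in child_process IPC so it stays self-contained. The worker.js file name and the message shapes are assumptions.

    // Master: fork two workers and hand out matches round-robin.
    const { fork } = require('child_process');

    const WORKER_COUNT = 2; // "master with 2 workers for starters"
    const workers = [];

    for (let i = 0; i < WORKER_COUNT; i++) {
      const worker = fork('./worker.js'); // hypothetical worker entry point
      worker.on('message', (msg) => {
        if (msg.type === 'match-finished') {
          console.log(`worker ${i} finished match ${msg.matchId}`);
        }
      });
      workers.push(worker);
    }

    // Naive load balancing: assign each new match to the next worker in turn.
    let next = 0;
    function assignMatch(matchId, rules) {
      workers[next].send({ type: 'start-match', matchId, rules });
      next = (next + 1) % workers.length;
    }

    assignMatch('match-1', { maxPlayers: 4 });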

For the web front end you can also use a proxy, such as Nginx, which will call the master, and the master will tell it where to direct the new client. I suppose you would need to implement a UI module and then use it from within the worker.
I mean that the master will present no UI, and your workers will listen on different ports, though Nginx will hide that from the user (they will see no ports in the URL bar), and you could also implement a RESTful API on top of that.

善良天后 2024-12-21 12:43:34

This is a hard question to answer, because we don't know what your game does...

If it crashes, all games will crash, so I assume having multiple processes is a good idea.

But I don't see any other good reason why you should have multiple processes (maybe a ton of blocking operations like huge DB transactions, processing huge files, etc.).

Like @Samyak Bhuta said, you could use forever or cluster to restart your process. We are using monit for this.
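forever and monit, mentioned above, handle restarts at the process-manager level; as a rough in-code sketch of the same idea with the cluster module (the worker count and messages here are illustrative):

    // Respawn a worker whenever one crashes; a crash only takes down that worker.
    const cluster = require('cluster');
    const os = require('os');
    const http = require('http');

    if (cluster.isMaster) {
      for (let i = 0; i < os.cpus().length; i++) {
        cluster.fork();
      }
      cluster.on('exit', (worker, code, signal) => {
        console.log(`worker ${worker.process.pid} died (${signal || code}), respawning`);
        cluster.fork();
      });
    } else {
      // Worker: the game server would live here.
      http.createServer((req, res) => {
        res.end('game server placeholder\n');
      }).listen(8000);
    }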

走过海棠暮 2024-12-21 12:43:34

There are multiple things to discuss here.

How many players can connect to a match?

What kind of database are you using? Does it have fast writes?

Restarting a process is a last-resort solution, because if you have a game where everything should happen fast and you restart the process, it will take a couple of seconds before the players can reconnect to it.

I don't think one process per match is scalable - what happens when you have 50,000 matches at the same time, for example? I would say that a better solution would be to group matches onto child processes by two criteria:
a) by the match id (some kind of sharding algorithm)
b) if more and more players come popping up, spin up another process (or even more), based on the number of players.
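A minimal sketch of criterion a) above - sharding matches across a fixed pool of child processes by match id. The pool size, the ./match-worker.js file name and the hash function are assumptions for illustration.

    // Route each match to a worker chosen by hashing its id.
    const { fork } = require('child_process');

    const POOL_SIZE = 4;
    const pool = [];
    for (let i = 0; i < POOL_SIZE; i++) {
      pool.push(fork('./match-worker.js')); // hypothetical worker script
    }

    // Simple string hash so non-numeric match ids still map to a worker.
    function hash(id) {
      let h = 0;
      for (const ch of String(id)) {
        h = (h * 31 + ch.charCodeAt(0)) >>> 0;
      }
      return h;
    }

    function workerForMatch(matchId) {
      return pool[hash(matchId) % POOL_SIZE];
    }

    workerForMatch('match-42').send({ type: 'start-match', matchId: 'match-42' });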

It's really hard to decide what to do before running some tests on your game. You should really test it to see how it behaves with several real matches (how much CPU and memory it "eats") and do some analysis based on the data. Nobody can say exactly what to do in these kinds of situations, because it depends on the project.
