Too many open files when running scripts with NodeJS child_process.spawn

Posted 2024-11-08 01:23:54


Scenario:

Using a master script to spawn a variable number of child processes a variable number of times in order to perform load testing against a server.

The master script initially spawns all the children it can (according to its configuration settings), and then, as the child processes exit, new children are spun up if the config requests more runs.
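(A minimal sketch of this spawn-and-replace pattern, written against the modern Node.js child_process API rather than v0.4.7; the worker script name, concurrency, and total-run count are hypothetical placeholders, not the poster's actual config.)

```js
// load-master.js — sketch of the master script described above (modern Node.js).
const { spawn } = require('child_process');

const CONCURRENCY = 10;  // hypothetical: children allowed to run at once
const TOTAL_RUNS = 100;  // hypothetical: total runs requested by the config
let started = 0;
let running = 0;

function spawnChild() {
  if (started >= TOTAL_RUNS) return;
  started += 1;
  running += 1;

  // 'inherit' reuses the parent's stdin/stdout/stderr instead of creating new
  // pipes, so each child does not cost the parent extra file descriptors.
  const child = spawn(process.execPath, ['load-worker.js'], { stdio: 'inherit' });

  child.on('exit', (code) => {
    running -= 1;
    console.log(`child exited (${code}); started ${started}/${TOTAL_RUNS}`);
    spawnChild(); // replace the finished child if the config asks for more runs
  });
}

// Fill the initial pool.
while (running < CONCURRENCY && started < TOTAL_RUNS) {
  spawnChild();
}
```

Keeping the replacement logic inside the 'exit' handler means the pool never exceeds CONCURRENCY children at any one time.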

What I'm seeing is an immediate failure upon attempting to spin up the 83rd child process. 83?

I'm not doing anything to explicitly close the files opened as part of spawning a child, but presumably that's not my code's job but the child_process module's?
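(For context, the per-child descriptor cost is easy to see in modern Node.js: by default spawn() creates three pipes whose parent ends stay open in the master process, and the stdio option, which did not exist in v0.4.x, controls whether those pipes are created at all. A sketch:)

```js
// Sketch (modern Node.js): the per-child descriptor cost of spawn()'s defaults.
const { spawn } = require('child_process');

// Default: the parent holds the pipe ends of the child's stdin/stdout/stderr,
// i.e. roughly three extra descriptors per running child.
const piped = spawn('sleep', ['60']);
console.log(piped.stdin !== null, piped.stdout !== null, piped.stderr !== null); // true true true

// If the child's I/O is not needed, asking spawn() not to create the pipes
// keeps the parent's descriptor count flat regardless of how many children run.
const quiet = spawn('sleep', ['60'], { stdio: 'ignore' });
console.log(quiet.stdin, quiet.stdout, quiet.stderr); // null null null
```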

I'm very curious about the magic number of 82 child processes. It seems to point to either a limitation in Node or some interaction between Node and my system.

Ideally, the answer to this question will fill a gap in my knowledge, or someone can suggest an alternative way to launch script child processes that doesn't run into this issue.

I'm also interested in learning about the status of the Web Worker API that is coming to NodeJS. Anyone know anything about that?
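(For later readers: the thread-style API that eventually shipped in Node.js is the worker_threads module, experimental in v10.5.0 and stable since v12. A minimal sketch of its Web-Worker-style messaging, independent of the v0.4.7 setup described here:)

```js
// worker-demo.js — minimal worker_threads sketch (Node.js v12+).
const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  // Main thread: start a worker from this same file and print its reply.
  const worker = new Worker(__filename);
  worker.on('message', (msg) => {
    console.log('from worker:', msg);
    worker.terminate();
  });
  worker.postMessage('ping');
} else {
  // Worker thread: echo back whatever the main thread sends.
  parentPort.on('message', (msg) => parentPort.postMessage(`pong (${msg})`));
}
```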

The Details:

  • NodeJS v0.4.7
  • Mac OS X v10.6.7
  • ulimit -n = 256
  • magic number of spawned children that will run successfully = 82
    (meaning that > 82 spawned procs will throw the "too many open files"
    error; see the descriptor-counting sketch after this list)
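(A small diagnostic sketch, assuming a modern Node.js on macOS or Linux: it counts the descriptors the current process holds by listing /dev/fd, which can be compared against the ulimit -n ceiling above while children are being spawned.)

```js
// fd-count.js — count this process's open file descriptors via /dev/fd
// (present on both macOS and Linux; the readdir itself briefly uses one fd).
const fs = require('fs');

function openFdCount() {
  return fs.readdirSync('/dev/fd').length;
}

console.log(`open descriptors in this process: ${openFdCount()}`);
```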

Thanks for any help.


Comments (1)

朮生 2024-11-15 01:23:54


My guess is the system is doing exactly what you're telling it to. 82 processes times 3 open files per process (STDIN, STDOUT, STDERR) is 246 descriptors; add the parent's own handles on top of that and bang, you've hit your 256 ulimit with nothing but the standard file descriptors. Run with ulimit -n 512 and I bet you'll be able to run twice as many children.
