Efficient pre-forked server design using NBIO (epoll/kqueue, e.g. via libevent)

Posted 2024-12-17 07:41:39

I am planning on writing a 'comet' server for 'streaming' data to clients. I have enhanced one in the past to take advantage of multi-core CPUs, but now I'm starting from scratch. I am planning to use epoll/kqueue or libevent to power the server.

One of the issues I have been weighing is what server design to use. I have several options available, since I am planning to use a multi-process model to take advantage of all the CPU cores.

  1. Pre-forked multi-process - each process doing its own accept
  2. Pre-forked multi-process with master - the master process accepts and then uses descriptor passing to hand the accepted socket to a worker process
  3. Pre-forked multi-process with different ports - each process listens on a different port on the same system; a load balancer decides which process gets the next connection based on load feedback from the individual daemon processes

Design #2 is the most complicated. Design #3 is simple but involves additional hardware that I will need irrespective of the design, since I'll have this running on several machines and would require a load balancer anyway. Design #1 has the thundering-herd issue; I guess the thundering herd isn't a big deal with 8 processes, but it becomes a big deal when clients constantly connect and disconnect (which should be rare, since this is a comet server).
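For reference, here is a minimal sketch of design #1, assuming Linux, a blocking accept() loop per child, and a hypothetical port 8080; a real comet server would register the listener with epoll/kqueue/libevent instead of blocking:

    /* Design #1 sketch: pre-fork, every child accepts on the shared listener. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NPROCS 8                         /* one worker per core */

    int main(void) {
        int one = 1;
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);         /* hypothetical port */
        if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(lfd, 128) < 0) {
            perror("bind/listen");
            return 1;
        }

        for (int i = 0; i < NPROCS; i++) {
            if (fork() == 0) {               /* child inherits lfd */
                for (;;) {
                    /* Every idle child sleeps in accept(); a new connection
                     * may wake several of them (the thundering herd), but
                     * only one accept() returns the socket. */
                    int cfd = accept(lfd, NULL, NULL);
                    if (cfd < 0)
                        continue;
                    /* ... hand cfd to the event loop and stream data ... */
                    close(cfd);
                }
            }
        }
        while (wait(NULL) > 0)               /* parent only reaps children */
            ;
        return 0;
    }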

As I see it, #2 is complicated and requires 2 additional system calls per accept, due to the descriptor passing between the master and worker processes. Is that overhead better than the thundering-herd problem? If I have 8 processes waking up and executing an accept, am I potentially going to see 8 accept calls if I go with design #1?
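The descriptor passing in design #2 is done with SCM_RIGHTS ancillary data over a UNIX-domain socket. Here is a minimal sketch of that mechanism (the send_fd/recv_fd helper names are my own, not from any library); the sendmsg/recvmsg pair is exactly the two extra system calls per connection mentioned above:

    /* Pass an open file descriptor between processes over a UNIX-domain
     * socket, e.g. one end of socketpair(AF_UNIX, SOCK_STREAM, 0, sv). */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Master side: send 'fd' to the worker listening on 'chan'. */
    int send_fd(int chan, int fd) {
        char byte = 0;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        union {                              /* aligned cmsg buffer, as in cmsg(3) */
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;
        } u;
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
        };
        struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type = SCM_RIGHTS;
        cm->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cm), &fd, sizeof(int));
        return sendmsg(chan, &msg, 0) == 1 ? 0 : -1;   /* extra syscall #1 */
    }

    /* Worker side: receive the descriptor; returns the new fd or -1. */
    int recv_fd(int chan) {
        char byte;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        union {
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;
        } u;
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
        };
        if (recvmsg(chan, &msg, 0) <= 0)               /* extra syscall #2 */
            return -1;
        struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
        if (!cm || cm->cmsg_level != SOL_SOCKET || cm->cmsg_type != SCM_RIGHTS)
            return -1;
        int fd;
        memcpy(&fd, CMSG_DATA(cm), sizeof(int));
        return fd;
    }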

What are the pros and cons of my design choices? What would you recommend?


2 Answers

没企图 2024-12-24 07:41:39

If these were threads rather than processes, I'd go for option 2. For processes, though, the descriptor passing looks expensive to me, so we are left to choose between 1 and 3.

I'd prefer 1 if it is possible to somehow estimate the expected load. Can you set an upper limit on the size of the sleeping herd, that is, on the number of pre-forked processes? How fast do you need to be able to accept new connections?

So if you're going to go the Tom Dunson way and drive the big herd fast over the Red River down to Kansas, you probably need to choose the third way, since the resources are available anyway...

温柔戏命师 2024-12-24 07:41:39

If you aim to make a very large-scale, high-throughput HTTP daemon, none of #1, #2, and #3 is appropriate. If you want scalability, you'd better use a 1-to-m or m-to-n model with multi-threading, the way nginx and lighttpd do.

In fact, if you expect the program to handle fewer than a hundred connections per second, #1, #2, and #3 may not make any visible difference.

However, if you may scale the program up in the future by switching from processes to threads, I would go for #2, since it can be easily integrated into a 1-to-m or m-to-n processing model.
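For a concrete picture of the 1-to-m shape this answer is gesturing at, here is a rough sketch, assuming Linux epoll plus pthreads; the worker_loop/run_1_to_m names and the round-robin handoff are illustrative, not how nginx actually does it:

    /* 1-to-m sketch: one acceptor thread, m worker threads, each worker
     * owning a private epoll instance. Because threads share one file
     * descriptor table, handing over a connection is a plain epoll_ctl;
     * no SCM_RIGHTS descriptor passing is needed. Build with -lpthread. */
    #include <pthread.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>

    #define NWORKERS 8

    struct worker {
        pthread_t tid;
        int epfd;                            /* this worker's own epoll set */
    };

    static void *worker_loop(void *arg) {
        struct worker *w = arg;
        struct epoll_event evs[64];
        for (;;) {
            int n = epoll_wait(w->epfd, evs, 64, -1);
            for (int i = 0; i < n; i++) {
                /* ... read/write the comet stream on evs[i].data.fd ... */
            }
        }
        return NULL;
    }

    /* 'lfd' is an already-listening socket, e.g. set up as in the
     * pre-fork sketch in the question. */
    static void run_1_to_m(int lfd) {
        struct worker ws[NWORKERS];
        for (int i = 0; i < NWORKERS; i++) {
            ws[i].epfd = epoll_create1(0);
            pthread_create(&ws[i].tid, NULL, worker_loop, &ws[i]);
        }
        for (unsigned next = 0;; next = (next + 1) % NWORKERS) {
            int cfd = accept(lfd, NULL, NULL);
            if (cfd < 0)
                continue;
            struct epoll_event ev = { .events = EPOLLIN, .data.fd = cfd };
            epoll_ctl(ws[next].epfd, EPOLL_CTL_ADD, cfd, &ev);
        }
    }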
