Event loop vs multi-threaded blocking IO

Published 2024-07-22 07:36:09 · 578 words · 3 views · 0 comments

I was reading a comment about server architecture.

http://news.ycombinator.com/item?id=520077

In this comment, the person says 3 things:

  1. The event loop, time and again, has been shown to truly shine for a high number of low activity connections.
  2. In comparison, a blocking IO model with threads or processes has been shown, time and again, to cut down latency on a per-request basis compared to an event loop.
  3. On a lightly loaded system the difference is indistinguishable. Under load, most event loops choose to slow down, most blocking models choose to shed load.
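The distinction in claim 3 can be illustrated with a minimal sketch (my own illustration, not from the comment): a fixed thread pool behaves like a bounded backlog that rejects excess work, while a typical event loop keeps admitting connections into an unbounded backlog, so each request just waits longer.

```python
import queue

# A fixed-size thread pool behaves like a bounded backlog: once every
# thread is busy, a new request can be rejected outright ("shed load").
pool_backlog = queue.Queue(maxsize=2)

def admit(backlog, request):
    """Try to accept a request; shed it if the backlog is full."""
    try:
        backlog.put_nowait(request)
        return "accepted"
    except queue.Full:
        return "shed"

# An event loop, by contrast, typically admits every connection into an
# unbounded backlog, so under load each request simply waits longer
# (the loop "slows down" rather than refusing work).
loop_backlog = queue.Queue()
```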

Are any of these true?

There is also another article titled "Why Events Are A Bad Idea (for High-concurrency Servers)":

http://www.usenix.org/events/hotos03/tech/vonbehren.html

Comments (2)

只想待在家 2024-07-29 07:36:09

Typically, if the application is expected to handle millions of connections, you can combine the multi-threaded paradigm with the event-based one.

  1. First, spawn N threads, where N == the number of cores/processors on your machine. Each thread will have a list of asynchronous sockets that it's supposed to handle.
  2. Then, for each new connection from the acceptor, "load-balance" the new socket to the thread with the fewest sockets.
  3. Within each thread, use an event-based model for all the sockets, so that each thread can actually handle multiple sockets "simultaneously."

With this approach,

  1. You never spawn a million threads. You just have as many as your system can handle.
  2. You utilize the event-based model on multiple cores as opposed to a single one.
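A minimal sketch of steps 1–3, assuming Python's `selectors` module for the per-thread event loop (the `Worker` class, `pick_worker`, and `serve` are illustrative names of mine, not from the answer; the handler just echoes bytes back):

```python
import os
import selectors
import socket
import threading

class Worker:
    """One event loop per core: a selector multiplexes many sockets."""

    def __init__(self):
        self.selector = selectors.DefaultSelector()
        self.count = 0  # sockets currently assigned to this worker

    def add(self, conn):
        """Register a newly accepted socket with this worker's loop."""
        conn.setblocking(False)
        self.selector.register(conn, selectors.EVENT_READ, self._on_readable)
        self.count += 1

    def _on_readable(self, conn):
        data = conn.recv(4096)
        if data:
            conn.sendall(data)  # toy handler: echo the bytes back
        else:  # client closed the connection
            self.selector.unregister(conn)
            conn.close()
            self.count -= 1

    def run(self):
        while True:
            for key, _events in self.selector.select(timeout=1):
                key.data(key.fileobj)  # invoke the registered callback


def pick_worker(workers):
    """Step 2: route a new connection to the least-loaded worker."""
    return min(workers, key=lambda w: w.count)


def serve(host="127.0.0.1", port=0):
    """Step 1: one worker (and thread) per core; then accept and balance."""
    workers = [Worker() for _ in range(os.cpu_count() or 1)]
    for w in workers:
        threading.Thread(target=w.run, daemon=True).start()
    acceptor = socket.create_server((host, port))
    while True:
        conn, _addr = acceptor.accept()
        pick_worker(workers).add(conn)  # step 2: load-balance
```

Note that `pick_worker` chooses by current socket count, so long-lived connections naturally spread across cores without any coordination between the worker loops.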
紫﹏色ふ单纯 2024-07-29 07:36:09

Not sure what you mean by "low activity", but I believe the major factor is how much you actually need to do to handle each request. Assuming a single-threaded event loop, no other clients get their requests handled while you are handling the current one. If handling each request takes a lot of work ("a lot" meaning significant CPU and/or time), and assuming your machine can actually multitask efficiently (i.e. taking time does not just mean waiting on a shared resource, as on a single-CPU machine), you would get better performance by multitasking. Multitasking could be a multi-threaded blocking model, but it could also be a single-tasking event loop that collects incoming requests, farms them out to a multi-threaded worker pool that handles them in turn (through multitasking), and sends a response back as soon as possible.

I don't believe slow client connections matter that much, as the OS should handle that efficiently outside of your app (assuming you do not block the event loop for multiple round trips with the client that initiated the request), but I haven't tested this myself.
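The loop-plus-worker-pool arrangement described above can be sketched roughly like this, assuming Python's `concurrent.futures` (the `handle` function is a made-up stand-in for the heavy per-request work, and `dispatch` is my name, not the answer's):

```python
from concurrent.futures import ThreadPoolExecutor

def handle(request):
    """Stand-in for CPU- or time-heavy per-request work."""
    return request * 2

def dispatch(requests, max_workers=4):
    """Single-tasking dispatch loop: it only collects requests and farms
    the heavy work out to a thread pool, so it never blocks on any one
    request; responses are gathered as each worker finishes."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {req: pool.submit(handle, req) for req in requests}
        return {req: fut.result() for req, fut in futures.items()}
```

The point of the split is that the dispatch loop itself stays cheap and responsive; only the pool's threads ever block on real work.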
