Why are event-based network applications inherently faster than threaded applications?

Posted 2024-07-09 02:45:43

We've all read the benchmarks and know the facts - event-based asynchronous network servers are faster than their threaded counterparts. Think lighttpd or Zeus vs. Apache or IIS. Why is that?


4 Answers

窗影残 2024-07-16 02:45:43


I think event-based vs. thread-based is not the real question; it is a non-blocking, multiplexed I/O solution (selectable sockets) vs. a thread-pool solution.

In the first case you handle all input as it comes in, regardless of what is consuming it, so there is no blocking on reads: a single 'listener'. The single listener thread passes data off to worker threads of different types, rather than dedicating one thread to each connection. Likewise, there is no blocking on writes, so the data handler can run with the data separately. Because this solution is mostly I/O reads and writes, it doesn't occupy much CPU time, and your application can use that time for whatever else it needs.

In a thread-pool solution you have individual threads handling each connection, so they have to share time and context-switch in and out, each one 'listening'. In this solution the CPU and I/O operations live in the same thread, which gets a time slice, so each thread ends up waiting (blocked) for its I/O operations to complete, which could traditionally be handled without using CPU time.

Google "non-blocking I/O" for more detail; you can probably find some comparisons with thread pools too.

(if anyone can clarify these points, feel free)
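The single-listener, non-blocking model described above can be sketched with Python's standard `selectors` module. This is a minimal illustration, not any particular server's implementation: one selector multiplexes the listening socket and every accepted connection, so no accept or read ever blocks the loop, and the echo callback stands in for a real request handler.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    # Called when the listening socket is readable: a connection is pending.
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    # Called only when the socket is ready, so recv() will not block.
    data = conn.recv(1024)
    if data:
        conn.sendall(b"echo: " + data)
    else:
        sel.unregister(conn)
        conn.close()

def serve_once(rounds):
    """Run the event loop for a fixed number of readiness batches."""
    for _ in range(rounds):
        for key, _ in sel.select(timeout=1):
            key.data(key.fileobj)  # key.data holds the callback

server = socket.create_server(("127.0.0.1", 0))
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)
port = server.getsockname()[1]

# Drive the loop with a plain blocking client in the same process.
client = socket.create_connection(("127.0.0.1", port))
serve_once(1)                        # handles the accept event
client.sendall(b"hello")
serve_once(1)                        # handles the read event
print(client.recv(1024).decode())    # -> echo: hello
```

A real server would loop over `sel.select()` forever and register a write interest when `sendall` could block; the fixed `serve_once` rounds are only there to make the sketch terminate.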

寻找我们的幸福 2024-07-16 02:45:43


Event-driven applications are not inherently faster.

From Why Events Are a Bad Idea (for High-Concurrency Servers):

We examine the claimed strengths of events over threads and show that the
weaknesses of threads are artifacts of specific threading implementations
and not inherent to the threading paradigm. As evidence, we present a
user-level thread package that scales to 100,000 threads and achieves
excellent performance in a web server.

This was in 2003. Surely the state of threading on modern OSs has improved since then.

Writing the core of an event-based server means re-inventing cooperative multitasking (Windows 3.1 style) in your code, most likely on an OS that already supports proper pre-emptive multitasking, and without the benefit of transparent context switching. This means that you have to manage state on the heap that would normally be implied by the instruction pointer or stored in a stack variable. (If your language has them, closures ease this pain significantly. Trying to do this in C is a lot less fun.)
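The point about stack state moving to the heap can be sketched concretely. In the toy below (the loop and callback names are hypothetical, purely for illustration), each blocking step becomes a callback, and the state a thread's stack would have held across blocking calls (here, `request`) must instead be captured in closures:

```python
from collections import deque

class Loop:
    """Toy cooperative event loop: callbacks run one at a time from a queue."""
    def __init__(self):
        self.tasks = deque()

    def call_soon(self, fn, *args):
        self.tasks.append((fn, args))

    def run(self):
        while self.tasks:
            fn, args = self.tasks.popleft()
            fn(*args)

results = []

def handle_evented(loop, request):
    # In threaded code, `request` would just sit in a local variable while
    # the thread blocked. Here each "blocking" step is a queued callback,
    # and the closures keep `request` alive across steps.
    def on_read(data):
        def on_processed(body):
            results.append((request, body))          # final step
        loop.call_soon(on_processed, data.upper())    # fake async "process"
    loop.call_soon(on_read, "payload-" + request)     # fake async "read"

loop = Loop()
handle_evented(loop, "req1")
handle_evented(loop, "req2")
loop.run()
print(results)  # -> [('req1', 'PAYLOAD-REQ1'), ('req2', 'PAYLOAD-REQ2')]
```

Note how two requests interleave through one loop with no threads at all; in C, without closures, each callback would need an explicit heap-allocated context struct carrying `request` along.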

This also means you take on all of the caveats cooperative multitasking implies. If one of your event handlers takes a while to run for any reason, it stalls that event thread. Totally unrelated requests lag. Even lengthy CPU-intensive operations have to be sent somewhere else to avoid this. When you're talking about the core of a high-concurrency server, 'lengthy operation' is a relative term, on the order of microseconds for a server expected to handle 100,000 requests per second. I hope the virtual memory system never has to pull pages from disk for you!

Getting good performance from an event-based architecture can be tricky, especially when you consider latency and not just throughput. (Of course, there are plenty of mistakes you can make with threads as well. Concurrency is still hard.)

A couple of important questions for the author of a new server application:

  • How do threads perform on the platforms you intend to support today? Are they going to be your bottleneck?
  • If you're still stuck with a bad thread implementation: why is nobody fixing this?
ぃ弥猫深巷。 2024-07-16 02:45:43


It really depends on what you're doing; event-based programming is certainly tricky for nontrivial applications. A web server is actually a fairly trivial, well-understood problem, and both event-driven and threaded models work pretty well on modern OSs.

Correctly developing more complex server applications in an event model is generally pretty tricky; threaded applications are much easier to write. This, rather than performance, may be the deciding factor.

秋心╮凉 2024-07-16 02:45:43

It isn't really about the threads. It is about the way the threads are used to service requests. For something like lighttpd you have a single thread that services multiple connections via events. For older versions of Apache you had a process per connection, and each process woke up on incoming data, so you ended up with a very large number of processes when there were lots of requests. Now, however, with the event MPM, Apache is event-based as well; see the Apache MPM event documentation.
