Instant Messaging Server Design

Published 2024-10-12 00:58:07

Let's suppose we have an instant messaging application, client-server based, not p2p. The actual protocol doesn't matter; what matters is the server architecture. The said server can be coded to operate in single-threaded, non-parallel mode using non-blocking sockets, which by definition return from operations like read and write effectively immediately. This very feature of non-blocking sockets allows us to use some sort of select/poll function at the very core of the server, waste next to no time in the actual socket read/write operations, and spend the time instead on processing all this information. Properly coded, this can be very fast, as far as I understand. But there is a second approach, and that is to multithread aggressively, creating a new thread per task (obviously using some sort of thread pool, because thread creation can be (very) slow on some platforms and under some circumstances) and letting those threads work in parallel, while the main background thread handles accept() and such. I've seen this approach explained in various places over the Net, so it obviously does exist.
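The single-threaded design described above might be sketched like this in Python, using the standard `selectors` module (the `serve` function and the `stop` callback are illustrative names, not from the question):

```python
import selectors
import socket

def serve(listener: socket.socket, sel: selectors.BaseSelector, stop) -> None:
    """Single-threaded event loop: accept and echo, never blocking on I/O."""
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ, data=None)
    while not stop():
        for key, _mask in sel.select(timeout=0.1):
            if key.data is None:                  # listening socket: accept()
                conn, _addr = key.fileobj.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ, data="client")
            else:                                 # client socket: read and echo
                chunk = key.fileobj.recv(4096)
                if chunk:
                    # Fine for a sketch; a real server would queue the data and
                    # wait for EVENT_WRITE if the send buffer were full.
                    key.fileobj.sendall(chunk)
                else:                             # peer closed the connection
                    sel.unregister(key.fileobj)
                    key.fileobj.close()
```

One thread multiplexes every connection; the only place it waits is the `select()` call itself.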

Now the question is: if we have non-blocking sockets, immediate read/write operations, and a simple, easily coded design, why does the second variant even exist? What problems are we trying to overcome with the second design, i.e. threads? AFAIK those are usually used to work around slow and possibly blocking operations, but no such operations seem to be present here!

1 comment

情愿 · 2024-10-19 00:58:07

I'm assuming you're not talking about having a thread per client, as such a design is usually chosen for completely different reasons, but rather about a pool of threads, each handling several concurrent clients.

The reason for that architecture vs. a single-threaded server is simply to take advantage of multiple processors. You're doing more work than simply I/O: you have to parse the messages, do various work, maybe even run some more heavyweight crypto algorithms. All this takes CPU. If you want to scale, taking advantage of multiple processors will allow you to scale even more, and/or keep the per-client latency even lower.
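That pattern, an I/O loop handing per-message CPU work to a pool sized to the machine's cores, might look like this (the `handle_message` function and the sha256 stand-in are illustrative; note that in CPython specifically, pure-Python CPU work needs a process pool to escape the GIL, whereas C-backed work such as hashing releases it):

```python
from concurrent.futures import ThreadPoolExecutor
import hashlib
import os

# The event-loop thread keeps doing I/O; per-message CPU work (parsing,
# crypto, ...) is handed off to a pool sized to the machine's cores.
pool = ThreadPoolExecutor(max_workers=os.cpu_count())

def handle_message(raw: bytes) -> str:
    # sha256 stands in for heavier per-message processing.
    return hashlib.sha256(raw).hexdigest()

futures = [pool.submit(handle_message, m) for m in (b"msg-1", b"msg-2")]
digests = [f.result() for f in futures]
```

The socket loop submits work and keeps polling; results come back via futures instead of blocking the loop.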

Some of the gain in such a design can be offset a bit by the fact that you might need more locking in a multithreaded environment, but done right, and certainly depending on what you're doing, it can be a huge win - at the expense of more complexity.

Also, this might help overcome OS limitations. The I/O paths in the kernel might get more distributed among the processors. Not all operating systems are fully able to thread the I/O coming from a single-threaded application. Back in the old days there weren't all the great alternatives to the old *nix select(), which usually had a file-descriptor limit of 1024, and similar APIs started degrading severely once you told them to monitor too many sockets. Spreading all those clients over multiple threads or processes helped overcome that limit.
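For context on that select() limit: newer mechanisms such as epoll (Linux) and kqueue (BSD/macOS) aren't bound by the fixed FD_SETSIZE. In Python, for instance, `selectors.DefaultSelector` transparently picks the best mechanism the OS offers:

```python
import selectors

# DefaultSelector resolves to EpollSelector on Linux, KqueueSelector on
# BSD/macOS, and falls back to poll/select elsewhere -- so classic select()'s
# FD_SETSIZE ceiling (historically 1024) only applies as a last resort.
sel = selectors.DefaultSelector()
print(type(sel).__name__)   # e.g. 'EpollSelector' on a typical Linux box
sel.close()
```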

As for a 1:1 mapping between threads and clients, there are several reasons to implement that architecture:

  • A simpler programming model, which can mean fewer hard-to-find bugs and faster implementation.

  • Support for blocking APIs. These are all over the place. Having one thread handle many/all of the clients and then go on to make a blocking call to a database is going to stall everyone. Even reading files can block your application, and you usually can't monitor regular file handles/descriptors for I/O events - or when you can, the programming model is often exceptionally complicated.

The drawback here is that it won't scale, at least not with the most widely used languages/frameworks. Having thousands of native threads will hurt performance. Though some languages provide a much more lightweight approach here, such as Erlang and, to some extent, Go.
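For contrast with the earlier event loop, the thread-per-client model from the bullet points might be sketched like this (`client_worker` and `serve_blocking` are illustrative names; the sleep stands in for a blocking database call):

```python
import socket
import threading
import time

def client_worker(conn: socket.socket) -> None:
    """One thread per client: blocking calls stall only this client."""
    data = conn.recv(1024)      # blocking read: fine, nobody else waits on us
    time.sleep(0.05)            # stand-in for a blocking database query
    conn.sendall(data.upper())
    conn.close()

def serve_blocking(listener: socket.socket, n_clients: int) -> None:
    for _ in range(n_clients):
        conn, _addr = listener.accept()   # main thread only accepts
        threading.Thread(target=client_worker, args=(conn,), daemon=True).start()
```

Every blocking API stays usable with no select loop at all; the cost, as noted above, is one native thread per connection.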
