How to design a client-server architecture

Published 2024-10-09 13:35:08

I'd like to know a server (TCP-based) architecture that supports a large number of clients (at least 10K) to implement a FIX server. My points are:
How do we design it?
How do we listen on the open port? Use select, poll, or some other function?
How do we process the clients' responses? At large scale we cannot create one thread for each client.
Should the processing of responses happen in a different executable, sharing requests and responses with the server executable through IPC?
There is much more to it. I would appreciate it if anyone could explain it or provide any links.
Thanks

Comments (3)

最冷一天 2024-10-16 13:35:08

An excellent resource for information on this topic is The C10K problem. Although the numbers there look a little dated, the techniques are still applicable today.

我们只是彼此的过ke 2024-10-16 13:35:08

The architecture depends on what you want to do with the clients' incoming data. My guess is that for every incoming message you would perform some computation and probably also return a response.

In that case I would create one main listener thread that receives all the incoming messages (actually, if your hardware has more than one physical network device, I would use a listener thread per device and make sure each one is listening to a specific device).
Get the number of CPUs on your machine, create a worker thread for each CPU, and bind each thread to one CPU (maybe the number of worker threads should be num_of_cpu - 1, to leave a CPU available for the listener and dispatcher).

Each worker thread has a queue and a semaphore; the main listener thread just pushes the incoming data into those queues. There are many ways to perform load balancing (more on that later).

Each worker thread just works on the requests given to it and puts the response on another queue that is read by the dispatcher.

The dispatcher: there are two options here. Use a dedicated thread for the dispatcher (or a thread per network device, as for the listeners), or have the dispatcher actually be the same thread as the listener.
There is some advantage to putting them both on the same thread, since it makes it easier to detect lost socket connections and to use the same fds for both reading and writing without thread synchronization. However, using two different threads might give better performance; that needs to be tested.

Note about load balancing:
This is a topic of its own.
The simplest thing is to use one queue for all worker threads, but the problem is that they have to lock in order to pop items, and the locking can hurt performance (though you get the most balanced load).

Another quite simple approach is to have a private queue for every worker and perform round-robin insertion. Every X cycles, check the size of all the queues; if some queues are much larger than others, leave them out for the next X cycles and then recheck them. This is not the best approach, but it is simple to implement and gives some load balancing with no locking needed.

By the way, there is a way to implement a queue between two threads without blocking, but that is also another topic.

I hope it helps,
Guy

小梨窩很甜 2024-10-16 13:35:08

If the client and server are on a secure network, then the security aspect can be minimal, to the extent that the transfers are encrypted. If the clients and the server are not on a secure network, you first want the server and client to authenticate each other and then initiate encrypted data transfer. For data transfer, server-side authentication should suffice. At the end of this authentication, use the session key to generate an encrypted (symmetric) data stream. Consider using TFTP; it is simple to implement and scales reasonably well.
