How do I load balance TCP traffic?

Posted on 2024-12-27 20:06:33

I'm trying to determine how to load balance TCP traffic. I understand how HTTP load balancing works because it is a simple Request / Response architecture. However, I'm unsure of how you load balance TCP traffic when your servers are trying to write data to other clients. I've attached an image of the work flow for a simple TCP chat server where we want to balance traffic across N application servers. Are there any load balancers out there that can do what I'm trying to do, or do I need to research a different topic? Thanks.

[Image: workflow diagram of the simple TCP chat server, with traffic balanced across N application servers]


Comments (1)

怪我太投入 2025-01-03 20:06:33

Firstly, your diagram assumes that the load balancer is acting as a (TCP) proxy, which is not always the case. Often Direct Routing (or Direct Server Return) is used, or Destination NAT is performed. In both cases the connection between the backend server and the client is direct, so in this setup it is essentially the TCP handshake that is distributed amongst the backend servers.
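As a rough illustration of the Direct Routing approach, here is a minimal sketch using Linux Virtual Server (ipvsadm). The VIP, backend addresses, and port are placeholders and are not from the original answer:

```sh
# Create a virtual TCP service on the VIP with round-robin scheduling
ipvsadm -A -t 203.0.113.10:5222 -s rr

# Add the real (backend) servers in Direct Routing mode (-g = gatewaying)
ipvsadm -a -t 203.0.113.10:5222 -r 10.0.0.11:5222 -g -w 1
ipvsadm -a -t 203.0.113.10:5222 -r 10.0.0.12:5222 -g -w 1

# Each backend also needs the VIP configured on a loopback alias with ARP
# replies suppressed, so it accepts packets addressed to the VIP and replies
# straight back to the client.
```

Because each backend replies directly to the client, the load balancer only ever sees the client-to-server half of the conversation, which is why the application on the backend still sees the real client address.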

Obviously TCP proxies do exist (HAProxy being one), in which case the proxy manages both sides of the connection, so your app would need to be able to identify the client by the incoming IP/port (which would happen to be from the proxy rather than the client). The proxy will handle getting the messages back to the client.
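For the proxy approach, a minimal HAProxy TCP-mode configuration might look like the sketch below; the port, server names, and addresses are placeholders:

```
frontend chat_in
    mode tcp
    bind *:5222
    default_backend chat_servers

backend chat_servers
    mode tcp
    balance leastconn                 # spread long-lived connections evenly
    # send-proxy forwards the real client IP/port via the PROXY protocol,
    # which only helps if the chat server is written to parse it; without
    # it, the application sees the proxy's address, as described above.
    server app1 10.0.0.11:5222 check send-proxy
    server app2 10.0.0.12:5222 check send-proxy
```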

Either way, it comes down to application design, as I would imagine the tricky bit is having a common session store (a database of some kind, or a key=>value store such as Redis), so that when your app server says "I need to send a message to Frank" it can determine which backend server Frank is connected to (from the store), and signal that server to deliver the message. You reduce the problem of connections (from the same client) moving between backend servers by using session persistence / sticky sessions (all load balancers can do this), or by using something intrinsically persistent like a WebSocket.
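To make the session-store idea more concrete, below is a rough Python sketch using Redis (redis-py) as the shared store and its pub/sub as the cross-server signal. Every name here (the hash key, channel names, server ID, message format) is invented for illustration and is not part of the original answer:

```python
# Sketch of the "common session store" idea with Redis (redis-py).
# Keys, channel names, and the message format are made up for illustration.
import json
import redis

r = redis.Redis(host="redis.internal", port=6379, decode_responses=True)

SERVER_ID = "app-server-1"   # unique ID of this application server

def register_connection(username: str) -> None:
    """Record which app server holds this user's TCP connection."""
    r.hset("chat:connections", username, SERVER_ID)

def unregister_connection(username: str) -> None:
    """Remove the mapping when the client disconnects."""
    r.hdel("chat:connections", username)

def send_message(to_user: str, text: str) -> None:
    """Route a message to whichever server currently holds the recipient."""
    target_server = r.hget("chat:connections", to_user)
    if target_server is None:
        return  # recipient is offline
    # Each app server subscribes to its own channel and writes the message
    # out on the recipient's TCP socket when it arrives.
    r.publish(f"chat:server:{target_server}",
              json.dumps({"to": to_user, "text": text}))

def listen_for_messages(handle) -> None:
    """Run on each app server: deliver messages published for its clients."""
    pubsub = r.pubsub()
    pubsub.subscribe(f"chat:server:{SERVER_ID}")
    for item in pubsub.listen():
        if item["type"] == "message":
            handle(json.loads(item["data"]))
```

Each application server would call register_connection() when a client connects and run listen_for_messages() in a background thread, writing each delivered payload to the TCP socket it holds for that user.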

This is probably a vast oversimplification as I have no experience with chat software. Obviously DB servers themselves can be distributed amongst several machines, for fault-tolerance and load balancing.
