Distributed application server

Posted 2024-10-31 17:37:56


I have an application server. At a high level, this application server has users and groups. Users are part of one or more groups, and the server keeps all users aware of the state of their groups and other users in their groups. There are three major functions:

  • Updating and broadcasting meta-data relating to users and their groups; for example, a user logs in and the server updates this user's status and broadcasts it to all online users in this user's groups (see the sketch after this list).
  • Acting as a proxy between two or more users; the client takes advantage of peer-to-peer transfer, but in the case that two users are unable to directly connect to each other, the server will act as a proxy between them.
  • Storing data for offline users; if a client needs to send some data to a user who isn't online, the server will store that data for a period of time and then send it when the user next comes online.
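
For concreteness, here is a minimal single-server sketch of the first function (status update plus fan-out to group members). The class and method names (Server, set_status, send) and the message format are illustrative assumptions, not details from the original application.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    user_id: str
    online: bool = False
    groups: set = field(default_factory=set)   # group ids this user belongs to

class Server:
    def __init__(self):
        self.users = {}            # user_id -> User
        self.group_members = {}    # group_id -> set of user_ids

    def set_status(self, user_id, online):
        """Update one user's status and fan it out to online members of their groups."""
        user = self.users[user_id]
        user.online = online
        notified = set()
        for group_id in user.groups:
            for member_id in self.group_members.get(group_id, set()):
                member = self.users.get(member_id)
                if member and member.online and member_id != user_id and member_id not in notified:
                    self.send(member_id, {"event": "status", "user": user_id, "online": online})
                    notified.add(member_id)

    def send(self, user_id, message):
        """Stand-in for the real client connection; a real server would write to a socket."""
        print(f"-> {user_id}: {message}")
```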

I'm trying to modify this application to allow it to be distributed across multiple servers, not necessarily all on the same local network. However, I have a requirement that backwards compatibility with old clients cannot be broken; essentially, the distribution needs to be transparent to the client.

The biggest problem I'm having is handling the case of a user connected to Server A making an update that needs to be broadcast to a user on Server B.

By extension, an even bigger problem is when a user on Server A needs the server to act as a proxy between them and a user on Server B.

My initial idea was to try to assign each user a preferred server, using some algorithm that takes into account which users they need to communicate with. This could reduce the number of users who may need to communicate with users on other servers.
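
As one possible concretization of that idea, here is a Python sketch of a greedy heuristic: each user is placed on the server that already hosts the most members of their groups, falling back to the least-loaded server. The function name and data shapes are assumptions; the original post does not commit to any particular algorithm.

```python
from collections import Counter

def assign_users(user_groups, servers):
    """user_groups: user_id -> set of group_ids. Returns user_id -> server_id."""
    assignment = {}
    load = Counter({s: 0 for s in servers})
    group_location = {}   # group_id -> Counter of how many placed members sit on each server

    # Place the most-connected users first so their group-mates tend to follow them.
    order = sorted(user_groups, key=lambda u: len(user_groups[u]), reverse=True)
    for user in order:
        votes = Counter()
        for group in user_groups[user]:
            votes.update(group_location.get(group, Counter()))
        if votes:
            # Prefer the server holding the most group-mates; break ties by lowest load.
            server = max(votes, key=lambda s: (votes[s], -load[s]))
        else:
            server = min(servers, key=lambda s: load[s])   # least-loaded server
        assignment[user] = server
        load[server] += 1
        for group in user_groups[user]:
            group_location.setdefault(group, Counter())[server] += 1
    return assignment
```

A one-shot assignment like this would also have to be re-evaluated as group membership changes, which the sketch ignores.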

However, this only minimizes how often users on different servers will need to communicate. I still have the problem of achieving the communication between users on different servers.

The only solution I could come up with for this is having the servers connect to each other when they need to deal with a user connected to a different server.

For example, if I'm connected to Server A and I need a proxy with another user connected to Server B, I would ask Server A for a proxy connection to this user. Server A would see that the other user is connected to Server B, so it would make a 'relay' connection to Server B. This connection would just forward my requests to Server B and the responses to me.
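
A bare-bones sketch of that relay, using Python's asyncio just to make the idea concrete: Server A opens a TCP connection to Server B and copies bytes in both directions, so the client's protocol is untouched. The host name, port, and the absence of any framing or authentication are assumptions for illustration.

```python
import asyncio

async def pipe(reader, writer):
    """Copy bytes from reader to writer until EOF."""
    try:
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def relay_to_server_b(client_reader, client_writer, peer_host="server-b", peer_port=9000):
    """Server A forwards both directions between the local client and Server B."""
    b_reader, b_writer = await asyncio.open_connection(peer_host, peer_port)
    await asyncio.gather(
        pipe(client_reader, b_writer),   # client -> Server B
        pipe(b_reader, client_writer),   # Server B -> client
    )
```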

The problem with this is that it would increase bandwidth usage, which is already extremely high. Unfortunately, I don't see any other solution.

Are there any well known or better solutions to this problem? It doesn't seem like it's very common for a distributed system to have the requirement of communication between users on different servers.


Comments (1)

深海里的那抹蓝 2024-11-07 17:37:56


I don't know how much flexibility you have in modifying the existing server. The way I did this a long time ago was to have all the servers keep a TCP connection open to each other. I used a UDP broadcast which told the other servers about each other and allowed them to connect to new servers and remove servers that stopped sending the broadcast.
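
Something along these lines could implement the discovery scheme described above; the port, message format, and timeout are assumptions rather than details from the answer.

```python
import json
import socket
import time

DISCOVERY_PORT = 40000

def announce(server_id, tcp_port):
    """Broadcast one heartbeat; call this on a timer (e.g. every few seconds)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    payload = json.dumps({"server_id": server_id, "tcp_port": tcp_port}).encode()
    sock.sendto(payload, ("255.255.255.255", DISCOVERY_PORT))
    sock.close()

def listen(known, timeout=15.0):
    """known: dict mapping server_id -> last-heard timestamp; silent peers are dropped."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", DISCOVERY_PORT))
    sock.settimeout(1.0)
    while True:
        try:
            data, addr = sock.recvfrom(1024)
            info = json.loads(data)
            known[info["server_id"]] = time.time()   # refresh last-heard timestamp
        except socket.timeout:
            pass
        # Forget servers that stopped broadcasting.
        now = time.time()
        for server_id in [s for s, last in known.items() if now - last > timeout]:
            del known[server_id]
```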

Then, every time a user connects to a server, that server unicasts a TCP message to all the servers it is connected to, and all the servers keep a list of users and which server they are on.
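
A minimal sketch of that user-location table, assuming each peer server object exposes a `send` method for server-to-server messages; the message shapes and method names are made up for illustration.

```python
class UserDirectory:
    def __init__(self, local_server_id):
        self.local_server_id = local_server_id
        self.location = {}   # user_id -> server_id

    def on_local_connect(self, user_id, peers):
        """Called when a user connects here; tell every peer server about it."""
        self.location[user_id] = self.local_server_id
        for peer in peers:
            peer.send({"type": "user_online", "user": user_id, "server": self.local_server_id})

    def on_peer_notice(self, msg):
        """Called when a peer server announces one of its users going on or offline."""
        if msg["type"] == "user_online":
            self.location[msg["user"]] = msg["server"]
        elif msg["type"] == "user_offline":
            self.location.pop(msg["user"], None)

    def server_for(self, user_id):
        """Used when routing: returns the hosting server_id, or None if offline/unknown."""
        return self.location.get(user_id)
```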

Then, as you suggest, if you get a message from one user destined for a user on another server, you have to relay it to that server. The servers really need to be on the same LAN for this to work well.

You can run the server-to-server communication in a thread and essentially simulate the remote user being on the same server.

However, maintaining the user lists and sending messages is prone to race conditions (for example, a user drops off while you are relaying a message from one server to another).

Maintaining the server code was a nightmare, and this is really not the most efficient way to implement scalable servers. But if you have to use the legacy server code base, then you really do not have many options.

If you can, look into using a language that supports remote processes and nodes, such as Erlang.

An alternative might be to use a message queue system like RabbitMQ or ActiveMQ, and have the servers talk to each other through that. Those systems are designed to be scalable, and usually work off a publish/subscribe mechanism.
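
As a sketch of the message-queue route, using RabbitMQ through the pika Python client: every server publishes its events to a fanout exchange and subscribes to everyone else's. The exchange name, message format, and connection details are assumptions.

```python
import json
import pika

EXCHANGE = "server-events"

def open_channel(host="localhost"):
    """Connect to the broker and declare a fanout (publish/subscribe) exchange."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
    channel = connection.channel()
    channel.exchange_declare(exchange=EXCHANGE, exchange_type="fanout")
    return channel

def publish(channel, event):
    """Every server bound to the exchange receives the event."""
    channel.basic_publish(exchange=EXCHANGE, routing_key="", body=json.dumps(event))

def subscribe(channel, handle):
    """Each server gets its own exclusive, auto-named queue bound to the exchange."""
    queue = channel.queue_declare(queue="", exclusive=True).method.queue
    channel.queue_bind(exchange=EXCHANGE, queue=queue)
    channel.basic_consume(
        queue=queue,
        on_message_callback=lambda ch, method, props, body: handle(json.loads(body)),
        auto_ack=True,
    )
    channel.start_consuming()
```

The trade-off is that every cross-server event now passes through the broker, so the broker itself has to be sized and kept available.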
