Chat server - persistent TCP or new connection for each poll
What's the best practice for scalable servers which need to maintain a list of active users?

- Should I open a persistent TCP connection for each client, on which the server sends update messages? This could lead to many open connections and probably no traffic for many seconds. Is this a problem in TCP?
- Or would it be better to let the client poll for updates periodically (with a new TCP connection each time)?

How do chat servers or large online games handle this?
Comments (1)
Personally I'd go for a single persistent TCP connection per client, to avoid a) the additional work of creating and destroying connections and the additional latency of all the TCP packets involved, and b) creating lots of sockets in TIME_WAIT on either the clients or the server. There's simply no good reason to create and destroy the connections.
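As a rough illustration of that approach, here is a minimal sketch using Python's asyncio streams; the `handle_client`/`broadcast` names, the port, and the line-based framing are assumptions made for the example rather than anything prescribed in the answer.

```python
import asyncio

clients = set()  # StreamWriters for all currently connected clients

async def handle_client(reader, writer):
    # One long-lived TCP connection per client: no repeated connect/close,
    # so no per-message handshake latency and no TIME_WAIT churn.
    clients.add(writer)
    try:
        while True:
            data = await reader.readline()   # whatever the client sends, if anything
            if not data:                     # empty read: client closed the connection
                break
    finally:
        clients.discard(writer)
        writer.close()
        await writer.wait_closed()

async def broadcast(message: bytes):
    # The server pushes updates over the connections it already holds open.
    for w in list(clients):
        w.write(message)
        await w.drain()

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

Application code would call `broadcast()` whenever the active-user list changes; each client receives the update over the connection it already has open, with no reconnect.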
Depending on your platform there may be various tricks to deal with the platform-specific problems you can get when you have lots of connections open, and by lots I mean tens of thousands. For example, on Windows, using overlapped I/O and I/O completion ports is a good design for lots of connections, and if your connections are generally idle most of the time you might find that the 'zero byte read' trick lets you handle more connections on lesser hardware; but that's something you can add once you know you have a problem due to the amount of buffer space tied up waiting for reads that only complete infrequently.
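The zero-byte-read trick sits below most high-level APIs, but as a hedged illustration of building on I/O completion ports without writing raw Winsock code: Python's asyncio, for instance, exposes IOCP through its proactor event loop, which can be selected explicitly as sketched below (it is already the default on Windows since Python 3.8).

```python
import asyncio
import sys

if sys.platform == "win32":
    # Use the IOCP-backed proactor event loop (the default since Python 3.8).
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())

# asyncio.run(main()) would then drive the server from the previous sketch
# through overlapped I/O and an I/O completion port under the hood.
```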
I wouldn't have the clients polling the server. It's inefficient. Have the server publish data to the clients as and when there is data available. This would allow the server to control the workload somewhat by letting it decide how often to send the data to the clients - it could either send every time new data became available for a client or send after it had batched up some data and waited a short while, etc. If the server is pushing the data then the server (the weak point, the place that might get overwhelmed by client demand) has more control over the work that it will need to do.
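A sketch of that batching idea, again in asyncio terms; `batching_pusher`, the queue, and the 0.25-second interval are illustrative choices, not part of the answer.

```python
import asyncio
from typing import Awaitable, Callable

async def batching_pusher(updates: asyncio.Queue,
                          send_batch: Callable[[bytes], Awaitable[None]],
                          interval: float = 0.25) -> None:
    # The server, not the clients, sets the pace: drain whatever has queued up
    # and push it out in one write per client every `interval` seconds.
    while True:
        await asyncio.sleep(interval)
        batch = []
        while not updates.empty():
            batch.append(updates.get_nowait())
        if batch:
            await send_batch(b"".join(batch))
```

A caller would run this alongside the server, e.g. `asyncio.create_task(batching_pusher(updates, broadcast))`, and put updates on the queue as they are produced; tuning the interval trades a little latency for fewer, larger writes.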
If you have each client polling then a) you're generating more network noise, as each client sends a message to ask the server if it has anything to send, and b) you're generating more work for the server, as it has to respond to every poll. The server knows when there's data for a client; let it be responsible for telling the clients.
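For contrast, the polling pattern being argued against might look like the sketch below; the host, port, interval and the `ANY_UPDATES?` request line are invented for the example. Every iteration pays a TCP handshake and teardown, usually only to learn that nothing has changed.

```python
import asyncio

async def poll_forever(host="127.0.0.1", port=9000, interval=2.0):
    while True:
        # A brand-new TCP connection for every poll: handshake, request, teardown.
        reader, writer = await asyncio.open_connection(host, port)
        writer.write(b"ANY_UPDATES?\n")   # extra request traffic on every poll
        await writer.drain()
        reply = await reader.readline()   # frequently empty: nothing to report
        writer.close()
        await writer.wait_closed()        # and the closing side lands in TIME_WAIT
        await asyncio.sleep(interval)
```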