Opinions on a simple approach to implementing server load sharing on the client side
I was thinking about load sharing for a server that could be accomplished by client-side execution with little or no help from the server's end, and came up with this -
Pardon me if it sounds silly, but here I go -
- The server has a table which stores server names, server IP addresses and the MAC IDs corresponding to those IP addresses (yes, this is a very Windows-centric approach).
- Every time a client logs on to the main server, it sends out a query to which the server returns the IP addresses of all the server entries in its table along with their respective MAC IDs (I'm assuming that a single server has multiple rsynced copies).
- The client then runs a traceroute to each of these IP addresses and stores them in an array in order of increasing hop count.
- Iterate through this array and use ARP to resolve the IP addresses into MAC IDs. Then compare these MAC IDs with those fetched from the main server in step 1. If there's a match, the server to connect to is selected based on the hop count + the MAC ID match + a metric that signifies the load (the number of connections to that server at that instant in time). A rough sketch of this selection logic follows the list.
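Something like the Python sketch below is what I have in mind for the client side. The server names, table contents, load figures, and scoring weights are all made up for illustration; the traceroute/ARP parsing is platform-dependent, and ARP will only resolve MACs for hosts on the local subnet:

```python
import platform
import re
import subprocess

# Hypothetical copy of the server table described above:
# name, IP address, and the MAC ID recorded for that IP.
SERVER_TABLE = [
    {"name": "mirror-1", "ip": "192.168.1.10", "mac": "00-1a-2b-3c-4d-5e"},
    {"name": "mirror-2", "ip": "192.168.1.11", "mac": "00-1a-2b-3c-4d-5f"},
]

def hop_count(ip: str) -> int:
    """Count hops to `ip` by parsing the system traceroute output."""
    if platform.system() == "Windows":
        cmd = ["tracert", "-d", "-w", "500", ip]
    else:
        cmd = ["traceroute", "-n", "-w", "1", ip]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    # Lines beginning with a hop number are counted as hops.
    return sum(1 for line in out.splitlines() if re.match(r"\s*\d+\s", line))

def arp_mac(ip: str) -> str | None:
    """Look up the MAC for `ip` in the ARP cache (only works on the local subnet)."""
    cmd = ["arp", "-a", ip] if platform.system() == "Windows" else ["arp", "-n", ip]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    m = re.search(r"([0-9a-fA-F]{2}[:-]){5}[0-9a-fA-F]{2}", out)
    return m.group(0).lower().replace(":", "-") if m else None

def score(entry: dict, load: int) -> float:
    """Lower is better: hops plus load, minus an arbitrary bonus for a MAC match."""
    hops = hop_count(entry["ip"])
    mac_match = arp_mac(entry["ip"]) == entry["mac"].lower()
    return hops + load - (5 if mac_match else 0)

# The per-server load (connections at that instant) would have to come from
# somewhere; here it's just a hard-coded example.
loads = {"mirror-1": 12, "mirror-2": 3}
best = min(SERVER_TABLE, key=lambda e: score(e, loads[e["name"]]))
print("connect to:", best["name"], best["ip"])
```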
Opinions about how this idea sounds are welcome.
Comments (1)
I don't think load sharing could be based exclusively on hop count. And why would you need the MAC addresses? If one server is fast to reach but mighty busy, it may not be the best choice. You might add a service to the main server to keep track of the number of clients for every server and use that as a criterion... but that sounds a lot like the old, simple and plain load balancing.
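Roughly this kind of thing, as a minimal sketch (mirror names are made up): the main server counts active clients per mirror and hands out the least-loaded one, i.e. plain least-connections balancing.

```python
from collections import Counter

class LoadTracker:
    """Main-server service that tracks active clients per mirror."""

    def __init__(self, mirrors):
        self.active = Counter({name: 0 for name in mirrors})

    def pick(self) -> str:
        # Least-connections selection: return the mirror with the fewest clients.
        return min(self.active, key=self.active.get)

    def connected(self, name: str):
        self.active[name] += 1

    def disconnected(self, name: str):
        self.active[name] -= 1

tracker = LoadTracker(["mirror-1", "mirror-2"])
tracker.connected("mirror-1")
print(tracker.pick())  # -> "mirror-2"
```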