Elastic load balancing
I am wondering if there is a way to do elastic load balancing. I have read about HAProxy, but it seems I need to bring down HAProxy to reconfigure it to work with more or fewer machines.
To make the picture clearer: I have a cluster of web backends (let's say Apache + mod_rails). I can monitor the usage of the backends and bring up another machine with the same content very quickly (on the order of seconds) if traffic gets very high. However, I don't know how to make HAProxy use the additional backends without restarting it (which hurts availability). Is there a way to do this with HAProxy or some other load balancer?
I was thinking there might be a way to have two load balancers for redundancy. Then I could bring down one, update its configuration, bring it back up, and then take down the other. But I don't have a good idea of how to do this.
Comments (2)
If you need to add new servers, you have to restart it, though it's almost undetectable when you start the new process with "-sf $oldpid", as both the new and old processes run in parallel.
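A sketch of what such a soft reload might look like, assuming the config lives at /etc/haproxy/haproxy.cfg and the pid file at /var/run/haproxy.pid (both paths are assumptions, not from the answer):

    # Start a new process that takes over the listening sockets, then
    # ask the old process ($oldpid) to finish its connections and exit
    haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)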
If you need to temporarily disable a server, you have several options:
1) (the preferred one): enable "http-check disable-on-404" and manipulate your server's check response to return 404. This will disable new connections but will still allow existing users to finish their sessions. Then you arrange for it to return 500 and you can stop your process. The advantage of this method is that you never have to touch the LB; everything is controlled from the server you're operating on. This is how most sensible infrastructures do it.
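A minimal sketch of a backend using this technique; the backend name, server addresses, and /health URL are illustrative assumptions, not from the original answer:

    backend app
        # Probe a URL whose response the application itself can control
        option httpchk GET /health
        # A 404 on the check marks the server as "going down": no new
        # connections are sent to it, but existing sessions may finish
        http-check disable-on-404
        server web1 10.0.0.11:80 check
        server web2 10.0.0.12:80 check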
2) the easy one: using socat, connect to the stats socket and disable the server you intend to work on:
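For example (a sketch assuming the config declares "stats socket /var/run/haproxy.stat level admin" and a hypothetical server web1 in backend app):

    # Stop sending traffic to app/web1 via the runtime stats socket
    echo "disable server app/web1" | socat stdio /var/run/haproxy.stat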
Then enable it once you're finished:
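Again a sketch, using the same assumed socket path and server name as above:

    # Put app/web1 back into rotation
    echo "enable server app/web1" | socat stdio /var/run/haproxy.stat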
As long as you're not modifying the config, there's no reason to restart, even though a restart would go almost unnoticed anyway.
If you have a budget for this then look at www.Zeus.com. The new release, out in a few weeks, offers this out of the box, but you can also use the existing release to provide service-level monitoring and then use a scripting language and API to create the auto-provisioning functionality for your back-end servers.
Free evals are available, as is a developer licence, so you can model what you are trying to achieve at no cost.
Zeus software is also available from a number of cloud providers if you want to go down that route at some point.