What is the best approach for load balancing PHP?
So I am in the process of setting up a load-balanced configuration for our web application with nginx. I would most probably go with sticky sessions to avoid session issues on the load-balanced setup, or maybe even go with a database session handler.
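For reference, a minimal sketch of the sort of nginx config I'm considering for the sticky sessions - ip_hash pins each client IP to one backend; the upstream name and backend addresses below are just placeholders:

    upstream php_backend {
        ip_hash;                    # sticky sessions: pin each client IP to one backend
        server 10.0.0.11:80;        # placeholder web server 1
        server 10.0.0.12:80;        # placeholder web server 2
    }

    server {
        listen 80;
        location / {
            proxy_pass http://php_backend;
        }
    }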
However, there are two concerns that I have right now:
1: When deploying from SVN (we use Beanstalk), it would of course deploy to one machine - how do I go about deploying across all web servers?
2: I am using S3 to store user files, but I do keep a local copy in case S3 goes down (like it did a few days ago) - what would be the best approach to sync these user files across all web servers?
Any pointers would be appreciated.
OK, so you're not going with load balancing, you're looking at load splitting?
Don't.
Done properly, load balancing means that your chances of a loss of service are reduced exponentially by the number of nodes. Say the probability of an individual node being down is 0.05 (i.e. 95% uptime), then the probability of losing both nodes is 0.05 x 0.05 = 0.0025 (99.75% uptime). OTOH, if you split the load as you suggest, then you lose 1/N of your availability whenever a node fails, and the probability of losing a node is N*0.05, so you're only getting 96.75% availability with 2 nodes.
Regarding deployments across multiple nodes, the way I used to do it was to:
1) take a node, call it node1, offline
2) apply release to node1
3) verify that the deployment was successful
4) bring node1 back online
5) take node2 offline
6) rsync from node1 to node2 (a sample command is sketched after this list)
7) run rsync again to check it has completed
8) bring node2 back online
then repeat 5-8 for each additional node
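To illustrate steps 6 and 7 roughly (assuming the release lives under a hypothetical /var/www/app and node2 is reachable over SSH - not necessarily the exact commands I used):

    # on node1: push the verified release to node2
    rsync -az --delete /var/www/app/ node2:/var/www/app/

    # run it a second time; if nothing gets transferred, the first pass completed
    rsync -az --delete --itemize-changes /var/www/app/ node2:/var/www/app/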
The method above is for deployments - for user-submitted data you need to distribute the content at the time it is submitted. I use custom scripts for this. In the event that a node is offline when the update occurs, it can be resynced (steps 6+7) before making it available again.
The scripts I used sent a request to a node asking it to copy from the originator of the request - so the request could run with a short timeout, and the source content was guaranteed to be available.
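A rough sketch of that pattern in PHP - not my actual script; sync.php, the peer hostnames and the file path are all placeholders, and each peer's sync.php is assumed to copy the named file back from the source host:

    <?php
    // run on the node that just stored the upload: tell every peer to pull it from us
    $peers = array('node2.internal', 'node3.internal');   // placeholder peer hostnames
    $file  = 'uploads/avatars/123.png';                   // path relative to the web root

    foreach ($peers as $peer) {
        $url = 'http://' . $peer . '/sync.php'
             . '?src=' . urlencode($_SERVER['SERVER_ADDR'])
             . '&path=' . urlencode($file);

        // short timeout: the peer does the actual copying, we only deliver the notification
        $ctx = stream_context_create(array('http' => array('timeout' => 2)));
        @file_get_contents($url, false, $ctx);
    }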
In terms of implementing the load balancing - although you can spend lots of money buying sophisticated hardware, I've yet to see anything which works better than round-robin for lots of reasons - not least that the failover is implemented transparently at the client.
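If you go the simple DNS round-robin route, it's just multiple A records for the same name (placeholder addresses below), and a client that can't reach one address tries the next one itself:

    ; placeholder zone snippet - two A records for the same name
    www    IN  A   203.0.113.11
    www    IN  A   203.0.113.12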
HTH
C.
With an active-active front-end setup and sticky sessions you can take one of the servers out of the rotation, wait for its sessions to clear, and upgrade that server; then switch all traffic to this first server, wait for the sessions on the second server to clear, take it out of the rotation, upgrade it and add it back to the rotation. This way you get a clean upgrade with no loss of service. If you are using shared session state you can probably skip waiting for the sessions to clear, but if this matters to you, make sure you have rehearsed the whole procedure on a test bed before doing it in production, and be very careful with upgrades that touch the session storage.
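With nginx specifically, one way to do the "out of the rotation" step is to mark the backend as down in the upstream block and reload - sketched below with placeholder addresses; reverse it once the upgrade is verified:

    upstream php_backend {
        ip_hash;
        server 10.0.0.11:80;
        server 10.0.0.12:80 down;   # temporarily out of rotation for the upgrade
    }

    # then check and reload the config: nginx -t && nginx -s reload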
In the past I've used a system that had a replicated NFS share on each of the front-end web servers, which allowed us to share data between them - something that would suit your S3 cache. I'm not sure exactly how the ISP set it up, but we never had a problem with it, even when one of the servers suffered a disk failure.
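As a sketch of the simplest variant of that idea (a single NFS export mounted on every front end - the replicated setup we had was the ISP's doing, so the host and paths below are placeholders):

    # /etc/fstab on each front-end web server (placeholder host and paths)
    nfs1.internal:/export/uploads  /var/www/app/uploads  nfs  defaults,noatime  0  0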