Scaling Rackspace Cloud servers up and down programmatically via the API

Published 2024-09-12 22:10:58

I was speaking with Rackspace tech support today looking for a simple solution to scale my server up or down based on load, and he said that it could be done programmatically through their API.

Has anyone actually done this before, or have any advice on how best to approach this? I'd love to know if someone has some outline code or notes before I dive in and write it from scratch.

Thanks!
Walker

Comments (1)

栀梦 2024-09-19 22:10:58

Walker, what I would recommend to get you started is to prepare the servers in advance and then start and stop them using scripts fired off by a monitoring solution. Once you can consistently deploy quality-approved servers in an automated way, you will still need about 15 to 20 minutes to create a server. So either way you will need the resources to be ready when you need them.
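As a rough illustration, here is a minimal sketch of the kind of provisioning script such a trigger could fire. It assumes the Rackspace Cloud Identity v2.0 token endpoint and the OpenStack-style Cloud Servers API (the `cloudServersOpenStack` entry in the service catalog); the image and flavor IDs are placeholders you would replace with your own pre-approved values, so verify the details against your own account's documentation.

```python
# A minimal sketch, assuming the Rackspace Cloud Identity v2.0 and the
# OpenStack-style Cloud Servers APIs. Image and flavor IDs are placeholders.
import requests

IDENTITY_URL = "https://identity.api.rackspacecloud.com/v2.0/tokens"

def authenticate(username, api_key):
    """Exchange account credentials for a token and the compute endpoint."""
    payload = {
        "auth": {
            "RAX-KSKEY:apiKeyCredentials": {
                "username": username,
                "apiKey": api_key,
            }
        }
    }
    resp = requests.post(IDENTITY_URL, json=payload)
    resp.raise_for_status()
    access = resp.json()["access"]
    token = access["token"]["id"]
    # Pick the Cloud Servers endpoint out of the service catalog.
    compute = next(s for s in access["serviceCatalog"]
                   if s["name"] == "cloudServersOpenStack")
    endpoint = compute["endpoints"][0]["publicURL"]
    return token, endpoint

def boot_server(token, endpoint, name, image_id, flavor_id):
    """Create a server from a pre-approved image; returns the server record."""
    body = {"server": {"name": name, "imageRef": image_id,
                       "flavorRef": flavor_id}}
    resp = requests.post(f"{endpoint}/servers", json=body,
                         headers={"X-Auth-Token": token})
    resp.raise_for_status()
    return resp.json()["server"]
```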

Once you have your arsenal of servers at your beck and call, it's time to prepare your monitoring solution. Nagios will work just fine for this task. Any monitoring solution that can respond to events with triggers, etc., will work.
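For example, a Nagios event handler is just a command Nagios runs when a check changes state, so a small script can bridge the monitoring side to your provisioning scripts. This is a hypothetical sketch: the handler script paths are made up, and it relies on the standard `$SERVICESTATE$` and `$SERVICESTATETYPE$` macros Nagios passes to event handlers.

```python
#!/usr/bin/env python
# A hypothetical Nagios event handler. Wired up in commands.cfg with e.g.:
#   command_line /usr/local/bin/scale_handler.py $SERVICESTATE$ $SERVICESTATETYPE$
import subprocess
import sys

def main():
    state, state_type = sys.argv[1], sys.argv[2]
    # Only act on confirmed (HARD) state changes, not transient flaps.
    if state_type != "HARD":
        return
    if state == "CRITICAL":
        # Utilization breached its threshold: fire the provisioning script.
        subprocess.call(["/usr/local/bin/boot_rackspace_server.py"])
    elif state == "OK":
        # Load is back to normal: fire the scale-in script.
        subprocess.call(["/usr/local/bin/stop_rackspace_server.py"])

if __name__ == "__main__":
    main()
```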

There are a few ways to scale; the key is understanding how to manage utilization.

Utilization

This is unique to each project; for us it was an aggregated measure of system load, requests per second, and IO. At the very least, consider the load average. In our scenario we wanted to understand what made our systems busier, so we worked out our own utilization measures, which we plugged into a custom monitoring solution. The utilization measure tells us when we should scale up or out.
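As an illustration only, a blended measure might look like the sketch below. The answer doesn't give its exact formula, so the weights, the request-rate normalization, and the IO input here are all assumptions; the load-average part uses Python's standard library.

```python
# A sketch of one possible aggregated utilization measure. The weights and
# the request-rate and IO inputs are assumptions for illustration; the text
# only says the measure combined system load, requests per second, and IO.
import os

def utilization(requests_per_sec, io_wait_pct,
                max_requests_per_sec=500.0,
                w_load=0.5, w_req=0.3, w_io=0.2):
    """Blend normalized load average, request rate, and IO wait into 0-100%."""
    # 1-minute load average, normalized by core count so 1.0 == fully busy.
    load = os.getloadavg()[0] / os.cpu_count()
    req = requests_per_sec / max_requests_per_sec  # normalize against capacity
    io = io_wait_pct / 100.0
    return 100.0 * (w_load * load + w_req * req + w_io * io)
```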

Scaling Up

Scaling up involves moving to a larger server to serve requests: it literally means that in order to serve requests you have to migrate to larger servers. Another way of thinking of it is that the cost of a request would be reduced if it were served on a larger server.

In my experience the need to scale up is reduced in the short term. If you consistently require a minimum-specification server to handle load, then you should see average utilization levels grow. Once the utilization levels are consistently around 60%, it's time to start scaling up.

Scaling up can be costly, so if you have peaks in load you are probably better off just adding another server to the pool; that's how scaling out works.
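When you do need to scale up, the OpenStack-style Compute API behind Rackspace's next-generation Cloud Servers has a resize action for moving a server to a larger flavor. A sketch, reusing the token and endpoint from the hypothetical `authenticate()` helper in the earlier example:

```python
# A sketch of scaling up via the Compute API's resize action. OpenStack-style
# resizes must be confirmed once the server reaches VERIFY_RESIZE status.
import requests

def resize_server(token, endpoint, server_id, bigger_flavor_id):
    """Request a move to a larger flavor."""
    resp = requests.post(f"{endpoint}/servers/{server_id}/action",
                         json={"resize": {"flavorRef": bigger_flavor_id}},
                         headers={"X-Auth-Token": token})
    resp.raise_for_status()

def confirm_resize(token, endpoint, server_id):
    """Confirm the resize after the server reports VERIFY_RESIZE."""
    resp = requests.post(f"{endpoint}/servers/{server_id}/action",
                         json={"confirmResize": None},
                         headers={"X-Auth-Token": token})
    resp.raise_for_status()
```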

Scaling Out

For most projects, scaling out is more common in the short term. The process involves adding more hosts to an environment and distributing requests using a load balancer. When utilization levels reach 60% or more, a trigger in your monitoring solution fires a request that starts a host. When load returns to the median, the monitoring solution switches servers off. It should be automatic, and as servers are switched off the utilization of the remaining hosts should rise back toward the median. We try to maintain 40% utilization as a median for the environment.
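The thresholds above translate into a simple hysteresis rule: scale out at 60%, scale back in only once the pool is under its 40% median, and never below your minimum viable environment. A sketch of that decision logic (the constants mirror the figures in the text; `MIN_HOSTS` is an assumption):

```python
# Scale-out/scale-in decision with the thresholds from the text. The gap
# between the two thresholds is the hysteresis that stops the pool flapping.
SCALE_OUT_AT = 60.0   # % utilization that should trigger adding a host
SCALE_IN_AT = 40.0    # % utilization the environment should idle around
MIN_HOSTS = 2         # never shrink below the minimum viable environment

def plan_action(avg_utilization, active_hosts):
    """Return 'out', 'in', or None given the pool's average utilization."""
    if avg_utilization >= SCALE_OUT_AT:
        return "out"
    if avg_utilization < SCALE_IN_AT and active_hosts > MIN_HOSTS:
        return "in"
    return None
```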

The complexity is in automating the configuration of your load balancer so that it sees the new hosts. I know of people who simply preconfigure the balancer with every potential node and rely on its health checks, even while a server is switched off: the load balancer will not serve traffic to a dead host. When a server starts up, the load balancer should see it again and begin automatically serving requests to it.
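If you would rather register nodes dynamically than preconfigure them, Rackspace's Cloud Load Balancers API accepts node additions over HTTP. A hedged sketch, assuming the `cloudLoadBalancers` endpoint from your service catalog and the new server's private (ServiceNet) IP:

```python
# A sketch of registering a freshly booted host with a Rackspace Cloud Load
# Balancer. The load balancer ID and the node's private IP are placeholders.
import requests

def add_node(token, lb_endpoint, lb_id, private_ip, port=80):
    """Attach a new backend node to the load balancer's pool."""
    body = {"nodes": [{"address": private_ip,
                       "port": port,
                       "condition": "ENABLED"}]}
    resp = requests.post(f"{lb_endpoint}/loadbalancers/{lb_id}/nodes",
                         json=body,
                         headers={"X-Auth-Token": token})
    resp.raise_for_status()
    return resp.json()["nodes"]
```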

Final Solution

Deploy a minimum viable environment and set up monitoring to watch your own utilization levels. Create triggers that start servers in your chosen environment. The triggers should execute a request that fires a call to Rackspace and starts a server. This is a good start.

Hope this has been helpful for you, and that you go on to build a successful environment.
