Best Auto Scaling group configuration for a Rails application deployed with NGINX and Puma

Posted on 2025-01-14 05:54:03

I am using an Amazon Auto Scaling group for a Rails application deployed on EC2 instances using NGINX and Puma. I am facing some challenges configuring the Auto Scaling policy.

I am using an r5.xlarge for the main instance, which also hosts my cron jobs, and r5.large for the autoscaled instances. My current scaling trigger is defined at 50% CPU, but that does not work, for the following reasons:

  1. Since the main instance has 4 CPUs, overall consumption does not hit 50% unless a cron job is running that consumes all resources.
  2. Even if CPU does hit 50%, the Rails application takes 30-40 seconds to start, and in the meantime all requests received by the server return 503.
  3. If CPU consumption is below 50% but the system receives a lot of concurrent requests, it does not start a new instance, and either starts returning 503 or the response time increases significantly.

I have tried switching the Auto Scaling group trigger from CPU consumption to request count, but the instance startup-time issue still prevails, and sometimes a new instance is started when it is not even needed.
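For reference, this is roughly what the request-count target-tracking policy I tried looks like when created through the Ruby AWS SDK. It is only a sketch: the region, Auto Scaling group name, resource label, target value, and warmup value are placeholders, not the exact values from my setup.

    require "aws-sdk-autoscaling"  # gem "aws-sdk-autoscaling"

    client = Aws::AutoScaling::Client.new(region: "us-east-1")  # placeholder region

    # Target-tracking policy on requests per target instead of CPU.
    # All names and numbers below are illustrative placeholders.
    client.put_scaling_policy(
      auto_scaling_group_name: "rails-app-asg",
      policy_name: "scale-on-request-count",
      policy_type: "TargetTrackingScaling",
      estimated_instance_warmup: 60,  # headroom for the 30-40 s Rails boot time
      target_tracking_configuration: {
        predefined_metric_specification: {
          predefined_metric_type: "ALBRequestCountPerTarget",
          # Resource label format: app/<lb-name>/<lb-id>/targetgroup/<tg-name>/<tg-id>
          resource_label: "app/my-alb/1234567890abcdef/targetgroup/my-tg/abcdef1234567890"
        },
        target_value: 500.0  # average requests per target (placeholder value)
      }
    )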

Have you ever faced any such issues with a Rails deployment? Is there anything that you think worked for you out of the box?

Comments (1)

欢烬 2025-01-21 05:54:03

We are running a Ruby application with Puma in ECS tasks, but the problems should be much the same as with EC2.

Since Ruby (MRI) is effectively single-threaded for CPU-bound work, the Ruby process running your Puma server is only going to use one CPU at a time. If you have 4 CPUs, I imagine one Puma process will never manage to saturate more than 25% of the overall machine.

Note: Also have a look at your configuration for the number of Puma threads. This is critical to configure: since you are doing auto-scaling, your application needs to be able to saturate the CPU it is using for the trigger to kick in. With too few Puma threads that will not be the case; with too many, your application will become unstable. This is something to fine-tune.
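To make the note concrete, here is a minimal config/puma.rb sketch for the thread settings. The environment variable names and values are the conventional Rails defaults, assumed rather than taken from your setup.

    # config/puma.rb -- thread settings only; values are illustrative.
    max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
    min_threads = Integer(ENV.fetch("RAILS_MIN_THREADS", max_threads))

    # Each Puma process serves requests on this many threads. Too few and the
    # process cannot saturate its CPU (so a CPU-based trigger never fires);
    # too many and the app becomes unstable, as noted above.
    threads min_threads, max_threads

    port ENV.fetch("PORT", 3000)
    environment ENV.fetch("RAILS_ENV", "production")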

Recommendations:

  1. Run one Puma process per CPU available on the EC2 instance class you have chosen, with each Puma server listening on a different port, and have your load balancer manage that. This should allow your machine to reach 100% CPU under saturation (in theory), allowing CPU-based auto-scaling to work (see the sketch after this list).
  2. Preferred solution: pick smaller machines with 1 CPU, so you only need to run one Puma server per machine.
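A sketch of recommendation 1, assuming each Puma instance reads its port from a PUMA_PORT variable (an assumed name, not something you already have) and NGINX or the load balancer is pointed at all of the ports. Puma's built-in clustered mode (workers) is a common alternative that reaches the same goal behind a single port.

    # config/puma.rb -- one single-mode Puma process per CPU, each on its own port.
    # Launch one copy per CPU, e.g.:
    #   PUMA_PORT=3001 bundle exec puma -C config/puma.rb
    #   ... up to PUMA_PORT=3004 on a 4-CPU machine,
    # then list 127.0.0.1:3001-3004 in the NGINX upstream / target group.
    threads 5, 5                                # illustrative thread count
    port Integer(ENV.fetch("PUMA_PORT", 3000))  # PUMA_PORT is an assumed variable name
    environment "production"

    # Alternative: Puma clustered mode, one worker process per CPU on a single port.
    # workers 4
    # preload_app!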

In my experience with ECS, Ruby and other single-threaded languages should not use machines with more than 1 vCPU; instead, you should rely on heavy horizontal scaling if necessary (some of our services are running 50 ECS instances).

Hope this helps.
