Deferring blocking Rails requests

Posted 2024-11-15 23:19:18

I found a question that explains how Play Framework's await() mechanism works in 1.2. Essentially, if you need to do something that will block for a measurable amount of time (e.g. make a slow external HTTP request), you can suspend your request and free up that worker to handle a different request while yours blocks. I am guessing that once your blocking operation is finished, your request gets rescheduled for continued processing. This is different from scheduling the work on a background processor and then having the browser poll for completion; I want to block the browser but not the worker process.

Regardless of whether or not my assumptions about Play are true to the letter, is there a technique for doing this in a Rails application? I guess one could consider this a form of long polling, but I didn't find much advice on that subject other than "use node".


Comments (1)

人疚 2024-11-22 23:19:18


I had a similar problem with long requests that block a worker from taking other requests. It's a problem for all web applications. Even Node.js may not solve the problem of a single task consuming too much of a worker's time, and it can simply run out of memory.

A web application I worked on has a web interface that sends requests to a Rails REST API; the Rails controller then has to call a Node REST API that runs a heavy, time-consuming task to get some data back. A request from Rails to Node.js could take 2-3 minutes.

We are still exploring different approaches, but maybe the following could work for you, or you can adapt some of the ideas; I would love to get some feedback too:

  1. The frontend makes a request to the Rails API with a generated identifier [A] within the same session. (This identifier helps identify previous requests from the same user session.)
  2. The Rails API proxies the frontend request and the identifier [A] to the Node.js service.
  3. The Node.js service adds this job to a queue system (e.g. RabbitMQ or Redis); the message contains the identifier [A]. (Here you should decide based on your own scenario; this also assumes some system will consume the queued job and save the results.)
  4. If the same request is sent again, then depending on your requirements you can kill the current job with the same identifier [A] and queue the latest request, ignore the latest request while the first one is still in flight, or make whatever other decision fits your business needs.
  5. The frontend can poll with interval REST requests to check whether the data processing for identifier [A] has completed; these requests are lightweight and fast.
  6. Once Node.js completes the job, you can either use a message subscription system or wait for the next status-check request, and return the result to the frontend.
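A minimal sketch of steps 1-6, with the queue/result store (Redis, RabbitMQ) and the Node worker replaced by an in-process Hash and Thread so it runs standalone; the method names, the duplicate-handling choice (reuse the in-flight job, per step 4), and the fake "slow" work are all assumptions:

```ruby
require 'securerandom'

# In-memory stand-ins for the queue and result store (Redis, RabbitMQ, etc.).
JOBS = {}                      # identifier => :queued | :running | :done
RESULTS = {}                   # identifier => computed data
MUTEX = Mutex.new

# Steps 1-3: accept a request tagged with identifier [A] and enqueue the job.
# A duplicate identifier just reuses the job already in flight (one of the
# options in step 4).
def enqueue(identifier, payload)
  MUTEX.synchronize do
    return identifier if JOBS.key?(identifier)   # duplicate request: ignore
    JOBS[identifier] = :queued
  end
  Thread.new do                                  # stand-in for the Node worker
    MUTEX.synchronize { JOBS[identifier] = :running }
    result = payload.upcase                      # pretend this takes minutes
    MUTEX.synchronize do
      RESULTS[identifier] = result
      JOBS[identifier] = :done
    end
  end
  identifier
end

# Steps 5-6: the lightweight status check the frontend polls on an interval.
def check_status(identifier)
  MUTEX.synchronize do
    case JOBS[identifier]
    when :done then { status: 'done', result: RESULTS[identifier] }
    when nil   then { status: 'unknown' }
    else            { status: 'pending' }
    end
  end
end

id = enqueue(SecureRandom.uuid, 'slow data')
sleep 0.1 until check_status(id)[:status] == 'done'
puts check_status(id)[:result]                   # prints "SLOW DATA"
```

In a real deployment the two methods would be two Rails controller actions, and the status check would read the job state from the shared store rather than process memory.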

You can also use a load balancer, e.g. Amazon's load balancer or HAProxy. 37signals has a blog post and video about using HAProxy to offload long-running requests so they do not block shorter ones.
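A hypothetical HAProxy fragment in that spirit, routing known slow paths to a small dedicated backend so they cannot occupy the workers serving fast requests (all paths, ports, and backend names are made up for illustration):

```
frontend web
    bind *:80
    acl is_slow path_beg /reports /exports   # assumed slow endpoints
    use_backend slow_workers if is_slow
    default_backend fast_workers

backend fast_workers
    server app1 127.0.0.1:3000 maxconn 32

backend slow_workers
    timeout server 180s                      # allow long responses here only
    server app2 127.0.0.1:3001 maxconn 4
```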

GitHub uses a similar strategy to handle the long requests that generate commit/contribution visualisations. They also set a limit on polling time: if it takes too long, GitHub displays a message saying it took too long and has been cancelled.

YouTube has a nice message for longer queued tasks: "This is taking longer than expected. Your video has been queued and will be processed as soon as possible."

I think this is just one solution. You can also take a look at the EventMachine gem, which helps improve performance and handle parallel or asynchronous requests.
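EventMachine's EM.defer pushes blocking work onto a thread pool and runs a callback back on the reactor thread when it finishes. The sketch below only approximates that shape with the standard library (a worker Thread and a Queue standing in for the reactor), since the gem itself is not assumed here; the method names are made up:

```ruby
# Stdlib-only approximation of EventMachine's EM.defer(operation, callback):
# the blocking operation runs on a worker thread, and its callback is handed
# back to the "reactor" loop through a queue.
CALLBACK_QUEUE = Queue.new

def defer(operation, callback)
  Thread.new do
    result = operation.call              # blocking work off the main thread
    CALLBACK_QUEUE << [callback, result]
  end
end

# Minimal reactor loop: drain callbacks until `expected` of them have run.
def run_reactor(expected)
  expected.times do
    callback, result = CALLBACK_QUEUE.pop   # blocks, but only the reactor
    callback.call(result)
  end
end

slow_fetch = -> { sleep 0.1; 'payload' }    # stands in for a slow HTTP call
defer(slow_fetch, ->(data) { puts "got #{data}" })
run_reactor(1)                              # prints "got payload"
```

The point of the pattern is that the thread accepting requests never sleeps inside the slow call; it only dispatches work and later consumes results.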

Since this kind of problem may involve one or more services, think about the possibility of improving performance between those services (e.g. database, network, message protocol, etc.). If caching may help, try caching frequent requests or pre-calculating results.
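As a small illustration of the caching suggestion, here is a memoized fetch in the style of Rails.cache.fetch, backed by a plain Hash (the cache key and the "expensive" block are made-up examples):

```ruby
# Memoized fetch in the spirit of Rails.cache.fetch: the expensive block
# runs only on a cache miss; repeated requests for the same key are served
# from the cache.
CACHE = {}
CALLS = Hash.new(0)   # counts how often the expensive work actually ran

def fetch(key)
  return CACHE[key] if CACHE.key?(key)
  CALLS[key] += 1
  CACHE[key] = yield          # expensive computation happens once per key
end

3.times { fetch('report:42') { sleep 0.05; 'rendered report' } }
puts CALLS['report:42']       # prints 1 (computed once, then cached)
```

For pre-calculated results, the same shape applies with the write moved to a background job that warms the cache before any request asks for the key.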
