Getting the result of a long-running task with RabbitMQ

Published on 2024-11-04 19:17:17

I have a scenario where a client sends an HTTP request to download a file. The file needs to be dynamically generated and typically takes 5-15 seconds. Therefore I am looking into a solution that splits this operation into 3 HTTP requests (a rough client-side sketch follows the list below).

  1. The first request triggers the generation of the file.
  2. The client polls the server every 5 seconds to check whether the file is ready to download.
  3. When the response to the poll request is positive, the client starts downloading the file.
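
A minimal client-side sketch of that three-request flow, assuming Python with the requests library; the base URL, the request_uid value, and the JSON shape of the /poll response are placeholders, not something the server already defines:

```python
import time
import requests

BASE_URL = "http://frontend.example.com"  # placeholder front-end server
REQUEST_UID = "AAA"                       # placeholder identifier for this job

# 1. Trigger generation of the file.
requests.get(f"{BASE_URL}/generate", params={"request_uid": REQUEST_UID})

# 2. Poll every 5 seconds until the server reports the file is ready.
while True:
    status = requests.get(f"{BASE_URL}/poll", params={"request_uid": REQUEST_UID})
    if status.json().get("ready"):  # assumed response shape: {"ready": true/false}
        break
    time.sleep(5)

# 3. Download the finished file.
download = requests.get(f"{BASE_URL}/download", params={"request_uid": REQUEST_UID})
with open("result.bin", "wb") as f:
    f.write(download.content)
```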

To implement this I am looking into message queue solutions like RabbitMQ. They seem to provide a reliable framework to run long-running tasks asynchronously. However, after reading the tutorials on RabbitMQ, I am not sure how I will receive the result of the operation.

Here is what I have in mind:

A front end server receives requests from clients and posts messages to RabbitMQ as required. This front end server will have 3 endpoints:

/generate
/poll
/download

When the client invokes /generate with a GET parameter, say request_uid=AAA, the front end server will post a message to RabbitMQ with the request_uid in the payload. Any free worker will subsequently receive this message and start generating the file corresponding to AAA.
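
As a sketch of how that publish step might look, assuming a Flask front end and the pika client, with the queue name file_generation and a local broker as illustrative choices rather than anything fixed by RabbitMQ:

```python
import json

import pika
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/generate")
def generate():
    request_uid = request.args.get("request_uid")  # e.g. request_uid=AAA

    # Publish a task message; any idle worker consuming this queue will pick it up.
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="file_generation", durable=True)  # assumed queue name
    channel.basic_publish(
        exchange="",
        routing_key="file_generation",
        body=json.dumps({"request_uid": request_uid}),
        properties=pika.BasicProperties(delivery_mode=2),  # make the message persistent
    )
    connection.close()

    return jsonify({"status": "queued", "request_uid": request_uid})
```

Opening a connection per request keeps the sketch simple; a real front end would typically reuse a connection or a channel pool.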

The client will keep polling /poll with request_uid=AAA to check whether the task is complete.

When the task is complete, the client will call /download with request_uid=AAA, expecting to download the file.

The question is: how will the /poll and /download handlers of the front end server come to know about the status of the file generation job? How can RabbitMQ communicate the result of the task back to the producer? Or do I have to implement such a mechanism outside RabbitMQ? (Consumer putting its results in a file /var/completed/AAA)

Comments (2)

作妖 2024-11-11 19:17:17

The easiest way to get started with AMQP is to use a topic exchange and to create queues which carry control messages. For instance, you could have a file.ready queue and send messages with the file pathname when it is ready to pick up, and a file.error queue to report when you were unable to create a file for some reason. Then the client could use a file.generate queue to send the GET information to the server.
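
A minimal sketch of the worker side of that layout, assuming the pika client, a topic exchange named files, and the file.generate / file.ready / file.error routing keys the answer describes; build_file is a hypothetical stand-in for the real generation step:

```python
import json

import pika

def build_file(request_uid):
    # Hypothetical stand-in: write the generated file and return its path.
    path = f"/tmp/{request_uid}.bin"
    with open(path, "wb") as f:
        f.write(b"generated content")
    return path

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# One topic exchange carries all control messages.
channel.exchange_declare(exchange="files", exchange_type="topic", durable=True)

# Queue the workers consume generation requests from.
channel.queue_declare(queue="file.generate", durable=True)
channel.queue_bind(queue="file.generate", exchange="files", routing_key="file.generate")

def handle_generate(ch, method, properties, body):
    task = json.loads(body)
    try:
        path = build_file(task["request_uid"])
        ch.basic_publish(exchange="files", routing_key="file.ready",
                         body=json.dumps({"request_uid": task["request_uid"], "path": path}))
    except Exception as exc:
        ch.basic_publish(exchange="files", routing_key="file.error",
                         body=json.dumps({"request_uid": task["request_uid"], "error": str(exc)}))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="file.generate", on_message_callback=handle_generate)
channel.start_consuming()
```

The front end server would bind its own queue to file.ready and file.error on the same exchange and update its record of each request_uid as those messages arrive.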

欢烬 2024-11-11 19:17:17

You hit the nail on the head with your last line:

(Consumer putting its results in a file /var/completed/AAA)

Your server has to coordinate multiple jobs and the results of their work. Therefore you will need some form of "master repository" which contains an authoritative record of what has been finished already. Copying completed files into a special directory is a reasonable and simple way of doing exactly that.
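
A minimal sketch of how the front end's /poll and /download handlers could rely on that directory, assuming Flask (which the answer does not prescribe) and workers that copy each finished file to /var/completed/<request_uid>:

```python
import os

from flask import Flask, abort, jsonify, request, send_file

app = Flask(__name__)
COMPLETED_DIR = "/var/completed"  # directory the workers copy finished files into

def completed_path(request_uid):
    # In real code, validate request_uid first to prevent path traversal.
    return os.path.join(COMPLETED_DIR, request_uid)

@app.route("/poll")
def poll():
    request_uid = request.args.get("request_uid")
    # The file's presence in the directory is the authoritative "done" record.
    return jsonify({"ready": os.path.exists(completed_path(request_uid))})

@app.route("/download")
def download():
    request_uid = request.args.get("request_uid")
    path = completed_path(request_uid)
    if not os.path.exists(path):
        abort(404)  # not generated yet, or the uid is unknown
    return send_file(path, as_attachment=True)
```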

It doesn't necessarily need RabbitMQ or any messaging solution, either. Your server can farm out jobs to those workers any way it wishes: by spawning processes, using a thread pool, or indeed by producing AMQP events which end up in a broker and get sucked down by "worker" queue consumers. It's up to your application and what is most appropriate for it.
