Connecting the HTTP request/response model with an asynchronous queue

Posted 2024-07-12 00:13:22 · 810 characters · 4 views · 0 comments


What's a good way to connect the synchronous http request/response model with an asynchronous queue based model?

When the user's HTTP request comes in, it generates a work request that goes onto a queue (beanstalkd in this case). One of the workers picks up the request, does the work, and prepares a response.

The queue model is not request/response - there are only requests, not responses. So the question is, how best do we get the response back into the world of HTTP and back to the user?

Ideas:

  1. Beanstalkd supports lightweight topics or queues (they call them tubes). We could create a tube for each request, have the worker create a message on that tube, and have the http process sit and wait on the tube for the response. I don't particularly like this one, since it leaves Apache processes sitting around taking memory.

  2. Have the http client poll for the response. The user's initial HTTP request kicks off the job on the queue and returns immediately. The client (the user's browser) polls periodically for a response. On the backend, the worker puts its response into memcached, and we connect nginx to memcached so the polling is lightweight.

  3. Use Comet. Similar to the second option, but with fancier http communication to avoid polling.

I'm leaning towards 2 since it's easy and well known (I haven't used comet yet). I'm guessing there's probably also a much better, obvious model I haven't thought of. What do you think?
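For what it's worth, option 2 can be sketched in a few lines. This is a minimal in-process sketch, not the real infrastructure: a plain dict stands in for memcached, a list stands in for the beanstalkd tube, and all names are illustrative:

```python
import uuid

# Stand-ins for the real infrastructure (illustrative only):
# in production the store would be memcached and the queue beanstalkd.
result_store = {}   # job_id -> response body
job_queue = []      # list of (job_id, payload)

def handle_http_request(payload):
    """Initial request: enqueue the job and return a job id immediately."""
    job_id = uuid.uuid4().hex
    job_queue.append((job_id, payload))
    return {"status": 202, "job_id": job_id}

def worker_tick():
    """A worker picks up one job, does the work, writes the response."""
    job_id, payload = job_queue.pop(0)
    result_store[job_id] = payload.upper()  # placeholder "work"

def handle_poll(job_id):
    """Polling endpoint: a lightweight read from the result store."""
    if job_id in result_store:
        return {"status": 200, "body": result_store[job_id]}
    return {"status": 202, "body": None}  # not ready yet; poll again
```

In the real setup the worker would write to memcached with the job id as the key, and nginx (e.g. via a memcached module) could answer the poll requests without ever touching the application servers.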


Comments (3)

执手闯天涯 2024-07-19 00:13:23


Here's how to implement request-response efficiently on JMS, which might be helpful (though it is Java/JMS-centric). The general idea is to create a temporary queue per client/thread, then use correlation IDs to correlate requests with replies.
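The correlation-ID pattern described there can be sketched like this (an illustrative in-process sketch using Python queues in place of JMS queues; all names are made up):

```python
import queue
import uuid

# One shared request queue; per-request reply queues keyed by correlation
# id, the same shape as a temporary JMS reply queue (illustrative only).
requests = queue.Queue()
replies = {}

def send_request(payload):
    """Client side: attach a correlation id and a private reply queue."""
    corr_id = uuid.uuid4().hex
    replies[corr_id] = queue.Queue(maxsize=1)
    requests.put((corr_id, payload))
    return corr_id

def serve_one():
    """Worker side: route the reply back via the correlation id."""
    corr_id, payload = requests.get()
    replies[corr_id].put(payload[::-1])  # placeholder "work"

def wait_for_reply(corr_id, timeout=1.0):
    """Client blocks on its own reply queue until the response arrives."""
    return replies.pop(corr_id).get(timeout=timeout)
```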

滿滿的愛 2024-07-19 00:13:23


Polling is the simple solution; comet is the more efficient solution. You've got it nailed :)

I personally love comet (although I'm biased, since I helped write WebSync); it nicely lets your clients subscribe to a channel and get the message when your server process is ready. Works like a champ.
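Stripped of the transport details, the long-poll/comet idea is just a handler that parks until the server process publishes. A rough sketch (illustrative names only; a `threading.Event` stands in for the real channel machinery a library like WebSync manages for you):

```python
import threading

# Per-request events and results; a comet-style handler parks on the
# event instead of returning immediately (in-process sketch only).
events = {}
results = {}

def register(request_id):
    """Called when the job is kicked off, before the client long-polls."""
    events[request_id] = threading.Event()

def publish(request_id, body):
    """Server process: deliver the message and wake the waiting handler."""
    results[request_id] = body
    events[request_id].set()

def long_poll(request_id, timeout=5.0):
    """HTTP handler: hold the connection open until the result is ready."""
    if events[request_id].wait(timeout):
        return {"status": 200, "body": results.pop(request_id)}
    return {"status": 204, "body": None}  # timed out; client reconnects
```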

月棠 2024-07-19 00:13:23


I'm looking to implement a Beanstalkd and memcached system to run a number of processes following a request - in this case, looking up information when a user logs in (the number of messages a user has waiting, for example). The info is stored in Memcached and then read back on the next page load.

Without knowing a bit more about what tasks you are doing, though, it's not easy to say what needs to be done, or how. Option #2 is however the simplest, and that may be all you need - depending on what you are pushing back from the workers.
