Use Post or PostAndAsyncReply with F#'s MailboxProcessor?
I've seen different snippets demonstrating a Put message that returns unit with F#'s MailboxProcessor. In some, only the Post method is used, while others use PostAndAsyncReply, with the reply channel immediately replying once the message is being processed. In doing some testing, I found a significant time lag when awaiting the reply, so it seems that unless you need a real reply, you should use Post.
Note: I started asking this in another thread but thought it useful to post as a full question. In the other thread, Tomas Petricek mentioned that the reply channel could be used as a wait mechanism to ensure the caller is delayed until the Put message has been processed.
Does using PostAndAsyncReply help with message ordering, or does it just force a pause until the first message is processed? In terms of performance, Post appears to be the right solution. Is that accurate?
Update:

I just thought of a reason why PostAndAsyncReply might be necessary in the BlockingQueueAgent example: Scan is used to find Get messages when the queue is full, so you don't want to Put and then Get before the previous Put has completed.
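For reference, a rough sketch of that pattern is below (loosely modelled on the BlockingQueueAgent, with simplified names; not the original implementation). The Put message carries a reply channel that is answered only once the item is actually enqueued, and while the queue is full the agent uses Scan to serve only Get messages, so a pending Put waits in the mailbox until space is available.

```fsharp
open System.Collections.Generic

// Sketch only: a bounded queue agent in the spirit of the BlockingQueueAgent.
type Message<'T> =
    | Put of 'T * AsyncReplyChannel<unit>
    | Get of AsyncReplyChannel<'T>

type BlockingQueue<'T>(maxLength: int) =
    let agent = MailboxProcessor<Message<'T>>.Start(fun inbox ->
        let queue = Queue<'T>()
        let rec empty () =
            // When empty, only Put messages are handled; Gets wait in the mailbox.
            inbox.Scan(function
                | Put (x, reply) -> Some(enqueue x reply)
                | _ -> None)
        and full () =
            // When full, only Get messages are handled; Puts wait in the mailbox,
            // which is why their reply channel is a useful wait mechanism.
            inbox.Scan(function
                | Get reply -> Some(dequeue reply)
                | _ -> None)
        and running () = async {
            let! msg = inbox.Receive()
            match msg with
            | Put (x, reply) -> return! enqueue x reply
            | Get reply -> return! dequeue reply }
        and enqueue x reply = async {
            queue.Enqueue x
            reply.Reply()            // unblocks a producer awaiting AsyncPut
            return! chooseState () }
        and dequeue reply = async {
            reply.Reply(queue.Dequeue())
            return! chooseState () }
        and chooseState () =
            if queue.Count = 0 then empty ()
            elif queue.Count = maxLength then full ()
            else running ()
        empty ())

    /// Completes only once the item has actually been added to the queue.
    member this.AsyncPut(x) = agent.PostAndAsyncReply(fun ch -> Put(x, ch))
    /// Completes once an item is available and has been removed.
    member this.AsyncGet() = agent.PostAndAsyncReply(Get)
```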
Answers (2)
My advice is to design your system so you can use Post as much as possible. This technology was designed for asynchronous concurrency where the objective is to fire-and-forget messages. The idea of waiting for a response goes directly against the grain of this.
I think I generally agree with your summary - it makes sense that PostAndAsyncReply is slower than Post, so if the caller doesn't need to get a notification from the agent when the operation (such as putting a value into the queue) completes, the agent should definitely expose a way to do that using just Post. The fact that PostAndAsyncReply is a lot slower probably means that some agents should expose both options and let the caller decide.
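A sketch of what exposing both options might look like (illustrative names; not the actual BlockingQueueAgent members): the same message can be posted fire-and-forget with Post or acknowledged via PostAndAsyncReply, and the caller picks whichever it needs.

```fsharp
// Sketch only: one agent, two posting styles, caller decides.
type EnqueueMessage<'T> =
    | Enqueue of 'T * AsyncReplyChannel<unit> option

type SimpleQueueAgent<'T>() =
    let items = System.Collections.Generic.Queue<'T>()
    let agent = MailboxProcessor.Start(fun inbox -> async {
        while true do
            let! (Enqueue (x, reply)) = inbox.Receive()
            items.Enqueue x
            // Acknowledge only when the caller asked to be notified.
            reply |> Option.iter (fun ch -> ch.Reply()) })

    /// Cheapest option: post and return immediately.
    member this.Put(x) = agent.Post(Enqueue(x, None))
    /// Completes only after the agent has processed this particular message.
    member this.AsyncPut(x) = agent.PostAndAsyncReply(fun ch -> Enqueue(x, Some ch))
```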
Regarding the specific example of BlockingQueueAgent (or a similar one that I used to implement a one-place buffer), the typical application of the agent is to solve the consumer-producer problem. In the consumer-producer problem, we want to block the producer when the queue is full and block the consumer when it is empty. The .NET BlockingCollection supports only synchronous blocking, which is a bit bad (i.e. it can block the whole thread pool).

Using the BlockingQueueAgent that sends the Put message using PostAndAsyncReply, we can wait until the element has been added to the queue asynchronously (so it blocks the producer, but without blocking any threads!). An example of typical usage is the image processing pipeline that I wrote some time ago. Here is one snippet from that:
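A rough sketch of such a loop (not the original snippet), reusing the BlockingQueue sketch from earlier; scaleImage stands in for the real processing step.

```fsharp
// Sketch only: one stage of the pipeline, connected by two blocking queues.
let scalingStage (loadedImages: BlockingQueue<'a>)
                 (scaledImages: BlockingQueue<'b>)
                 (scaleImage: 'a -> 'b) =
    async {
        while true do
            // Waits (without blocking a thread) when loadedImages is empty...
            let! image = loadedImages.AsyncGet()
            let resized = scaleImage image
            // ...and when scaledImages is full, so earlier stages cannot
            // run arbitrarily far ahead of this one.
            do! scaledImages.AsyncPut(resized)
    }
```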
This loop repeatedly gets an image from the loadedImages queue, does some processing, and writes the result to scaledImages. The blocking provided by the queue (both when reading and when writing) controls the parallelism, so that the steps of the pipeline run in parallel, but it does not keep loading more and more images if the pipeline cannot handle them at the required speed.