Azure Service Bus: using filters to assemble one large message broken into smaller messages
I'm trying to find a solution for receiving large messages on Azure Service Bus. The essential pattern I'm considering is to publish a large message in parts -- along with a correlation id, a page, and an "of".
So if I have a four-part message, they would all have the same correlation id, each would have an "of" of 4, and the page would be 0 - 3. The set would be published as a batch.
The listener could listen only for messages with a page of 0, and then pull the remaining messages by correlation id.
Publishing these messages is easy enough. ServiceBusMessage has a CorrelationId field, and a dictionary field called ApplicationProperties that I can add my custom "page" and "of" fields to. I can assemble them into a ServiceBusMessageBatch before publishing.
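Roughly, the publishing side could look like this (a sketch using Azure.Messaging.ServiceBus; the part splitting and property names are just illustrative):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// Sketch: send one logical payload as several parts in a single batch.
// All parts share a CorrelationId; "page"/"of" are custom application properties.
public static async Task PublishInPartsAsync(ServiceBusSender sender, IReadOnlyList<string> parts)
{
    string correlationId = Guid.NewGuid().ToString();

    using ServiceBusMessageBatch batch = await sender.CreateMessageBatchAsync();

    for (int page = 0; page < parts.Count; page++)
    {
        var message = new ServiceBusMessage(parts[page]) { CorrelationId = correlationId };
        message.ApplicationProperties["page"] = page;       // 0-based part index
        message.ApplicationProperties["of"] = parts.Count;  // total number of parts

        if (!batch.TryAddMessage(message))
            throw new InvalidOperationException($"Part {page} does not fit in the batch.");
    }

    await sender.SendMessagesAsync(batch);
}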
What I'm not sure about is how to receive the messages. I'm using Function Apps, so it's easy to set up a listener.
[FunctionName("GeneralLogger")]
public static void Run([ServiceBusTrigger("queueName", Connection = "AzureWebJobsServiceBus")] string myQueueItem, ApplicationProperties ap, ILogger log)
{ /// process message }
But I don't see how to filter here. Also, I can pull messages by adding a handler to the message processor, as described here: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dotnet-get-started-with-queues, but likewise I don't see how to filter.
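For reference, the processor setup from that walkthrough looks roughly like this (a sketch; the queue name and connection string are placeholders). Every message on the queue lands in the handler, which is why I don't see where a filter would go:

using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// Sketch of a ServiceBusProcessor: all messages arrive at ProcessMessageAsync,
// so any "page"/"of" handling would have to happen inside the handler.
public static async Task RunProcessorAsync(string connectionString)
{
    await using var client = new ServiceBusClient(connectionString);
    await using ServiceBusProcessor processor = client.CreateProcessor("queueName");

    processor.ProcessMessageAsync += async args =>
    {
        int page = Convert.ToInt32(args.Message.ApplicationProperties["page"]);
        int of = Convert.ToInt32(args.Message.ApplicationProperties["of"]);
        Console.WriteLine($"{args.Message.CorrelationId}: part {page + 1} of {of}");

        await args.CompleteMessageAsync(args.Message);
    };

    processor.ProcessErrorAsync += args =>
    {
        Console.WriteLine(args.Exception.ToString());
        return Task.CompletedTask;
    };

    await processor.StartProcessingAsync();
    Console.ReadLine();
    await processor.StopProcessingAsync();
}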
The only Azure Service Bus filtering I see how to do is between a topic and a subscription. There is a lot of capability there, but nothing I can set dynamically at runtime.
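What I mean by topic/subscription filtering is a rule along these lines (a sketch using Azure.Messaging.ServiceBus.Administration; the topic, subscription, and rule names are placeholders):

using System.Threading.Tasks;
using Azure.Messaging.ServiceBus.Administration;

// Sketch: a SQL rule on a subscription so that only "page 0" messages are delivered to it.
// (In practice you would also remove the $Default TrueFilter rule on that subscription.)
public static async Task AddPageZeroRuleAsync(string connectionString)
{
    var admin = new ServiceBusAdministrationClient(connectionString);

    await admin.CreateRuleAsync(
        "largeMessageTopic",
        "firstPageSubscription",
        new CreateRuleOptions("PageZeroOnly", new SqlRuleFilter("page = 0")));
}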
I feel like I'm either trying to misuse something or reinventing the wheel. Is anyone else doing something like this with Azure Service Bus?
Comments (2)
A solution is already there: the Azure Service Bus premium tier, which is capable of sending messages up to 100MB in size. It comes with a price. Assuming you're looking to split up the file either because the premium tier is too much to pay for or because messages could be larger than 100MB, the claim-check pattern is the way to go. There's just one issue when the claim-check pattern is used instead of the premium tier - you cannot have deterministic clean-up when a message is an event and there are multiple receivers. You'd need to come up with some policy to clean up those blobs; given that they are large blobs, they will quickly add to storage consumption over time, depending on the number of messages flowing through the system. With the premium tier, the clean-up problem doesn't exist, nor do you have to provide a storage account. Therefore, if your large messages will not exceed 100MB, it could be a more suitable solution for your production environment.
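A rough sketch of the claim-check sending side (the container, blob naming, and the "payload-blob-uri" property name are illustrative, not a fixed API):

using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Azure.Storage.Blobs;

// Claim check, sending side: the payload goes to blob storage and the
// Service Bus message carries only a pointer to it.
public static async Task SendViaClaimCheckAsync(
    ServiceBusSender sender, BlobContainerClient container, BinaryData payload)
{
    string blobName = Guid.NewGuid().ToString();
    BlobClient blob = container.GetBlobClient(blobName);
    await blob.UploadAsync(payload);

    var message = new ServiceBusMessage();
    message.ApplicationProperties["payload-blob-uri"] = blob.Uri.ToString();

    await sender.SendMessageAsync(message);
}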
It isn't possible to apply filters on a queue; they only operate on topics/subscriptions.
Generally, the Claim Check pattern is recommended when you're looking to send a payload too large for a single message. In a nutshell, you would write your payload to some form of durable storage and then your Service Bus message would provide the location for consumers.
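On the receiving side, a minimal sketch of that idea, assuming the blob URI is carried in an application property named "payload-blob-uri" (that name and the credential handling are assumptions):

using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Azure.Storage.Blobs;

// Claim check, receiving side: resolve the pointer in the message and
// download the real payload from blob storage.
public static async Task<BinaryData> ReadClaimCheckPayloadAsync(ServiceBusReceivedMessage message)
{
    var blobUri = new Uri((string)message.ApplicationProperties["payload-blob-uri"]);
    var blob = new BlobClient(blobUri); // add a credential here if the container is not public

    var download = await blob.DownloadContentAsync();
    return download.Value.Content;
}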
An example implementation using the Azure.Messaging.ServiceBus package can be found in this sample.