Scalability of CQRS + Event Sourcing

Published 2024-10-31 19:56:12

I'm trying to use CQRS and Event Sourcing in my new project. I'm following the approach Greg Young suggested several years ago (Mark Nijhof's implementation - http://cre8ivethought.com/blog/2009/11/12/cqrs--la-greg-young/), and I have some questions about the scalability of this solution.

Mark Nijhof covers some of these points in that article, but my current problem is the Denormalizer, the part responsible for updating the reporting database. I want this part to be asynchronous, so that control returns immediately after events are published to the bus. Our idea was to implement the Denormalizer as a standalone web service (WCF) that processes incoming events and updates the report database on a timer, in batches of commands. Since this could become a bottleneck, we also want to add some scalability at this point - a clustered solution. But with a cluster we can't control the order in which the reporting database is updated (or we would have to implement some awkward and, I suspect, buggy logic that checks object versions in the report DB). Another problem is durability: in the event of a failure we would lose any updates held in the denormalizer, since we don't persist them anywhere. So now I'm looking for a solution to this problem (Denormalizer scalability) - any thoughts are welcome!
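For what it's worth, the "check object versions in the report DB" idea is essentially optimistic concurrency on the read model, and it need not be buggy: each read-model row stores the version of the last event applied, and the denormalizer only applies an event whose version is exactly one greater than what's stored. A minimal sketch (the class, event shape, and in-memory store are all hypothetical; a real denormalizer would do the same compare-and-set in SQL, e.g. `UPDATE ... WHERE version = :expected`):

```python
# Hypothetical event shape: (aggregate_id, version, payload), where
# version increases by 1 per event within an aggregate. The "report DB"
# is modelled here as a plain dict.

class Denormalizer:
    def __init__(self):
        self.report_db = {}   # aggregate_id -> {"version": int, "data": payload}
        self.pending = {}     # events that arrived early, parked until their turn

    def apply(self, aggregate_id, version, payload):
        row = self.report_db.get(aggregate_id, {"version": 0, "data": None})
        if version == row["version"] + 1:
            # In order: apply, then drain any parked successor.
            self.report_db[aggregate_id] = {"version": version, "data": payload}
            nxt = self.pending.pop((aggregate_id, version + 1), None)
            if nxt is not None:
                self.apply(aggregate_id, version + 1, nxt)
            return True
        if version > row["version"] + 1:
            # Arrived early (another cluster node is still behind): park it.
            self.pending[(aggregate_id, version)] = payload
            return False
        # version <= current version: duplicate delivery, ignore (idempotent).
        return False
```

For example, if event 2 for an aggregate arrives before event 1, it is parked; when event 1 arrives, both are applied in order, so cluster nodes no longer need to coordinate on delivery order.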

Comments (1)

清风夜微凉 2024-11-07 19:56:12

To start, you'll definitely want to host the denormalizer in a separate process. From there, have the domain publish the events that occur in the domain to your messaging infrastructure. One easy strategy for speeding up denormalization is to break things apart by message/event type. In other words, you could create a separate queue for each message type and have the denormalizer subscribe (via a message bus) to the corresponding events. The advantage is that messages don't stack up one behind the other--everything starts to run in parallel. The only place you might see some contention is on tables that listen to multiple event types. Even so, you've now distributed the load across many endpoints.
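The per-type routing described above can be sketched as a toy in-process model (class and handler names are hypothetical; in production the queues would live in a broker such as MSMQ or RabbitMQ and each type would be drained by its own worker):

```python
from collections import defaultdict
from queue import Queue

class TypeRoutedBus:
    """Toy message bus: one queue per event type, so denormalizers for
    different types never wait behind each other's messages."""

    def __init__(self):
        self.queues = defaultdict(Queue)   # event type name -> its own queue
        self.handlers = {}                 # event type name -> handler function

    def subscribe(self, event_type, handler):
        self.handlers[event_type] = handler

    def publish(self, event_type, event):
        self.queues[event_type].put(event)

    def drain(self, event_type):
        """Run the subscribed handler for everything queued for one type.
        In a real system each type's queue is drained by a dedicated worker,
        which is what makes the types run in parallel."""
        handled = []
        q = self.queues[event_type]
        while not q.empty():
            handled.append(self.handlers[event_type](q.get()))
        return handled
```

Contention then only appears where two handlers write the same read-model table, exactly as the answer notes.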

As long as you're using some kind of messaging infrastructure, you won't lose event messages while attempting to denormalize. Instead, after a certain number of failed retries the message is considered "poison" and moved to an error queue. Simply monitor the error queue for problems. Once a message lands in the error queue, you can check your logs to see why it's there, fix the problem, and move the message back.
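The retry-then-poison behaviour can be sketched like this (the retry limit and function names are illustrative; buses such as NServiceBus or MSMQ provide this recoverability out of the box):

```python
from queue import Queue

MAX_RETRIES = 3  # illustrative; real buses make this configurable

def consume(message, handler, error_queue):
    """Attempt the handler up to MAX_RETRIES times; if it keeps failing,
    park the message (with the failure reason) on the error queue
    instead of losing it."""
    for _ in range(MAX_RETRIES):
        try:
            return handler(message)
        except Exception as exc:
            last_error = exc
    # Poison message: persisted for an operator to inspect and replay.
    error_queue.put((message, repr(last_error)))
    return None
```

The key point is that a failing message ends up stored in the error queue rather than discarded, which addresses the durability worry in the question.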

One other consideration: Mark Nijhof's example is somewhat dated. A number of CQRS frameworks are now available, along with mountains of advice in the DDD/CQRS Google Group.
