Availability and atomic updates in CQRS

Posted on 2025-02-05 01:57:53

I am trying to implement CQRS design using the following components:

Database -> Debezium (CDC) -> Kafka -> Kafka Streams (read view updater) -> Read View

The database can be MySQL or PostgreSQL, and Kafka Streams would act as the event processor / read view updater, performing the necessary transformations on the CDC events and updating the read view.
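
For reference, the read-view updater I have in mind is roughly the following (a minimal sketch; the topic name, the JSON handling, and the `ReadViewDao` helper are placeholders rather than real configuration):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class ReadViewUpdater {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "read-view-updater");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Debezium topic name is a placeholder: <server>.<schema>.<table>
        KStream<String, String> changes = builder.stream("dbserver1.inventory.customers");

        changes
            // transform the Debezium change envelope into the shape the read view needs
            .mapValues(ReadViewUpdater::projectToReadModel)
            // side effect: upsert the row into the read-view store (hypothetical DAO)
            .foreach((key, row) -> ReadViewDao.upsert(key, row));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }

    private static String projectToReadModel(String debeziumEnvelope) {
        // in a real updater: parse the "after" part of the envelope and drop unneeded fields
        return debeziumEnvelope; // placeholder transformation
    }
}

// hypothetical helper; in practice a JDBC / Elasticsearch / ... client
class ReadViewDao {
    static void upsert(String key, String row) { /* write to the read store */ }
}
```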

  • How can high availability be achieved with this configuration? (I have heard that the CDC stream stops once the source DB or Debezium itself goes down.)
  • Is it possible to achieve exactly-once / at-least-once event processing? In case of failure it is possible to receive duplicate messages, which might lead to the same data being stored on the read side again. What can be done to achieve idempotence in this CQRS configuration?
  • Does anyone have a different architectural suggestion or technology stack for implementing CQRS with high availability and atomic updates, using exactly-once / at-least-once message handling?

Comments (1)

手长情犹 2025-02-12 01:57:53

CDC will stop when the source DB stops. CDC stopping will not affect the read view's availability.

Exactly-once updating of the read view (which is what one almost certainly cares about) is not possible in the general case. If the stream that consumes from Kafka and updates the read view can atomically commit the message's offset as part of updating the read view, then an exactly-once update can be guaranteed. As for Debezium potentially publishing duplicate change records (which it may do: idempotent production, AFAIK, has not yet made it in), depending on the DB the change records may have a `before` field in the payload. This field can be used to validate that the change is being applied to a state that corresponds to the read view (and to ignore inapplicable changes).
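
For example, with a plain Kafka consumer writing to a relational read store, "committing the offset as part of updating the read view" could look roughly like the sketch below (the `customer_view` and `kafka_offsets` tables, the topic name, and the PostgreSQL `ON CONFLICT` upsert are assumptions; error handling, multiple partitions and rebalancing are left out):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class OffsetInReadViewConsumer {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("enable.auto.commit", "false"); // offsets live in the read-view DB, not in Kafka
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // single partition only for brevity; a real updater would cover all partitions
        TopicPartition tp = new TopicPartition("dbserver1.inventory.customers", 0);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Connection db = DriverManager.getConnection("jdbc:postgresql://localhost/readview", "app", "secret")) {

            db.setAutoCommit(false);
            consumer.assign(List.of(tp));
            consumer.seek(tp, storedOffset(db, tp) + 1); // resume after the last applied record

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> rec : records) {
                    // 1) apply the change to the read view
                    try (PreparedStatement ps = db.prepareStatement(
                            "INSERT INTO customer_view (id, payload) VALUES (?, ?) " +
                            "ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload")) {
                        ps.setString(1, rec.key());
                        ps.setString(2, rec.value());
                        ps.executeUpdate();
                    }
                    // 2) record the offset in the same DB transaction
                    try (PreparedStatement ps = db.prepareStatement(
                            "INSERT INTO kafka_offsets (topic_name, partition_no, last_offset) VALUES (?, ?, ?) " +
                            "ON CONFLICT (topic_name, partition_no) DO UPDATE SET last_offset = EXCLUDED.last_offset")) {
                        ps.setString(1, rec.topic());
                        ps.setInt(2, rec.partition());
                        ps.setLong(3, rec.offset());
                        ps.executeUpdate();
                    }
                    db.commit(); // view update and offset become visible atomically
                }
            }
        }
    }

    private static long storedOffset(Connection db, TopicPartition tp) throws Exception {
        try (PreparedStatement ps = db.prepareStatement(
                "SELECT last_offset FROM kafka_offsets WHERE topic_name = ? AND partition_no = ?")) {
            ps.setString(1, tp.topic());
            ps.setInt(2, tp.partition());
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getLong(1) : -1L; // -1 means start from the beginning
            }
        }
    }
}
```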

You may find it easier to implement an idempotent projection to a read model if the write model is event-sourced rather than update-in-place: the events will typically carry a per-entity sequence number, which can make "effectively-once" processing (at-least-once delivery with an idempotent consumer) easier.
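
As an illustration, an idempotent projection keyed on a per-entity sequence number can be as simple as a conditional upsert, something like the sketch below (table and column names are assumptions; PostgreSQL syntax):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class IdempotentProjector {

    /**
     * Applies an event to the read view only if its per-entity sequence number is
     * higher than the one already stored, so redelivered or duplicated events are
     * silently ignored. The read_view table and its columns are assumptions.
     */
    public static boolean apply(Connection db, String entityId, long seqNo, String payload) throws SQLException {
        try (PreparedStatement ps = db.prepareStatement(
                "INSERT INTO read_view (entity_id, seq_no, payload) VALUES (?, ?, ?) " +
                "ON CONFLICT (entity_id) DO UPDATE SET seq_no = EXCLUDED.seq_no, payload = EXCLUDED.payload " +
                "WHERE read_view.seq_no < EXCLUDED.seq_no")) {
            ps.setString(1, entityId);
            ps.setLong(2, seqNo);
            ps.setString(3, payload);
            return ps.executeUpdate() == 1; // 0 rows affected means the event was stale or a duplicate
        }
    }
}
```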
