Distributed sequence of actions across services that can scale horizontally
I have a distributed sequence of actions across microservices. Service A needs to tell service B to do something, and once that is complete it will tell service C. The sequence is important, so I'm using the saga pattern.
My issue is that service B can scale out, and each instance needs to receive the message and complete the action. The action must happen on every service B instance, and service C should only run once all the service B instances have completed their task.
The action is a cache purge that must happen on each instance. I have no control over this architecture, so service B's cache is coupled to each instance; I would use a shared cache for the instances if I could.
I have come up with the following orchestration solution (sketched after the list), but it requires maintaining state and lots of extra code to handle edge cases, which I would like to avoid:
- service A sends the same message to all service B instances that it knows about
- all service B instances report success back to service A
- on the final service B success, service A messages service C
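For concreteness, here is roughly what that orchestration step looks like. This is only a sketch: it assumes plain HTTP between the services, and the `/purge-cache` and `/start` endpoints, host names, and static instance list are all placeholders. Discovering the B instances and surviving partial failures is exactly the state and edge-case handling I'd like to avoid.

```go
// Sketch of the orchestration step inside service A: fan the purge request
// out to every known service B instance, wait for all of them, then notify
// service C. Endpoints, hosts, and the instance list are placeholders.
package main

import (
	"fmt"
	"net/http"
	"sync"
)

// purgeAll sends the purge command to every service B instance concurrently
// and returns an error if any instance fails to acknowledge it.
func purgeAll(instances []string) error {
	var wg sync.WaitGroup
	errs := make(chan error, len(instances))

	for _, addr := range instances {
		wg.Add(1)
		go func(addr string) {
			defer wg.Done()
			// Hypothetical purge endpoint exposed by each service B instance.
			resp, err := http.Post(addr+"/purge-cache", "application/json", nil)
			if err != nil {
				errs <- fmt.Errorf("%s: %w", addr, err)
				return
			}
			resp.Body.Close()
			if resp.StatusCode != http.StatusOK {
				errs <- fmt.Errorf("%s: unexpected status %d", addr, resp.StatusCode)
			}
		}(addr)
	}

	wg.Wait()
	close(errs)
	// Report the first failure, if any; success only if every instance acked.
	for err := range errs {
		return err
	}
	return nil
}

func main() {
	// Instance discovery is the hard part; this static list is a placeholder.
	bInstances := []string{"http://b-1:8080", "http://b-2:8080", "http://b-3:8080"}

	if err := purgeAll(bInstances); err != nil {
		fmt.Println("purge incomplete, not notifying service C:", err)
		return
	}
	// Only after every service B instance has purged do we kick off service C.
	if _, err := http.Post("http://service-c:8080/start", "application/json", nil); err != nil {
		fmt.Println("failed to notify service C:", err)
	}
}
```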
Is there a better alternative to this?
Comments (1)
Assuming that you can't rearchitect service B, you've captured the essential complexity of the operation: A will have to track instances of service B and will have to deal with a ton of edge cases. The process is fundamentally stateful.
If the cache purge command is idempotent (i.e. you don't care if it happens multiple times in the process), you can simplify some of the edge-case handling and get away with the state being less durable (on failure, you can start again from the beginning instead of needing to reconstruct where you were in the process).
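For illustration, a minimal sketch of that simplification, assuming the purge really is idempotent and that some fan-out helper like the `purgeAll` placeholder in the question exists (all names here are hypothetical): on any failure, just re-run the whole fan-out instead of durably tracking which instances have already finished.

```go
// Sketch of the "restart from the beginning" approach an idempotent purge
// allows: retry the entire fan-out rather than persisting per-instance state.
package main

import (
	"fmt"
	"time"
)

// retryWholePurge re-runs the whole purge fan-out until every instance has
// acknowledged or the attempts run out. Re-purging instances that already
// completed is harmless because the command is idempotent, so no durable
// "who has finished" state needs to be kept.
func retryWholePurge(purge func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = purge(); err == nil {
			return nil // every instance acknowledged; safe to message service C
		}
		time.Sleep(time.Duration(i+1) * time.Second) // crude linear backoff
	}
	return fmt.Errorf("purge still incomplete after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	// Stand-in for a full fan-out like purgeAll above: fails twice, then succeeds.
	flaky := func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("instance b-2 unreachable")
		}
		return nil
	}
	if err := retryWholePurge(flaky, 5); err != nil {
		fmt.Println("giving up:", err)
		return
	}
	fmt.Println("all instances purged; notify service C")
}
```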