What is a common strategy for synchronous communication with a specific replica of the same pod?

Posted 2025-01-18 22:14:32


Let's say we have the following apps:

  • API app: responsible for serving user requests.
  • Backend app: responsible for handling user requests that are long-running tasks. It writes progress updates to the database (Postgres) and a distributed cache (Redis).

Both apps are scalable services. A single backend app handles multiple tenants (e.g. customers here), but each customer is assigned to exactly one backend app.

I have a use case where I need the API layer to connect to the specific replica that is handling a given customer. Do we have a common pattern for this?

A few strategies I have in mind:

  1. Pub/Sub (probably using Redis): the problem is that we want a synchronous, guaranteed response.
  2. gRPC: connecting to a specific pod by its pod IP is not a standard approach.
  3. Creating a Service at runtime by adding labels to the replicas and selecting on those labels -- looks promising (see the sketch after this list).
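
A minimal sketch of what option 3 could look like, assuming each backend replica adds a customer label to its own pod when it claims that customer. It uses the Python Kubernetes client; the app name, label keys, namespace and port are placeholder assumptions, not the real setup:

```python
# Sketch only: create a per-customer Service whose selector matches the
# label a backend replica puts on its own pod when it claims the customer.
# "backend", the "customer" label key, port 8080 and the namespace are
# placeholder assumptions.
from kubernetes import client, config


def service_for_customer(customer_id: str, namespace: str = "default") -> str:
    config.load_incluster_config()  # use config.load_kube_config() outside the cluster
    core = client.CoreV1Api()

    svc_name = f"backend-{customer_id}"
    body = client.V1Service(
        metadata=client.V1ObjectMeta(name=svc_name),
        spec=client.V1ServiceSpec(
            # Route only to the replica labelled with this customer.
            selector={"app": "backend", "customer": customer_id},
            ports=[client.V1ServicePort(port=8080, target_port=8080)],
        ),
    )
    core.create_namespaced_service(namespace=namespace, body=body)

    # The API app can then call this customer-specific DNS name synchronously.
    return f"{svc_name}.{namespace}.svc.cluster.local"
```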

Do let me know if there is a common pattern, an example architecture, or a standard way of doing this.

Note: [The above is a simulation of a production use case; names and the actual use case have been changed.]


Comments (1)

笔芯 2025-01-25 22:14:32


You should aim to keep your services stateless; in a Kubernetes environment there is no telling when one pod might be replaced by another due to worker node maintenance.

If you have long-running tasks that cannot complete within the configured grace period for pods to shut down during a worker node drain/evacuation, you need to implement some kind of persistent work queue, as you are considering in option 1. I suggest you look into the saga pattern.
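
As an illustration only, a persistent work queue can be as simple as a jobs table in the Postgres you already have, with workers claiming jobs atomically so a job survives the pod that picked it up. The table and column names below are made up:

```python
# Sketch of a Postgres-backed work queue; the "jobs" table and its columns
# are assumptions for illustration, not an existing schema.
import psycopg2

CLAIM_JOB = """
UPDATE jobs
   SET status = 'running', started_at = now()
 WHERE id = (
        SELECT id
          FROM jobs
         WHERE status = 'pending'
         ORDER BY created_at
         LIMIT 1
           FOR UPDATE SKIP LOCKED
       )
RETURNING id, payload;
"""


def claim_next_job(conn):
    # Claims one pending job atomically; concurrent workers skip locked rows,
    # and a separate reaper can re-queue 'running' jobs whose started_at is
    # too old (i.e. jobs abandoned by a pod that was shut down).
    with conn, conn.cursor() as cur:
        cur.execute(CLAIM_JOB)
        return cur.fetchone()  # None when the queue is empty


conn = psycopg2.connect("dbname=app")  # placeholder DSN
job = claim_next_job(conn)
```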

Another pattern we usually employ is to let the worker service write the current state of the job into the database and let the client poll the status every few seconds. This does, however, require some way of handling half-finished jobs that might be abandoned by pods that are forced to shut down.
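
The polling side can then be as simple as the client asking for the job record every few seconds until it reaches a terminal state. The endpoint and field names below are made up for illustration:

```python
# Sketch of client-side status polling; the /jobs/<id> endpoint and the
# "state" field are assumptions for illustration.
import time

import requests


def wait_for_job(base_url: str, job_id: str,
                 interval: float = 3.0, timeout: float = 600.0) -> dict:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = requests.get(f"{base_url}/jobs/{job_id}", timeout=10).json()
        if status["state"] in ("succeeded", "failed"):
            return status
        time.sleep(interval)  # poll every few seconds
    # Jobs abandoned by a killed pod would surface here as a timeout.
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```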
