Deploying microservices that communicate over Kafka
I have a system built of microservices communicating using Kafka. How should the topics be created during deployment?
I see two options:
I. Centralized topic creation. Have a central place (repository) where teams add the topics required by their microservices. In this approach, the deployment will look as follows:
1. Deploy Kafka
2. Deploy a Kafka topics artifact that contains the required topics for all microservices (a rough sketch of such an artifact follows this list)
3. Deploy microservices
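For illustration only, here is a minimal sketch of what such a topics artifact could be, assuming Java and the Kafka AdminClient; the topic names, partition counts, and bootstrap address are made up:

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

// Central "topics artifact": one job that runs after Kafka is up and before the services,
// declaring every topic the microservices need (partitions, replication, retention, compaction).
public class TopicsArtifact {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");

        List<NewTopic> topics = List.of(
                new NewTopic("orders.events", 12, (short) 3)
                        .configs(Map.of("retention.ms", "604800000")),   // 7 days
                new NewTopic("customers.snapshot", 6, (short) 3)
                        .configs(Map.of("cleanup.policy", "compact")));

        try (Admin admin = Admin.create(props)) {
            for (NewTopic topic : topics) {
                try {
                    admin.createTopics(List.of(topic)).all().get();
                } catch (ExecutionException e) {
                    // Re-running the artifact should be a no-op for topics that already exist.
                    if (!(e.getCause() instanceof TopicExistsException)) {
                        throw e;
                    }
                }
            }
        }
    }
}

Because re-running it is a no-op for existing topics, the artifact could be redeployed on every release before (or alongside) the microservices.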
II. Every service deploys its own topics. The deployment looks as follows:
1. Deploy Kafka
2. Deploy microservices. Within each microservice deployment the required topics are created (see the sketch after this list).
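A sketch of option II under the same assumptions (the service and topic names are hypothetical), where each service ensures the topics it owns when it starts up:

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

// Called by the order service during startup (or by an init container in its deployment).
public class OrderServiceTopics {
    public static void ensureTopics(String bootstrapServers) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        NewTopic topic = new NewTopic("order-service.commands", 6, (short) 3)
                .configs(Map.of("retention.ms", "259200000")); // 3 days
        try (Admin admin = Admin.create(props)) {
            try {
                admin.createTopics(List.of(topic)).all().get();
            } catch (ExecutionException e) {
                // Another instance of the same service may have created the topic already.
                if (!(e.getCause() instanceof TopicExistsException)) {
                    throw e;
                }
            }
        }
    }
}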
I see the value in option I. I could see all the topics that are deployed, along with their partitions, retention and compaction policies. This would help me understand Kafka resource allocation and its configuration. The downside of this option is potential coupling: if I need to deploy a single microservice on a running system, I would also need to deploy a new version of the Kafka topics artifact together with the microservice itself.
What are the best practices here?
Personally, I have seen both approaches, and it comes down to how well your approval processes work and whether you really want to micro-manage server resource usage (quotas, access control, discoverability, etc.).
For example - a team wants to build a Kafka POC and does not want to wait for "Kafka admin approval" to create topics, so they use the AdminClient API in their code to get started quickly. Kafka Streams, however, will create its intermediate state topics on its own, so you cannot consistently create all the topics it needs ahead of time.
Another example - the Kafka team wants to audit and control how/when topics are used and how large they can be, so they set up tools like Open Policy Agent to control what can be defined in a central repo. They may also set up an admin UI panel to create/discover topics.
Then there is a middle ground - one repo per "team/organization" for topics.
Note: You can use Terraform, Kubernetes Operators, Ansible, etc. for managing Kafka topics; it doesn't have to be the "typical" Kafka client tools. And if you use these tools, you're not really "deploying a topic artifact"; instead you run them through a GitOps flow with Jenkins, GitHub Actions, etc.
Maybe you can take an approach based on the environment: no restrictions in dev, fully managed topics in PROD, and a mix in your testing env.
From real-world experience, it doesn't matter much how a topic is created; what really matters is the constraints enforced on top of it. Two key things are who owns/approves a topic and what configuration constraints apply to it:
So in organisations where Kafka acts as a central nervous system, there is usually an approval process from the Kafka cluster maintainers for topic creation. They maintain a git repo, ask other teams to check in their topic configurations, and run some tooling on top of it to create the topics. It becomes a controlled environment with a single source of truth for all topics: the maintainers know who owns each topic, can check whether the requested configuration is OK, and can ask for an explanation when it is not.
All of this makes sense depending on how widely the cluster is going to be used. A simple example is retention bytes/time, which clearly depends on the use case, so an admin can ask requestors to reduce it below the configured default to save storage in the cluster.
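As a rough illustration of that kind of constraint, again assuming Java and the AdminClient (the topic name and values are made up), an admin could tighten retention on a topic below the cluster default like this:

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;

// Lower a topic's retention below the cluster default to save storage.
public class TightenRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "orders.events");
            Collection<AlterConfigOp> ops = List.of(
                    new AlterConfigOp(new ConfigEntry("retention.ms", "86400000"),      // 1 day
                            AlterConfigOp.OpType.SET),
                    new AlterConfigOp(new ConfigEntry("retention.bytes", "1073741824"), // 1 GiB per partition
                            AlterConfigOp.OpType.SET));
            admin.incrementalAlterConfigs(Map.of(topic, ops)).all().get();
        }
    }
}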