Is there a way to know the first broker port a Kafka client (producer or consumer) connects to during SASL authentication?

Posted on 2025-02-03 09:29:19

I am trying to create a Kafka producer and consumer with SASL authentication. During SASL authentication, I would like to know the specific broker and port that is selected for the very first time from the list of brokers passed.
I only know the hostname but not the port number.

Comments (1)

余生一个溪 2025-02-10 09:29:19

"only know hostname but not port number"

A port is required by the bootstrap.servers config, so I'm not sure I understand this...

"the specific broker and port that is selected for the very first time from the list of brokers passed"

The one that is selected shouldn't matter. All brokers can respond to the same request, and all broker info will be returned to the client before the producer starts. You can use AdminClient.describeCluster for cluster information.
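For example, here is a minimal sketch of pulling that cluster info with the AdminClient (the bootstrap address below is a placeholder; reuse whatever SASL properties your producer/consumer already has):

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

import java.util.Properties;

public class DescribeClusterExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address; add the same SASL settings your clients use
        // (security.protocol, sasl.mechanism, sasl.jaas.config, ...).
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1.example.com:9093");

        try (AdminClient admin = AdminClient.create(props)) {
            // describeCluster() returns every broker the cluster advertises,
            // not just the bootstrap servers, each with its advertised host and port.
            for (Node node : admin.describeCluster().nodes().get()) {
                System.out.printf("broker id=%d -> %s:%d%n", node.id(), node.host(), node.port());
            }
        }
    }
}
```

Every Node returned is a broker with its advertised host and port, which is the same metadata the client receives before it sends anything.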

But sounds to me like you need to talk to the kafka admin who configured advertised.listeners.

You can also use kcat -L option to find this information remotely.

"would like to change the broker servername based on port picked up during producer.send()"

You cannot modify data in a Producer batch. The advertised listeners are what will be used by the network protocol.

You'd need to implement a proxy that would redirect the data somewhere else or intercept the Kafka TCP packets and rewrite them to do anything else.


In Java, to find the exact server a particular record would be sent to, you'd need to use the DefaultPartitioner class to find the partition each record would be sent to, using its key. This assumes your records have non-null keys.
Then you need to call describeTopics using an AdminClient instance. The result of describing a topic returns a list of TopicPartitionInfo. Filter this list by the partition computed for the record.
These have a leader() method that returns a Node object, which has host and port information. The leader partition is where the data will be sent when the producer buffer is flushed.

https://kafka.apache.org/31/javadoc/org/apache/kafka/common/Node.html
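Putting those steps together, a rough sketch (the topic name, key, and bootstrap address are made up; it reproduces the default partitioner's murmur2 hashing via Utils rather than instantiating DefaultPartitioner, which would require building a Cluster object):

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.TopicPartitionInfo;
import org.apache.kafka.common.utils.Utils;

import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.Properties;

public class LeaderForKey {
    public static void main(String[] args) throws Exception {
        String topic = "my-topic";   // hypothetical topic
        String key = "order-42";     // key of the record you are about to send

        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1.example.com:9093");
        // ... plus the same SASL properties your producer uses.

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(Collections.singletonList(topic))
                    .allTopicNames().get()   // use .all() on clients older than 3.1
                    .get(topic);

            // Same hash the default partitioner applies to non-null keys.
            byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
            int partition = Utils.toPositive(Utils.murmur2(keyBytes)) % desc.partitions().size();

            // Filter the topic's partitions down to the one this key maps to.
            TopicPartitionInfo info = desc.partitions().stream()
                    .filter(p -> p.partition() == partition)
                    .findFirst()
                    .orElseThrow(IllegalStateException::new);

            Node leader = info.leader();
            System.out.printf("key '%s' -> partition %d, leader %s:%d%n",
                    key, partition, leader.host(), leader.port());
        }
    }
}
```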

For a consumer, look at its partition assignment, then do the same.
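Roughly, for a consumer that is already subscribed and part of its group (the consumer itself is assumed to be configured elsewhere):

```java
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;

public class PrintAssignmentLeaders {
    // Prints the leader broker (host:port) for every partition assigned to this consumer.
    static void printLeaders(KafkaConsumer<String, String> consumer) {
        consumer.poll(Duration.ofSeconds(1)); // make sure the group assignment has happened
        for (TopicPartition tp : consumer.assignment()) {
            for (PartitionInfo pi : consumer.partitionsFor(tp.topic())) {
                if (pi.partition() == tp.partition()) {
                    System.out.printf("%s-%d -> leader %s:%d%n",
                            tp.topic(), tp.partition(), pi.leader().host(), pi.leader().port());
                }
            }
        }
    }
}
```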
