AWS MSK - internal broker communication

I am using AWS MSK for our production workload and we have been noticing some unclear log messages in CloudWatch. The messages are about the internal communication between brokers (more on the cluster setup later):

[2022-05-14 06:50:17,171] INFO [SocketServer brokerId=2] Failed authentication with ec2-18-185-175-128.eu-central-1.compute.amazonaws.com/18.185.175.128 ([97fe8ff0-ee38-46c5-ae21-1545fd571224]: Access denied) (org.apache.kafka.common.network.Selector)

Our logs are cluttered with these recurring messages. The logs can be found on all three brokers, all referencing the brokerId=2, as per the message above.
I am assuming the instance referenced is one of the MSK brokers.
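
For reference, the recurrence can be quantified per broker with a CloudWatch Logs Insights query over the broker log group. A minimal boto3 sketch, assuming the broker logs are delivered to a log group named /msk/production-cluster/broker-logs (a placeholder name, not from the actual setup):

import time
import boto3

# Placeholder log group name; use whatever log group was configured in the
# cluster's broker-log delivery settings.
LOG_GROUP = "/msk/production-cluster/broker-logs"

logs = boto3.client("logs", region_name="eu-central-1")

# Count the "Failed authentication ... Access denied" lines per broker log
# stream over the last 24 hours.
query = """
filter @message like /Failed authentication/ and @message like /Access denied/
| stats count(*) as occurrences by @logStream
| sort occurrences desc
"""

now = int(time.time())
started = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=now - 24 * 3600,
    endTime=now,
    queryString=query,
)

# Poll until the query finishes, then print one row per log stream (broker).
while True:
    result = logs.get_query_results(queryId=started["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled", "Timeout"):
        break
    time.sleep(1)

for row in result["results"]:
    print({field["field"]: field["value"] for field in row})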

Whilst the logs are at INFO level and the cluster seems to work fine, I'd like to know whether anyone has faced this sort of output message before.

The MSK config is the following:

  • 3 brokers over 3 availability zones
  • encryption in transit: client_broker = TLS, in-cluster encryption enabled
  • client_authentication: SASL/IAM
  • cluster properties: auto.create.topics.enable = true, default.replication.factor = 3, num.partitions = 3, delete.topic.enable = true, min.insync.replicas = 2, log.retention.hours = 168, compression.type = gzip (see the sketch after this list)
  • kafka version: 2.7.0
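
For completeness, the cluster properties listed above map onto a custom MSK configuration. A rough boto3 sketch of creating such a configuration (the configuration name is made up; the server properties are exactly the ones listed):

import boto3

kafka = boto3.client("kafka", region_name="eu-central-1")

# The cluster properties from the list above, expressed as the
# server.properties payload of a custom MSK configuration.
server_properties = b"""
auto.create.topics.enable=true
default.replication.factor=3
num.partitions=3
delete.topic.enable=true
min.insync.replicas=2
log.retention.hours=168
compression.type=gzip
"""

response = kafka.create_configuration(
    Name="production-cluster-config",  # placeholder name
    KafkaVersions=["2.7.0"],           # must include the cluster's Kafka version
    ServerProperties=server_properties,
)
print(response["Arn"], response["LatestRevision"]["Revision"])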

I would be interested to know how to get rid of this log message and whether it is anything to worry about.

Thanks,
Alessio
