Connecting to Kafka running in Docker

I set up a single-node Kafka Docker container on my local machine as described in the Confluent documentation (steps 2-3).

In addition, I exposed Zookeeper's port 2181 and Kafka's port 9092 so that I can connect to them from a client running on the local machine:

$ docker run -d \
    -p 2181:2181 \
    --net=confluent \
    --name=zookeeper \
    -e ZOOKEEPER_CLIENT_PORT=2181 \
    confluentinc/cp-zookeeper:4.1.0

$ docker run -d \
    --net=confluent \
    --name=kafka \
    -p 9092:9092 \
    -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
    -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
    -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
    confluentinc/cp-kafka:4.1.0

Problem: When I try to connect to Kafka from the host machine, the connection fails because it can't resolve address: kafka:9092.

Here is my Java code:

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("client.id", "KafkaExampleProducer");
props.put("key.serializer", LongSerializer.class.getName());
props.put("value.serializer", StringSerializer.class.getName());
KafkaProducer<Long, String> producer = new KafkaProducer<>(props);
ProducerRecord<Long, String> record = new ProducerRecord<>("foo", 1L, "Test 1");
producer.send(record).get();
producer.flush();

The exception:

java.io.IOException: Can't resolve address: kafka:9092
    at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235) ~[kafka-clients-2.0.0.jar:na]
    at org.apache.kafka.common.network.Selector.connect(Selector.java:214) ~[kafka-clients-2.0.0.jar:na]
    at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:864) [kafka-clients-2.0.0.jar:na]
    at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:265) [kafka-clients-2.0.0.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:266) [kafka-clients-2.0.0.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238) [kafka-clients-2.0.0.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:176) [kafka-clients-2.0.0.jar:na]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
Caused by: java.nio.channels.UnresolvedAddressException: null
    at sun.nio.ch.Net.checkAddress(Net.java:101) ~[na:1.8.0_144]
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622) ~[na:1.8.0_144]
    at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233) ~[kafka-clients-2.0.0.jar:na]
    ... 7 common frames omitted

Question: How do I connect to Kafka running in Docker? My code runs on the host machine, not inside Docker.

Note: I know that I could theoretically play around with DNS setup and /etc/hosts but it is a workaround - it shouldn't be like that.

There is also a similar question here; however, it is based on the ches/kafka image. I use a confluentinc-based image, which is not the same.

孤芳又自赏 2025-01-17 20:52:24

tl;dr - A simple port forward from the container to the host will not work... Hosts files (e.g. /etc/hosts on *NIX systems) should not be modified to work around Kafka networking, as this solution is not portable.

1) What exact IP/hostname + port do you want to connect to? Make sure that value is set as advertised.listeners (not advertised.host.name and advertised.port, as these are deprecated) on the broker. If you see an error such as Connection to node -1 (localhost/127.0.0.1:9092), then that means your app container tries to connect to itself. Is your app container also running a Kafka broker process? Probably not.

2) Make sure that the server(s) listed as part of bootstrap.servers are actually resolvable. E.g. ping the IP/hostname, use netcat to check ports... If your clients are in a container, you need to do this from the container, not (only) your host. If the container isn't immediately crashing, use docker exec to get to its shell.

3) If running a process from the host, rather than another container, to verify the ports are mapped correctly on the host, ensure that docker ps shows the kafka container is mapped from 0.0.0.0:<host_port> -> <advertised_listener_port>/tcp. The ports must match if trying to run a client from outside the Docker network. You do not need port forwarding between two containers; use links / docker networks
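
As a concrete illustration of checks 2) and 3), here are a few host-side sanity commands (a rough sketch assuming the broker container is named kafka, as in the question, and that nc is available on your host):

# the port mapping should show something like 0.0.0.0:9092->9092/tcp
docker ps --filter name=kafka --format '{{.Names}}: {{.Ports}}'

# the mapped port should be reachable from the host
nc -vz localhost 9092

# the broker logs its effective config (including advertised.listeners) at startup
docker logs kafka 2>&1 | grep -i 'advertised.listeners'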


The answer below uses confluentinc Docker images to address the question that was asked, not wurstmeister/kafka. If you have the KAFKA_ADVERTISED_HOST_NAME variable set, remove it (it's a deprecated property). The same variables apply to the apache/kafka image.

The following sections try to aggregate all the details needed to use another image. For other commonly used Kafka images, it's all the same Apache Kafka running in a container; you just depend on how it is configured, and on which variables make it so.

apache/kafka

There's now an official image! Find links below for example Compose files.

wurstmeister/kafka

As of Oct 2023, this no longer exists in DockerHub. Wasn't maintained past 2022, anyway.

Refer to their README section on listener configuration, and also read their Connectivity wiki.

bitnami/kafka

If you want a small container, try these. The images are much smaller than the Confluent ones and much better maintained than wurstmeister. Refer to their README for listener configuration.

debezium/kafka

Docs on it are mentioned here.

Note: advertised host and port settings are deprecated. Advertised listeners covers both. Similar to the Confluent containers, Debezium can use KAFKA_ prefixed broker settings to update its properties.

Others

  • ubuntu/kafka requires you to add --override advertised.listeners=kafka:9092 via Docker image args... I find that less portable than environment variables, so not recommended
  • spotify/kafka is deprecated and outdated.
  • fast-data-dev or lensesio/box are great for an all-in-one solution, with Schema Registry, Kafka Connect, etc., but are bloated if you only want Kafka. Plus, it's a Docker anti-pattern to run many services in one container
  • Your own Dockerfile - Why? Is something incomplete with these others? Start with a pull request rather than starting from scratch.

For supplemental reading, a fully-functional docker-compose, and network diagrams, see this blog by @rmoff

Answer

The Confluent quickstart (Docker) document assumes all produce and consume requests will be within the Docker network.

You could fix the problem of connecting to kafka:9092 by running your Kafka client code within its own container as that uses the Docker network bridge, but otherwise you'll need to add some more environment variables for exposing the container externally, while still having it work within the Docker network.

First add a protocol mapping of PLAINTEXT_HOST:PLAINTEXT that will map the listener protocol to a Kafka protocol

Key: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
Value: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT

Then set up two advertised listeners on different ports. (kafka here refers to the Docker container name; it might also be named broker, so double-check your service + hostnames).

Key: KAFKA_ADVERTISED_LISTENERS
Value: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092

Notice the protocols here match the left-side values of the protocol mapping setting above

When running the container, add -p 29092:29092 for the host port mapping of the advertised PLAINTEXT_HOST listener.


So... (with the above settings)

If something still doesn't work, KAFKA_LISTENERS can be set to include <PROTOCOL>://0.0.0.0:<PORT>, where both values match the advertised setting and the Docker-forwarded port.
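
Putting the pieces together, here is a sketch of the question's docker run command with the extra settings from this answer applied (the protocol map, the two advertised listeners, the 0.0.0.0 listeners, and the 29092 port mapping come from the values above; KAFKA_INTER_BROKER_LISTENER_NAME is pinned explicitly since there are now two named listeners, as the Compose file in a later answer also does; treat it as a starting point, not a drop-in guarantee):

docker run -d \
    --net=confluent \
    --name=kafka \
    -p 29092:29092 \
    -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
    -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT \
    -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 \
    -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 \
    -e KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT \
    -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
    confluentinc/cp-kafka:4.1.0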

Client on same machine, not in a container

Advertising localhost and the associated port will let you connect outside of the container, as you'd expect.

In other words, when running any Kafka Client outside the Docker network (including CLI tools you might have installed locally), use localhost:29092 for bootstrap servers (requires Docker port forwarding)
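
In the question's Java snippet that means changing bootstrap.servers from localhost:9092 to localhost:29092. As a quick check from the host, if you have the Kafka CLI tools installed locally (flag names vary a little between versions; very old console producers use --broker-list instead of --bootstrap-server):

# produce one message via the externally advertised listener
echo 'Test 1' | kafka-console-producer --bootstrap-server localhost:29092 --topic foo

# and read it back
kafka-console-consumer --bootstrap-server localhost:29092 --topic foo --from-beginning --max-messages 1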

Client on another machine

If trying to connect from an external server, you'll need to advertise the external hostname/IP (e.g. 192.168.x.y) of the host as well as, or in place of, localhost.
Simply advertising localhost with a port forward will not work, because the Kafka protocol will still advertise the listeners you've configured.

This setup requires Docker port forwarding and router port forwarding (and firewall / security group changes) if not in the same local network, for example, your container is running in the cloud and you want to interact with it from your local machine.
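
For example (illustrative only; 192.168.1.50 is a made-up stand-in for your host's reachable address), the external listener from above would become:

Key: KAFKA_ADVERTISED_LISTENERS
Value: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://192.168.1.50:29092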

Client (or another broker) in a container, on the same host

This is the least error-prone configuration; you can use DNS service names directly.

When running an app in the Docker network, use Docker service names such as kafka:9092 (see advertised PLAINTEXT listener config above) for bootstrap servers, just like any other Docker service communication (doesn't require any port forwarding)


If you use separate docker run commands or Compose files, you need to define a shared network manually, using the Compose networks section or docker network create
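
For example, with plain docker run commands (as in the question), the shared user-defined network is created once up front, and every container that should talk to the broker joins it with --net=confluent. A rough check that name resolution and the internal listener work, assuming the Kafka CLI tools shipped in the cp-kafka image:

# create the shared network the containers attach to with --net=confluent
docker network create confluent

# from any container on that network the broker resolves by container name;
# e.g. exec into the broker container itself and consume over kafka:9092
docker exec kafka kafka-console-consumer --bootstrap-server kafka:9092 \
    --topic foo --from-beginning --max-messages 1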


See the example Compose file for the full Confluent stack using KRaft, or a more minimal one (with Zookeeper), for a single broker.

For the apache/kafka image, there are example files in the Kafka GitHub repo.

If using multiple brokers, then they need to use unique hostnames + advertised listeners. See example

Related question

Connect to Kafka on host from Docker (ksqlDB)

Appendix

For anyone interested in Kubernetes deployments:

裂开嘴轻声笑有多痛 2025-01-17 20:52:24

When you first connect to a Kafka node, it will give you back all of the Kafka nodes and the URLs to connect to. Then your application will try to connect to every broker directly.

The issue is always: what URL will Kafka give you? That's why there is KAFKA_ADVERTISED_LISTENERS, which Kafka uses to tell the world how it can be accessed.

Now, for your use case, there are several small things to think about:

Let's say you set plaintext://kafka:9092

  • This is OK if you have an application in your Docker Compose setup that uses Kafka. That application will get from Kafka the kafka URL, which is resolvable through the Docker network.
  • If you try to connect from your main system, or from another container that is not in the same Docker network, this will fail, as the kafka name cannot be resolved.

==> To fix this, you need a dedicated DNS server, such as a service-discovery one, but that is a lot of trouble for something small. Or you manually map the kafka name to the container IP in each /etc/hosts

If you set plaintext://localhost:9092

  • This will be OK on your system if you have a port mapping (-p 9092:9092 when launching Kafka)
  • This will fail if you test from an application in a container (same Docker network or not), because localhost is the container itself, not the Kafka one

==> If you have this and wish to use a Kafka client in another container, one way to fix it is to share the network between both containers (same IP)

Last option: set an IP in the name: plaintext://x.y.z.a:9092 (the Kafka advertised URL cannot be 0.0.0.0, as stated in the docs: https://kafka.apache.org/documentation/#brokerconfigs_advertised.listeners )

This will be OK for everybody... BUT how can you get the x.y.z.a address?

The only way is to hardcode this IP when you launch the container: docker run .... --net confluent --ip 10.x.y.z .... Note that you need to adapt the IP to a valid IP in the confluent subnet.
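
A minimal sketch of that last option, assuming a user-defined network whose subnet you control (the subnet and address below are illustrative values, not taken from the question):

docker network create --subnet=172.30.0.0/16 confluent

docker run -d \
    --net=confluent \
    --ip=172.30.0.10 \
    --name=kafka \
    -p 9092:9092 \
    -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
    -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://172.30.0.10:9092 \
    -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
    confluentinc/cp-kafka:4.1.0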

盗心人 2025-01-17 20:52:24

First, run Zookeeper:

  1. docker container run --name zookeeper -p 2181:2181 zookeeper

Then run Kafka:

  1. docker container run --name kafka -p 9092:9092 -e KAFKA_ZOOKEEPER_CONNECT=192.168.8.128:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip_address_of_your_computer_but_not_localhost!!!:9092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka

In the Kafka consumer and producer config:

@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.8.128:9092");
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return new DefaultKafkaProducerFactory<>(configProps);
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.8.128:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "group_id");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}

I run my project with these settings. Good luck dude.

屋顶上的小猫咪 2025-01-17 20:52:24

This allows me to access localhost:9092 in Kafka applications on my M1 Mac

Key: KAFKA_ADVERTISED_LISTENERS
Value: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092

plus port forwarding:

ports:
   - "9092:9092"

Finally, again, for my setup, I have to set the listeners key this way:

Key: KAFKA_LISTENERS
Value: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
天暗了我发光 2025-01-17 20:52:24

The simplest way to solve this is to add a custom hostname to your broker using the -h option:

docker run -d \
    --net=confluent \
    --name=kafka \
    -h broker-1 \
    -p 9092:9092 \
    -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
    -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
    -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
    confluentinc/cp-kafka:4.1.0

and edit your /etc/hosts

127.0.0.1   broker-1

and use:

props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092");
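
A quick way to confirm the alias and the mapped port before starting the client (ping and nc assumed to be available on the host):

# should resolve to 127.0.0.1 via the /etc/hosts entry
ping -c 1 broker-1

# should report the mapped broker port as open
nc -vz broker-1 9092
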
老旧海报 2025-01-17 20:52:24
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: kafka:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

This configuration works fine.

Make sure that from inside Docker you connect to kafka:29092, and from outside the container to localhost:9092.

Full working Docker Compose config:

    version: "3.3"

    services:
      zookeeper:
        image: confluentinc/cp-zookeeper:6.2.0
        container_name: zookeeper
        networks:
          - broker-kafka
        ports:
          - "2181:2181"
        environment:
          ZOOKEEPER_CLIENT_PORT: 2181
          ZOOKEEPER_TICK_TIME: 2000
          ALLOW_ANONYMOUS_LOGIN: yes
        volumes:
          - ./bitnami/zookeeper:/bitnami/zookeeper
    
      kafka:
        image: confluentinc/cp-kafka:6.2.0
        container_name: kafka
        networks:
          - broker-kafka
        depends_on:
          - zookeeper
        ports:
          - "9092:9092"
        expose:
          - "9092"
        environment:
          KAFKA_BROKER_ID: 1
          KAFKA_ADVERTISED_HOST_NAME: kafka:9092
          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
          KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
          KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
          KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
          KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
          # KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
          # KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
          # KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
          KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
          # KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
          KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
        volumes:
          - ./bitnami/kafka:/bitnami/kafka
    
      kafdrop:
        image: obsidiandynamics/kafdrop
        container_name: kafdrop
        ports:
          - "9000:9000"
        expose:
          - "9000"
        networks:
          - broker-kafka
        environment:
          KAFKA_BROKERCONNECT: "PLAINTEXT://kafka:29092"
          JVM_OPTS: "-Xms16M -Xmx48M -Xss180K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify"
          SPRING_PROFILES_ACTIVE: "dev"
        depends_on:
          - kafka
          - zookeeper
    
      consumer:
        container_name: consumer
        build:
          context: ./consumer
          dockerfile: Dockerfile
        environment:
          - KAFKA_TOPIC_NAME=app
          - KAFKA_SERVER=kafka
          - KAFKA_PORT=29092
        ports:
          - 8001:8001
        restart: "always"
        depends_on:
          - zookeeper
          - kafka
          - publisher
          - kafdrop
        networks:
          - broker-kafka
    
      publisher:
        container_name: publisher
        build:
          context: ./producer
          dockerfile: Dockerfile
        environment:
          - KAFKA_TOPIC_NAME=app
          - KAFKA_SERVER=kafka
          - KAFKA_PORT=29092
        ports:
          - 8000:8000
        restart: "always"
        depends_on:
          - zookeeper
          - kafka
          - kafdrop
        networks:
          - broker-kafka
        volumes:
          - ./testproducer:/producer
    
    networks:
      broker-kafka:
        driver: bridge