Connecting to Kafka running in Docker

Published 2025-02-11 15:25:30

I set up a single-node Kafka Docker container on my local machine as described in the Confluent documentation (steps 2-3).

In addition, I also exposed Zookeeper's port 2181 and Kafka's port 9092 so that I'd be able to connect to them from a client running on the local machine:

$ docker run -d \
    -p 2181:2181 \
    --net=confluent \
    --name=zookeeper \
    -e ZOOKEEPER_CLIENT_PORT=2181 \
    confluentinc/cp-zookeeper:4.1.0

$ docker run -d \
    --net=confluent \
    --name=kafka \
    -p 9092:9092 \
    -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
    -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
    -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
    confluentinc/cp-kafka:4.1.0

Problem: When I try to connect to Kafka from the host machine, the connection fails because it can't resolve address: kafka:9092.

Here is my Java code:

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("client.id", "KafkaExampleProducer");
props.put("key.serializer", LongSerializer.class.getName());
props.put("value.serializer", StringSerializer.class.getName());
KafkaProducer<Long, String> producer = new KafkaProducer<>(props);
ProducerRecord<Long, String> record = new ProducerRecord<>("foo", 1L, "Test 1");
producer.send(record).get();
producer.flush();

The exception:

java.io.IOException: Can't resolve address: kafka:9092
    at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235) ~[kafka-clients-2.0.0.jar:na]
    at org.apache.kafka.common.network.Selector.connect(Selector.java:214) ~[kafka-clients-2.0.0.jar:na]
    at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:864) [kafka-clients-2.0.0.jar:na]
    at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:265) [kafka-clients-2.0.0.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:266) [kafka-clients-2.0.0.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238) [kafka-clients-2.0.0.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:176) [kafka-clients-2.0.0.jar:na]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
Caused by: java.nio.channels.UnresolvedAddressException: null
    at sun.nio.ch.Net.checkAddress(Net.java:101) ~[na:1.8.0_144]
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622) ~[na:1.8.0_144]
    at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233) ~[kafka-clients-2.0.0.jar:na]
    ... 7 common frames omitted

Question: How do I connect to Kafka running in Docker? My code runs on the host machine, not in Docker.

Note: I know that I could theoretically play around with DNS setup and /etc/hosts, but that is a workaround - it shouldn't be necessary.

There is also a similar question here; however, it is based on the ches/kafka image. I use a confluentinc-based image, which is not the same.

6 Answers

丢了幸福的猪 2025-02-18 15:25:30

tl;dr - A simple port forward from the container to the host will not work... Hosts files (e.g. /etc/hosts on *NIX systems) should not be modified to work around Kafka networking, as this solution is not portable.

1) What exact IP/hostname + port do you want to connect to? Make sure that value is set as advertised.listeners (not advertised.host.name and advertised.port, as these are deprecated) on the broker. If you see an error such as Connection to node -1 (localhost/127.0.0.1:9092), then that means your app container tries to connect to itself. Is your app container also running a Kafka broker process? Probably not.

2) Make sure that the server(s) listed as part of bootstrap.servers are actually resolvable. E.g., ping an IP/hostname, or use netcat to check ports... If your clients are in a container, you need to do this from the container, not (only) your host. Use docker exec to get to its shell (assuming the container isn't immediately crashing).

3) If running a process from the host rather than another container, then to verify the ports are mapped correctly on the host, ensure that docker ps shows the kafka container mapped as 0.0.0.0:<host_port> -> <advertised_listener_port>/tcp. The ports must match if trying to run a client from outside the Docker network. You do not need port forwarding between two containers; use links / Docker networks.
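As a concrete checklist, the three steps above can be sketched from the host like this (the container name kafka and network confluent are taken from the question; nc and the busybox image are assumed to be available):

```shell
# 1) What does the broker actually advertise?
docker exec kafka printenv KAFKA_ADVERTISED_LISTENERS

# 2) Is the bootstrap address resolvable and the port open?
#    From the host:
nc -vz localhost 9092
#    From a container on the same Docker network:
docker run --rm --net=confluent busybox nc -vz kafka 9092

# 3) Is the port mapped correctly on the host?
docker ps --filter name=kafka --format '{{.Names}}: {{.Ports}}'
# expect something like: kafka: 0.0.0.0:9092->9092/tcp
```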


The answer below uses confluentinc Docker images to address the question that was asked, not wurstmeister/kafka. If you have the KAFKA_ADVERTISED_HOST_NAME variable set, remove it (it's a deprecated property). The same variables apply to the apache/kafka image.

The following sections try to aggregate all the details needed to use another image. For other commonly used Kafka images, it's all the same Apache Kafka running in a container; you just depend on how it is configured, and on which variables make it so.

apache/kafka

There's now an official image! Find links below for example Compose files.

wurstmeister/kafka

As of Oct 2023, this no longer exists in DockerHub. Wasn't maintained past 2022, anyway.

Refer to their README section on listener configuration, and also read their Connectivity wiki.

bitnami/kafka

If you want a small container, try these. The images are much smaller than the Confluent ones and much better maintained than wurstmeister. Refer to their README for listener configuration.

debezium/kafka

Docs on it are mentioned here.

Note: advertised host and port settings are deprecated. Advertised listeners covers both. Similar to the Confluent containers, Debezium can use KAFKA_ prefixed broker settings to update its properties.

Others

  • ubuntu/kafka requires you to add --override advertised.listeners=kafka:9092 via Docker image args... I find that less portable than environment variables, so it is not recommended
  • spotify/kafka is deprecated and outdated.
  • fast-data-dev or lensesio/box are great for an all-in-one solution, with Schema Registry, Kafka Connect, etc., but are bloated if you only want Kafka. Plus, it's a Docker anti-pattern to run many services in one container
  • Your own Dockerfile - why? Is something incomplete with the others? Start with a pull request rather than starting from scratch.

For supplemental reading, a fully-functional docker-compose, and network diagrams, see this blog by @rmoff

Answer

The Confluent quickstart (Docker) document assumes all produce and consume requests will be within the Docker network.

You could fix the problem of connecting to kafka:9092 by running your Kafka client code within its own container as that uses the Docker network bridge, but otherwise you'll need to add some more environment variables for exposing the container externally, while still having it work within the Docker network.

First, add a protocol mapping of PLAINTEXT_HOST:PLAINTEXT, which maps the listener name to a Kafka protocol.

Key: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
Value: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT

Then setup two advertised listeners on different ports. (kafka here refers to the docker container name; it might also be named broker, so double check your service + hostnames).

Key: KAFKA_ADVERTISED_LISTENERS
Value: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092

Notice that the protocols here match the left-side values of the protocol-mapping setting above.

When running the container, add -p 29092:29092 for the host port mapping of the advertised PLAINTEXT_HOST listener.
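Putting those settings together, a full docker run for the question's image could look like this (a sketch; the container names kafka/zookeeper and the confluent network are taken from the question):

```shell
docker run -d \
    --net=confluent \
    --name=kafka \
    -p 29092:29092 \
    -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
    -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT \
    -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 \
    -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
    confluentinc/cp-kafka:4.1.0
```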


So... (with the above settings)

If something still doesn't work, KAFKA_LISTENERS can be set to include <PROTOCOL>://0.0.0.0:<PORT> where both options match the advertised setting and Docker-forwarded port

Client on same machine, not in a container

Advertising localhost and the associated port will let you connect outside of the container, as you'd expect.

In other words, when running any Kafka Client outside the Docker network (including CLI tools you might have installed locally), use localhost:29092 for bootstrap servers (requires Docker port forwarding)
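For example, with locally installed Kafka CLI tools, a quick smoke test from the host could be (flag names assume a recent Kafka distribution; older producer versions use --broker-list instead of --bootstrap-server):

```shell
# produce one message from the host
echo 'Test 1' | kafka-console-producer --bootstrap-server localhost:29092 --topic foo

# ...and read it back
kafka-console-consumer --bootstrap-server localhost:29092 --topic foo \
    --from-beginning --max-messages 1
```

The original Java snippet works the same way once bootstrap.servers is changed to localhost:29092.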

Client on another machine

If trying to connect from an external server, you'll need to advertise the external hostname/ip (e.g. 192.168.x.y) of the host as well as/in place of localhost.
Simply advertising localhost with a port forward will not work, because the Kafka protocol will still advertise the listeners you've configured.

This setup requires Docker port forwarding and router port forwarding (and firewall / security group changes) if not in the same local network, for example, your container is running in the cloud and you want to interact with it from your local machine.

Client (or another broker) in a container, on the same host

This is the least error-prone configuration; you can use DNS service names directly.

When running an app in the Docker network, use Docker service names such as kafka:9092 (see advertised PLAINTEXT listener config above) for bootstrap servers, just like any other Docker service communication (doesn't require any port forwarding)


If you use separate docker run commands or Compose files, you need to define a shared network manually, using the Compose networks section or docker network create.
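A minimal sketch of the manual variant (the network name app-net is made up for illustration):

```shell
# same effect as a Compose networks: section
docker network create app-net

# start the broker on that network (environment flags as shown earlier)
docker run -d --net=app-net --name=kafka ... confluentinc/cp-kafka:4.1.0

# a client container on the same network can then reach kafka:9092 by name
docker run --rm --net=app-net my-client-image
```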


See the example Compose file for the full Confluent stack using KRaft, or a more minimal one (with Zookeeper), for a single broker.

For the apache/kafka image, there are example files in the Kafka GitHub repo.

If using multiple brokers, then they need to use unique hostnames + advertised listeners. See example.
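For instance, a two-broker Compose fragment might distinguish them like this (the service names kafka1/kafka2 and the host ports are illustrative):

```yaml
kafka1:
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:9092,PLAINTEXT_HOST://localhost:29092
kafka2:
  environment:
    KAFKA_BROKER_ID: 2
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka2:9092,PLAINTEXT_HOST://localhost:39092
```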

Related question

Connect to Kafka on host from Docker (ksqlDB)

Appendix

For anyone interested in Kubernetes deployments:

他是夢罘是命 2025-02-18 15:25:30

When you first connect to a Kafka node, it will give you back the full list of Kafka nodes and the URLs to connect to. Then your application will try to connect to each broker directly.

The issue is always: what URL will Kafka give you? That's why there is KAFKA_ADVERTISED_LISTENERS, which Kafka uses to tell the world how it can be accessed.

Now for your use-case, there are multiple small things to think about:

Let's say you set plaintext://kafka:9092

  • This is OK if you have an application in your Docker Compose setup that uses Kafka. This application will get from Kafka the URL with kafka, which is resolvable through the Docker network.
  • If you try to connect from your main system, or from another container which is not in the same Docker network, this will fail, as the kafka name cannot be resolved.

==> To fix this, you need a specific DNS server such as a service-discovery one, but that is big trouble for small stuff. Alternatively, you manually map the kafka name to the container IP in each /etc/hosts

If you set plaintext://localhost:9092

  • This will be OK on your system if you have a port mapping (-p 9092:9092 when launching Kafka)
  • This will fail if you test from an application in a container (same Docker network or not), since localhost is the container itself, not the Kafka one

==> If you have this and wish to use a Kafka client in another container, one way to fix it is to share the network between both containers (same IP)

Last option: set an IP in the name: plaintext://x.y.z.a:9092 (the Kafka advertised URL cannot be 0.0.0.0, as stated in the doc https://kafka.apache.org/documentation/#brokerconfigs_advertised.listeners )

This will be OK for everybody... BUT how can you get the x.y.z.a address?

The only way is to hardcode this IP when you launch the container: docker run .... --net confluent --ip 10.x.y.z .... Note that you need to adapt the IP to a valid one in the confluent subnet.

轮廓§ 2025-02-18 15:25:30

First, Zookeeper:

  1. docker container run --name zookeeper -p 2181:2181 zookeeper

then Kafka:

  1. docker container run --name kafka -p 9092:9092 -e KAFKA_ZOOKEEPER_CONNECT=192.168.8.128:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip_address_of_your_computer_but_not_localhost!!!:9092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka

In the Kafka consumer and producer config:

@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.8.128:9092");
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return new DefaultKafkaProducerFactory<>(configProps);
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.8.128:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "group_id");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}

I run my project with these settings. Good luck.

帝王念 2025-02-18 15:25:30

This allows me to access localhost:9092 in Kafka applications on my M1 Mac:

Key: KAFKA_ADVERTISED_LISTENERS
Value: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092

plus port forwarding in the Compose file:

ports:
   - "9092:9092"

Finally, again, for my setup, I have to set the listeners key this way:

Key: KAFKA_LISTENERS
Value: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
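Collected as a Compose environment fragment (the kafka service name is assumed, matching the PLAINTEXT listener above; the protocol map comes from the main answer above):

```yaml
kafka:
  ports:
    - "9092:9092"
  environment:
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
```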
薄荷港 2025-02-18 15:25:30

The simplest way to solve this is to add a custom hostname to your broker using the -h option:

docker run -d \
    --net=confluent \
    --name=kafka \
    -h broker-1 \
    -p 9092:9092 \
    -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
    -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://broker-1:9092 \
    -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
    confluentinc/cp-kafka:4.1.0

and edit your /etc/hosts

127.0.0.1   broker-1

and use:

props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092");
没企图 2025-02-18 15:25:30
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: kafka:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

This configuration works fine.

Make sure that from inside Docker you connect with kafka:29092,

and from outside the container with localhost:9092.

Full working Docker Compose config:

    version: "3.3"

    services:
      zookeeper:
        image: confluentinc/cp-zookeeper:6.2.0
        container_name: zookeeper
        networks:
          - broker-kafka
        ports:
          - "2181:2181"
        environment:
          ZOOKEEPER_CLIENT_PORT: 2181
          ZOOKEEPER_TICK_TIME: 2000
          ALLOW_ANONYMOUS_LOGIN: yes
        volumes:
          - ./bitnami/zookeeper:/bitnami/zookeeper
    
      kafka:
        image: confluentinc/cp-kafka:6.2.0
        container_name: kafka
        networks:
          - broker-kafka
        depends_on:
          - zookeeper
        ports:
          - "9092:9092"
        expose:
          - "9092"
        environment:
          KAFKA_BROKER_ID: 1
          KAFKA_ADVERTISED_HOST_NAME: kafka:9092
          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
          KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
          KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
          KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
          KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
          # KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
          # KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
          # KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
          KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
          # KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
          KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
        volumes:
          - ./bitnami/kafka:/bitnami/kafka
    
      kafdrop:
        image: obsidiandynamics/kafdrop
        container_name: kafdrop
        ports:
          - "9000:9000"
        expose:
          - "9000"
        networks:
          - broker-kafka
        environment:
          KAFKA_BROKERCONNECT: "PLAINTEXT://kafka:29092"
          JVM_OPTS: "-Xms16M -Xmx48M -Xss180K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify"
          SPRING_PROFILES_ACTIVE: "dev"
        depends_on:
          - kafka
          - zookeeper
    
      consumer:
        container_name: consumer
        build:
          context: ./consumer
          dockerfile: Dockerfile
        environment:
          - KAFKA_TOPIC_NAME=app
          - KAFKA_SERVER=kafka
          - KAFKA_PORT=29092
        ports:
          - 8001:8001
        restart: "always"
        depends_on:
          - zookeeper
          - kafka
          - publisher
          - kafdrop
        networks:
          - broker-kafka
    
      publisher:
        container_name: publisher
        build:
          context: ./producer
          dockerfile: Dockerfile
        environment:
          - KAFKA_TOPIC_NAME=app
          - KAFKA_SERVER=kafka
          - KAFKA_PORT=29092
        ports:
          - 8000:8000
        restart: "always"
        depends_on:
          - zookeeper
          - kafka
          - kafdrop
        networks:
          - broker-kafka
        volumes:
          - ./testproducer:/producer
    
    networks:
      broker-kafka:
        driver: bridge