Spring Boot Kafka StreamsConfig or ConsumerConfig from application.yaml not being applied

Posted 2025-01-11 10:30:47


I have a very simple Spring Boot project with a KTable, and I want to customize my configuration in application.yml, but the configuration does not seem to be applied. This is my application.yml:

spring:
  kafka:
    bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS:localhost:9092}
    streams:
      application-id: ${APPLICATION_ID:train-builder-processor}
      buffered-records-per-partition: 50

    consumer:
      auto-offset-reset: earliest
      max-poll-records: ${MAX_POLL_RECORDS:50}
      max-poll-interval-ms: ${KAFKA_CONSUMER_MAX_POLL_INTERVAL_MS:1000}
      properties:
        spring:
          json:
            trusted:
              packages:
                - com.example.kafkastream

However, when starting the application, the log outputs the following:

2022-03-03 08:20:06.992  INFO 32989 --- [           main] s.r.s.m.t.TrainBuilderApplication        : Starting TrainBuilderApplication using Java 16.0.2 on MAPFVFG90ZQQ05P with PID 32989 (/Users/xxx/dev/train-builder-processor/target/classes started by xxx in /Users/xxx/dev/train-builder-processor)
2022-03-03 08:20:06.995 DEBUG 32989 --- [           main] s.r.s.m.t.TrainBuilderApplication        : Running with Spring Boot v2.6.3, Spring v5.3.15
2022-03-03 08:20:06.995  INFO 32989 --- [           main] s.r.s.m.t.TrainBuilderApplication        : No active profile set, falling back to default profiles: default
2022-03-03 08:20:08.856  INFO 32989 --- [           main] org.apache.kafka.streams.StreamsConfig   : StreamsConfig values: 
    acceptable.recovery.lag = 10000
    application.id = test.train-builder-processor
    application.server = 
    bootstrap.servers = [localhost:9092]
    buffered.records.per.partition = 1000
... (a bunch of other configs)

ConsumerConfig:

...
    max.poll.interval.ms = 300000
    max.poll.records = 1000
...

Below is the simple application class I'm using:

@EnableKafka
@EnableKafkaStreams
@SpringBootApplication
public class TrainBuilderApplication {

    ...

    @Autowired
    private TrainIdMapper trainIdMapper;

    @Autowired
    private TrainBuilder trainBuilder;

    public static void main(String[] args) {
        SpringApplication.run(TrainBuilderApplication.class, args);
    }

    @Bean
    public KTable<String, Train> trainTable(StreamsBuilder kStreamBuilder) {
        return kStreamBuilder
                .stream(Pattern.compile(sourceTopicsPattern), Consumed.with(Serdes.String(), myJsonSerde))
                .map(trainIdMapper)
                .filter((key, value) -> key != null)
                .groupByKey(Grouped.with(Serdes.String(), mySerde))
                .aggregate(() -> null, trainBuilder, trainStore);
    }
}

The values from my application.yml seem to be ignored. What could be the cause of this? What am I missing? Thanks in advance!
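One way to narrow down where the values are getting lost (a minimal sketch, assuming the stock spring-kafka auto-configuration and its default defaultKafkaStreamsConfig bean; the class and bean names below are made up for illustration) is to inject the auto-configured KafkaStreamsConfiguration and print what it actually contains, then compare that with the "StreamsConfig values:" block in the startup log:

import java.util.Properties;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.ApplicationRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaStreamsDefaultConfiguration;
import org.springframework.kafka.config.KafkaStreamsConfiguration;

@Configuration
public class StreamsConfigDump {

    // Hypothetical debug bean: prints the properties Spring Boot hands to Kafka Streams,
    // so they can be compared with the "StreamsConfig values:" block in the startup log.
    @Bean
    ApplicationRunner dumpStreamsConfig(
            @Qualifier(KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
            KafkaStreamsConfiguration streamsConfig) {
        return args -> {
            Properties props = streamsConfig.asProperties();
            props.forEach((key, value) -> System.out.println(key + " = " + value));
        };
    }
}

With the configuration above, max.poll.records would most likely not show up here at all: Spring Boot builds this bean from the shared spring.kafka settings (such as bootstrap-servers) plus spring.kafka.streams.*, not from spring.kafka.consumer.*, which is exactly the split the answer below describes.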


Comments (1)

潜移默化 2025-01-18 10:30:47


So I figured it out with the help of "How do I properly externalize spring-boot kafka-streams configuration in a properties file?".

Apparently, the consumer and producer configs are completely separate from the streams config when using a KStream. To set specific properties for the consumer used by the Kafka Streams application, one must use the streams "additional properties", like so:

spring:
  kafka:
    bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS:localhost:9092}
    streams:
      application-id: ${APPLICATION_ID:train-builder-processor}
      cache-max-size-buffering: 1048576
      cleanup.on-shutdown: ${CLEANUP_ON_SHUTDOWN:false}
      properties:
        max:
          poll:
            records: 50

which was a bit unintuitive, but it works. Hope this can help someone in the future!
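A side note on why this layout works: keys placed under spring.kafka.streams.properties end up in the StreamsConfig, and Kafka Streams forwards recognized client settings from there to the consumers and producers it creates. Kafka Streams also supports scoping a setting to one client type with the consumer., main.consumer., restore.consumer., global.consumer. and producer. prefixes. Below is a minimal plain Kafka Streams sketch of those conventions using the StreamsConfig prefix helpers (the class name and values are only illustrative):

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsClientPrefixSketch {

    public static void main(String[] args) {
        // Illustrative values only; these mirror what ends up in StreamsConfig
        // when Spring Boot binds spring.kafka.streams.* from application.yml.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "train-builder-processor");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // Unprefixed client setting: forwarded to every consumer Kafka Streams creates.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);

        // Prefixed setting: applies only to the main (record-processing) consumer
        // and takes precedence over the unprefixed value for that consumer.
        props.put(StreamsConfig.mainConsumerPrefix(ConsumerConfig.MAX_POLL_RECORDS_CONFIG), 100);

        props.forEach((key, value) -> System.out.println(key + " = " + value));
    }
}

On the Spring Boot side, a prefixed key should be expressible the same way the answer nests max.poll.records, i.e. as main.consumer.max.poll.records under spring.kafka.streams.properties.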
